id
stringlengths 10
10
| title
stringlengths 7
231
| abstract
stringlengths 3
2.43k
| authors
stringlengths 5
21.5k
| published_date
stringlengths 20
20
| link
stringlengths 33
34
| markdown
stringlengths 133
1.92M
|
---|---|---|---|---|---|---|
2301.02320 | Differential embeddings into algebras of topological stable rank 1 | We identify a class of smooth Banach *-algebras that are differential
subalgebras of commutative C*-algebras whose openness of multiplication is
completely determined by the topological stable rank of the target C*-algebra.
We then show that group algebras of Abelian groups of unbounded exponent fail
to have uniformly open convolution. Finally, we completely characterise in the
complex case (uniform) openness of multiplication in algebras of continuous
functions in terms of the covering dimension. | Tomasz Kania, Natalia Maślany | 2023-01-05T22:37:48Z | http://arxiv.org/abs/2301.02320v3 | # Differential embeddings into algebras of topological stable rank 1
###### Abstract.
We identify a class of _smooth_ Banach *-algebras that are differential subalgebras of commutative C*-algebras whose openness of multiplication is completely determined by the topological stable rank of the target C*-algebra. We then show that group algebras of Abelian groups of unbounded exponent fail to have uniformly open convolution. Finally, we completely characterise in the complex case (uniform) openness of multiplication in algebras of continuous functions in terms of the covering dimension.
Key words and phrases:differential subalgebra, open multiplication, Banach algebra, ultrapower, covering dimension, norm-controlled inversion 2010 Mathematics Subject Classification: 46A30 (primary), and 46J10 (secondary) The first-named author acknowledges with thanks support received from SONATA 15 No. 2019/35/D/ST1/01734. The second-named author was supported by GACR grant GF20-22230L and received an incentive scholarship from the funds of the program Excellence Initiative - Research University at the Jagiellonian University in Krakow.
Following Nikolskii [29], for \(\delta>1\), we say that a Banach algebra \(A\) is \(\delta\)-_visible in_\(B,\) whenever
\[\psi\big{(}\delta^{-1}\big{)}=\sup\{\|a^{-1}\|_{A}\colon a\in A,\|a\|_{A}\leqslant 1,\|i(a^{-1})\|_{B}\leqslant\delta\}<\infty. \tag{1}\]
Then \(A\) admits norm-controlled inversion in \(B\) if and only if it is \(\delta\)-visible in \(B\) for all \(\delta>1\). Should that be the case, the norm-control function \(h\) can be arranged to be
\[h(\|a\|_{A},\|i(a^{-1})\|_{B})=\frac{1}{\|a\|_{A}}\psi\big{(}\|a\|_{A}\|i(a^{-1 })\|_{B}\big{)}. \tag{2}\]
For a commutative (*-)semi-simple Banach (*-)algebra \(A\) we say, for short, that \(A\)_admits norm-controlled inversion_, whenever it admits norm-controlled inversion in \(C(\Phi_{A})\), the space of continuous functions on the maximal (*-)ideal space \(\Phi_{A}\) of \(A\), when embedded by the Gelfand transform. (For a commutative (*-)semi-simple Banach (*-)algebra the Gelfand transform is injective; see also [15, Proposition 30.2(ii)].)
The Wiener (convolution) algebra \(\ell_{1}(\mathbb{Z})\) is a primary example of a commutative Banach *-algebra without the norm-controlled inversion in \(C(\mathbb{T})\), the algebra of continuous functions on the unit circle. Indeed, in [29] Nikolskii showed that for \(\delta\geqslant 2\) we have \(\psi(\delta^{-1})=\infty\), where \(\psi\) is given in (1). The same conclusion extends to convolution algebras \(\ell_{1}(G)\) for any infinite Abelian group \(G\) that lack norm-controlled inversion in \(C(\widehat{G})\), the algebra of continuous functions on the Pontryagin-dual group to \(G\), but this behaviour appears rather exceptional. On the positive side, various weighted algebras of Fourier series (see [17]) as well as algebras of Lipschitz functions on compact subsets of Euclidean spaces enjoy the norm-controlled inversion.
Norm-controlled inversion is a consequence of a smoothness of the embedding as observed by Blackadar and Cuntz [9]. More specifically, let \(i\colon A\to B\) be an injective homomorphism of Banach algebras. Then \(A\) is a _differential subalgebra_ of \(B\) whenever there exists \(D>0\) such that for all \(a,b\in A\) we have
\[\|ab\|_{A}\leqslant D(\|a\|_{A}\|i(b)\|_{B}+\|i(a)\|_{B}\|b\|_{A}). \tag{3}\]
When \(A\) and \(B\) are Banach *-algebras, we additionally require that \(i\) is *-preserving (hence it preserves the modulai of elements); we omit the symbol \(i\), when the map \(i\) is clear from the context (for example, when it is the formal inclusion of algebras). Differential subalgebras (especially of C*-algebras) have been extensively studied; see, _e.g._, [24, 21, 22, 31].
In the sequel, we shall make use of [21, Theorem 1.1(i)] that we record below:
**Lemma 1.1**.: _Differential *-subalgebras of C*-algebras have norm-controlled inversion._
Note that the condition of being a differential norm is _extremely weak_ assumption, and norms satisfying (3) meet the _weak_ form of smoothness (see [21, Theorem 1.1(v)]).
In the present paper we investigate the possible connections between smoothness of an embedding of Banach algebras and topological stable rank \(1\) (which for unital Banach algebras this is equivalent to having dense invertible group) with openness of multiplication of a given Banach algebra \(A\), _i.e._, the question of for which Banach algebras the map \(m\colon A\times A\to A\) given by \(m(a,b)=ab\)\((a,b\in A)\) is open, that is, it maps open sets to open
sets. The problem of which Banach algebras have open multiplication was systematically investigated by Draga and the first-named author in [16], where it was observed that unital Banach algebras with open multiplication have topological stable rank 1 but not _vice versa_. For example, matrix algebras \(M_{n}\) have topological stable rank 1 but multiplication therein is not open unless \(n=1\) ([7]). On the other hand, the problem of openness of convolution in \(\ell_{1}(\mathbb{Z})\) is persistently _open_.
Various function algebras have been observed to have open multiplication (even uniformly, where a map \(f\colon X\to Y\) is uniformly open whenever for every \(\varepsilon>0\) there is \(\delta>0\) such that for all \(x\in X\) one has \(f(B(x,\varepsilon))\supseteq B(f(x),\delta)\)): spaces of continuous/bounded functions: [2, 3, 4, 6, 5, 10, 25, 30] and spaces of functions of bounded variation: [11, 26]. The first main result of the paper unifies various approaches to openness of multiplication. (All unexplained terminology may be found in the subsequent section.)
**Theorem A**.: _Suppose that \(A\) is a unital symmetric dual Banach *-algebra that is a dense differential subalgebra of \(C(X)\). If \(A\) shares with \(X\) densely many points then multiplication in \(A\) is open at pairs of elements that are jointly non-degenerate. Furthermore, if \(A\) has open multiplication, then maximal ideal space of \(A\) is of dimension at most 1._
Theorem A applies, in particular, to \(A=C(X)\), which may be interpreted as complex counterpart of the main result of [8].
The proofs of the main results of [11, 26] centre around showing that the algebras of functions of \(p\)-bounded variation (for \(p=1\) and \(p\in(1,\infty)\), respectively) are approximable by jointly non-degenerate products. Our theorem appears to be the first general providing sufficient conditions for openness in a given commutative Banach *-algebra (_i.e._, a self-adjoint function algebra).
In [16, Corollary 4.13], Draga and the first-named author proved that \(\ell_{1}(\mathbb{Z})\) does not have uniformly open convolution (whether it is open or not remains an open problem). We strengthen this result by showing that having unbounded exponent (that is, the condition \(\sup_{g\in G}o(g)=\infty\), where \(o(g)\) denotes the rank of an element \(g\in G\))) is sufficient for _not_ having uniformly open convolution.
**Theorem B**.: _Let \(G\) be an Abelian group of unbounded exponent, i.e., \(\sup_{g\in G}o(g)=\infty\). Then convolution in \(\ell_{1}(G)\) is not uniformly open._
By Prufer's first theorem (see [27, p. 173]), every Abelian group of bounded exponent is isomorphic to a direct sum of a finite number of finite cyclic groups and a direct sum of possibly infinitely many copies of a fixed finite cyclic group, so if one seeks examples of group convolution algebras with uniform multiplication, the only candidates to be found are groups that are effectively direct sums of any number of copies of a fixed cyclic group.
Finally, we establish a complex counterpart of Komisarski's result [25] linking openness of multiplication in the real algebra \(C(X)\) of continuous functions on a compact space \(X\) with the covering dimension of \(X\). In the complex case \(C(X)\) has open multiplication if and only if \(X\) is zero-dimensional in which case multiplication is actually uniformly open with \(\delta(\varepsilon)=\varepsilon^{2}/4\) (\(\varepsilon>0\)). (See also [16, Proposition 4.16] for an alternative proof using direct
limits that does not depend on the scalar field; we refer to [18] for a modern exposition of dimension theory and standard facts thereof.)
**Theorem C**.: _Let \(X\) be a compact space. Then the following conditions are equivalent for the algebra \(C(X)\) of continuous complex-valued functions on \(X\):_
1. \(X\) _has open multiplication,_
2. \(X\) _has uniformly open multiplication,_
3. _the covering dimension of_ \(X\) _is at most_ \(1\)_._
_Moreover, the algebras \(C(X)\) have equi-uniformly open multiplications for all compact spaces of dimension at most \(1\)._
A necessary condition for a unital Banach algebra to have open multiplication is topological stable rank \(1\), that is, having dense group of invertibles. For a compact space \(X\) of dimension at least \(2\), this is not the case, so \(C(X)\) does not have open multiplication ([16, Proposition 4.4]). The proof of Theorem C is split into three cases.
* The first one uses a reduction to spaces being topological (planar) realisations of graphs. Here we rely on certain ideas from an unpublished manuscript of Behrends for which we have a permission to include them in the present note. We kindly acknowledge this crucial contribution from Professor Behrends establishing the case of \(X=[0,1]\).
* Then we proceed via an inverse limit argument to conclude the result for all compact metric spaces of dimension at most \(1\).
* Finally, we apply a result of Madresic [28] to conclude the general non-metrisable case from equi-uniform openness of multiplication of \(C(X)\) for all \(1\)-dimensional compact metric spaces \(X\).
## 2. Preliminaries
### Banach algebras
Compact spaces are assumed to be Hausdorff. All Banach algebras considered in this paper are over \(\mathbb{C}\), the field of complex scalars, unless otherwise specified. We denote by \(\mathbb{T}\) the unit circle in the complex plane.
A Banach algebra \(A\) has _topological stable rank 1_, whenever invertible elements are dense in \(A\) if \(A\) is unital or in the unitisation of \(A\) otherwise. Algebras whose elements have zero-dimensional spectra have topological stable rank \(1\) and includeiduals of \(C(X)\) for a compact space \(X\), the algebra of functions of bounded variation, or the algebra of compact operators on a Banach space; we refer to [16, Section 2] for more details.
#### 2.1.1. Arens regularity, dual Banach algebras
As observed by Arens [1], the biudal of a Banach algebra may be naturally endowed with two, rather than single one, multiplications (the left and right Arens products, denoted \(\Box\), \(\diamond\), respectively). Even though these multiplications may be explicitly defined, the following 'computation' rule is perhaps easier to comprehend: for \(f,g\in A^{**}\), where \(A\) is a Banach algebra, by Goldstine's theorem, one may choose bounded nets \((f_{j})\), \((g_{i})\) from A that are weak* convergent to \(f\) and \(g\), respectively. Then
* \(f\mathbin{\Box}g=\lim_{j}\lim_{i}f_{j}g_{i}\),
* \(f\diamond g=\lim_{i}\lim_{j}f_{j}g_{i}\)
are well-defined and do not depend on the choice of the approximating nets. A Banach algebra is _Arens-regular_ when the two multiplications coincide. For a locally compact space \(X\), the algebra \(C_{0}(X)\) is Arens-regular, but a group \(G\), the group algebra \(\ell_{1}(G)\) (see Section 2.5) is Arens-regular if and only if \(G\) is finite ([33]).
A _dual Banach algebra_ is a Banach algebra \(A\) that is a dual space to some Banach space \(E\) whose multiplication is separately \(\sigma(A,E)\)-continuous. Notable examples of dual Banach algebras include von Neumann algebras, Banach algebras that are reflexive as Banach spaces, or biduals of Arens-regular Banach algebras; see [13, Section 5] for more details.
Suppose that \(A\) is a dual Banach algebra and let \(i\colon A\to C(X)\) be an injective homomorphism for some compact space \(X\). We say that \(A\)_shares with \(X\) densely points_ whenever there exists a dense set \(Q\subset X\) such that \(i^{*}(\delta_{x})\in E\) (\(x\in Q\)), _i.e._, the functionals \(i^{*}(\delta_{x})\) (\(x\in Q\)) are \(\sigma(A,E)\)-continuous (here \(\delta_{x}\in C(X)^{*}\) is the Dirac delta evaluation functional at \(x\in X\)). Since for an Arens-regular Banach algebra the bidual endowed with the unique Arens product is a dual Banach algebra, we may record the following lemma.
**Lemma 2.1**.: _Let \(A\) be a unital Arens-regular Banach algebra and let \(i\colon A\to C(X)\) be an injective algebra homomorphism with dense range. Then \(A^{**}\) shares with the maximal ideal space of \(C(X)^{**}\) densely many points._
Proof.: Since \(A\) is Arens-regular, \(A^{**}\) is naturally a dual Banach algebra with the unique Arens product. Since \(i^{***}\) extends \(i^{*}\), for every \(x\in X\), we have \(i^{***}(\delta_{x})=i^{*}(\delta_{x})\in A^{*}\), so that \(i^{*}(\delta_{x})\) is \(\sigma(A^{**},A^{*})\)-continuous. It remains to invoke the fact that \(X\) can be identified with an open dense subset of the maximal ideal space of \(C(X)^{**}\) via \(x\mapsto(\delta_{x})^{**}=\delta_{\iota(x)}\) for some point \(\iota(x)\) in the maximal ideal space of \(C(X)^{**}\) (see the discussion after [12, Definition 3.3]); the map \(\iota\) is necessarily discontinuous unless \(X\) is finite).
Let us record two permanence properties of differential embeddings; even though we shall not utilise (ii) in the present paper, we keep it for possible future reference.
**Lemma 2.2**.: _Let \(A\) be a Banach algebra continuously embedded into another Banach algebra \(B\) by a homomorphism \(i\colon A\to B\) as a differential subalgebra._
* _Consider both in_ \(A^{**}\) _and_ \(B^{**}\) _either left or right Arens products. Then in either setting_ \(i^{**}\colon A^{**}\to B^{**}\) _is a differential embedding._
* _Let_ \(\mathcal{U}\) _be an ultrafilter. Then_ \(i^{\mathcal{U}}\colon A^{\mathcal{U}}\to B^{\mathcal{U}}\) _is a differential embedding between the respective ultrapowers._
Proof.: _Case 1._ Let \(\{a_{\alpha}\},\{b_{\beta}\}\subset A\) be bounded nets \(\sigma(A^{**},A^{*})\)-convergent to \(a,b\in A^{**}\) respectively, satisfying for any \(\alpha,\beta\) conditions \(\|a_{\alpha}\|_{A}\leqslant\|a\|_{A^{**}}\) and \(\|b_{\beta}\|_{B}\leqslant\|b\|_{B^{**}}\) (it is
possible by the Krein-Smulyan theorem). Then
\[\|ab\|_{A^{**}} \leqslant\liminf_{\alpha,\beta}\|a_{\alpha}b_{\beta}\|\] \[\leqslant D\cdot\liminf_{\alpha,\beta}\ \big{(}\|a_{\alpha}\|_{A}\|i(b_{\beta})\|_{B}+\|i(a_{\alpha})\|_{B}\|b_{ \beta}\|_{A}\big{)}\] \[\leqslant D\big{(}\|a\|_{A^{**}}\|i^{**}(b)\|_{B^{**}}+\|i^{**}(a) \|_{B^{**}}\|b\|_{A^{**}}\big{)}.\]
_Case 2._ Let \(a=[(a_{\gamma})_{\gamma\in\Gamma}]\), \(b=[(b_{\gamma})_{\gamma\in\Gamma}]\in A^{\mathbb{U}}.\) Then
\[\|ab\|_{A^{\mathbb{U}}} =\lim_{\gamma,\mathbb{U}}\big{\|}a_{\gamma}b_{\gamma}\big{\|}_{A}\] \[\leqslant\lim_{\gamma,\mathbb{U}}D\Big{(}\big{\|}a_{\gamma}\big{\|} _{A}\big{\|}i(b_{\gamma})\big{\|}_{B}+\big{\|}i(a_{\gamma})\big{\|}_{B}\big{\|} b_{\gamma}\big{\|}_{A}\Big{)}\] \[\leqslant D\Big{(}\lim_{\gamma,\mathbb{U}}\big{\|}a_{\gamma} \big{\|}_{A}\cdot\lim_{\gamma,\mathbb{U}}\|i(b_{\gamma})\big{\|}_{B}+\lim_{ \gamma,\mathbb{U}}\big{\|}i(a_{\gamma})\big{\|}_{B}\cdot\lim_{\gamma,\mathbb{U }}\big{\|}b_{\gamma}\big{\|}_{A}\Big{)}\] \[=D\Big{(}\big{\|}a\big{\|}_{A^{\mathbb{U}}}\big{\|}i^{\mathbb{U}} \big{(}b\big{)}\big{\|}_{B^{\mathbb{U}}}+\big{\|}i^{\mathbb{U}}\big{(}a\big{)} \big{\|}_{B^{\mathbb{U}}}\big{\|}b\big{\|}_{A^{\mathbb{U}}}\Big{)}.\]
### Banach *-algebras
Let \(A\) be a unital Banach *-algebra. In this setting, for \(a\in A\) we interpret \(|a|^{2}\) as \(a^{*}a\). We say that elements \(a,b\) in \(A\) are _jointly non-degenerate_, when \(|a|^{2}+|b|^{2}\) is invertible. When \(X\) is a compact space and \(a,b\in C(X)\), we sometimes say that elements with \(|a|^{2}+|b|^{2}\geqslant\eta\) (for some \(\eta>0\)) are _jointly \(\eta\)-non-degenerate_. Let us introduce the following definition.
**Definition 2.3**.: A unital Banach *-algebra \(A\) is _approximable by jointly non-degenerate products_ whenever for all \(a,b\in A\) and \(\varepsilon>0\) there exist jointly non-degenerate elements \(a^{\prime},b^{\prime}\in A\) with \(\|a-a^{\prime}\|,\|b-b^{\prime}\|<\varepsilon\) such that \(ab=a^{\prime}b^{\prime}\).
_Remark 1_.: It is readily seen that \(C(X)\) for a zero-dimensional compact space \(X\) has this property. Indeed, let \(f,g\in C(X)\) and \(\varepsilon>0\). Consider the sets
* \(D_{1}=\{x\in X\colon|f(x)|\geqslant\varepsilon/3\}\)
* \(D_{2}=\{x\in X\colon|g(x)|\geqslant\varepsilon/3\}\)
* \(D_{3}=\{x\in X\colon|f(x)|,|g(x)|\leqslant\varepsilon/2\}\).
Certainly, the sets \(D_{1},D_{2},D_{3}\) are closed and cover the space \(X\). As \(X\) is zero-dimensional, there exist pairwise clopen sets \(D_{1}^{\prime}\subseteq D_{1}\), \(D_{2}^{\prime}\subseteq D_{2}\), and \(D_{3}^{\prime}\subseteq D_{3}\) that still cover \(X\), _i.e._, \(X=D_{1}^{\prime}\cup D_{2}^{\prime}\cup D_{3}^{\prime}\). Let \(f^{\prime}=f\cdot\mathds{1}_{D_{1}^{\prime}\cup D_{2}^{\prime}}+\frac{ \varepsilon}{2}\mathds{1}_{D_{3}^{\prime}}\) and \(g^{\prime}=g\cdot\mathds{1}_{D_{1}^{\prime}\cup D_{2}^{\prime}}+\frac{2}{ \varepsilon}fg\mathds{1}_{D_{3}^{\prime}}\). Then \(f^{\prime}\), \(g^{\prime}\) are the sought jointly non-degenerate approximants. On the other hand, as \(C(X)\) for \(X=[0,1]\) and compact spaces alike readily not approximable by jointly non-degenerate issues due to connectedness.
Kowalczyk and Turowska [26] showed that the algebra \(BV[0,1]\) of functions of bounded variation on the unit interval is approximable by jointly non-degenerate products and Canarias, Karlovich, and Shargorodsky [11] extended this result to algebras of bounded \(p\)-variation on the interval as well as certain further function algebras.
### Ultraproducts
Ultraproducts of mathematical structures usually come in two main guises: the algebraic one (first-order) and the analytic one (second-order). Let us briefly summarise the link between these in the context of groups and their group algebras. This has been essentially developed by Daws in [14, Section 5.4] and further explained in [16, Section 2.3.2].
Let \((S_{\gamma})_{\gamma\in\Gamma}\) be an infinite collection of semigroups and let \(\mathcal{U}\) be an ultrafilter on \(\Gamma\). The (algebraic) _ultraproduct_\(\prod_{\gamma\in\Gamma}^{\mathcal{U}}S_{\gamma}\) with respect to \(\mathcal{U}\) (denoted \(S^{\mathcal{U}}\) when \(S_{\gamma}=S\) for all \(\gamma\in\Gamma\) and then termed the _ultrapower_ of \(S\) with respect to \(\mathcal{U}\)) is the quotient of the direct product \(\prod_{\gamma\in\Gamma}S_{\gamma}\) by the congruence
\[(g_{\gamma})_{\gamma\in\Gamma}\sim(h_{\gamma})_{\gamma\in\Gamma}\quad\text{ if and only if }\quad\{\gamma\in\Gamma\colon g_{\gamma}=h_{\gamma}\}\in\mathcal{U}.\]
Then the just-defined ultraproduct is naturally a semigroup/group/Abelian group if \(S_{\gamma}\) are semigroups/groups/Abelian groups for \(\gamma\in\Gamma\).
Let \((A_{\gamma})_{\gamma\in\Gamma}\) be an infinite collection of Banach spaces. Then the \(\ell_{\infty}(\Gamma)\)-direct sum \(A=(\bigoplus_{\gamma\in\Gamma}A_{\gamma})_{\ell_{\infty}(\Gamma)}\), that is, the space of all tuples \((x_{\gamma})_{\gamma\in\Gamma}\) with \(x_{\gamma}\in A_{\gamma}\) (\(\gamma\in\Gamma\)) and \(\sup_{\gamma\in\Gamma}\|x_{\gamma}\|<\infty\) is a Banach space under the supremum norm. Moreover, the subspace \(J=c_{0}^{\mathcal{U}}(A_{\gamma})_{\gamma\in\Gamma}\) comprising all tuples \((x_{\gamma})_{\gamma\in\Gamma}\) such that \(\lim_{\gamma\to\mathcal{U}}\|x_{\gamma}\|=0\) is closed. The (Banach-space) _ultraproduct_\(\prod_{\gamma\in\Gamma}^{\mathcal{U}}A_{\gamma}\) of \((A_{\gamma})_{\gamma\in\Gamma}\) with respect to \(\mathcal{U}\) is the quotient space \(A/J\). If \(A_{\gamma}\) (\(\gamma\in\Gamma\)) are Banach algebras, then naturally so is \(A\) and \(J\) is then a closed ideal therein. Consequently, the ultraproduct is a Banach algebra. Let us record formally a link between these two constructions.
**Lemma 2.4**.: _Let \((S_{\gamma})_{\gamma\in\Gamma}\) be an infinite collection of semigroups and let \(\mathcal{U}\) be a countably incomplete ultrafilter on \(\Gamma\). Then there exists a unique contractive homomorphism_
\[\iota\colon\prod_{\gamma\in\Gamma}^{\mathcal{U}}\ell_{1}(S_{\gamma})\to\ell_{ 1}\Bigl{(}\prod_{\gamma\in\Gamma}^{\mathcal{U}}S_{\gamma}\Bigr{)} \tag{4}\]
_that satisfies_
\[\iota\Bigl{(}\bigl{[}(e_{g_{\gamma}})_{\gamma\in\Gamma}\bigr{]}\Bigr{)}=e_{[( g_{\gamma})_{\gamma\in\Gamma}]}\qquad\Bigl{(}\bigl{[}(e_{g_{\gamma}})_{\gamma \in\Gamma}\bigr{]}\in\prod_{\gamma\in\Gamma}^{\mathcal{U}}\ell_{1}(S_{\gamma} )\Bigr{)}.\]
### Abelian groups
Let \(G\) be a group. For \(g\in G\) we denote by \(o(g)\) the order of the element \(g\). For a (locally compact) Abelian group \(G\) we denote by \(\widehat{G}\) the Pontryagin dual group of \(G\); for details and basic properties concerning this duality we refer to [23, Chapter 6].
If \(G\) is an (Abelian) divisible group, that is, for any \(g\in G\) and \(n\in\mathbb{N}\) there is \(h\in G\) such that \(g=nh\), then \(G\) is an injective object in the category Abelian groups, which means that for any Abelian groups \(H_{1}\subset H_{2}\), every homomorphism \(\varphi\colon H_{1}\to G\) extends to a homomorphism \(\overline{\varphi}\colon H_{2}\to G\). Direct sums of arbitrary many copies of \(\mathbb{Q}\), the additive group of rationals, are divisible.
Let us record for the future reference the following observation, likely well known to algebraically-oriented model theorists.
**Lemma 2.5**.: _Suppose that \(G\) is an Abelian group with \(\sup_{g\in G}o(g)=\infty\). Then \(\mathbb{Z}^{(\mathbb{R})}\) embeds into some ultrapower of \(G\) with respect to an ultrafilter on \(\mathbb{N}\)._
Proof.: Let \(\mathcal{U}\) be a non-principal ultrafilter on \(\mathbb{N}\) and let \((g_{n})_{n=1}^{\infty}\) be a sequence in \(G\) such that \(\sup_{n}o(g_{n})=\infty\). Then \(g=[(g_{n})_{n=1}^{\infty}]\) has infinite order in \(H=G^{\mathcal{U}}\). Let \(\mathcal{A}\) be an almost disjoint family of infinite subsets of \(\mathbb{N}\) that has cardinality continuum. Then all but at most one elements \(\mathcal{A}\) are not in \(\mathcal{U}\) (as \(\mathcal{U}\) is non-principal and closed under finite intersections), so let us assume that \(\mathcal{A}\subset\mathcal{U}^{\prime}\). For each \(A\in\mathcal{A}\) we set
\[g_{A}(i)=\left\{\begin{array}{ll}g,&i\notin A,\\ 0,&i\in A.\end{array}\right.\quad(i\in\mathbb{N}).\]
Then \(h_{A}=[(g_{A}(i))_{i=1}^{\infty}]\in H^{\mathcal{U}}\) and \(o(h_{A})=\infty\) (\(A\in\mathcal{A}\)). Moreover, \(\{h_{A}\colon A\in\mathcal{A}\}\) is a \(\mathbb{Z}\)-linearly independent set of cardinality continuum. As such, the subgroup it generates is isomorphic to \(\mathbb{Z}^{(\mathbb{R})}\). It remains to notice that canonically \((G^{\mathcal{U}})^{\mathcal{U}}\cong G^{\mathcal{U}\otimes\mathcal{U}}\), as required.
### Semigroup algebras
Let \(S\) be a semigroup written multiplicatively. In the Banach space \(\ell_{1}(S)\) one can define a convolution product by
\[x*y=\sum_{t\in S}\Big{(}\sum_{r\text{-}s=t}x_{r}y_{s}\Big{)}e_{t}\quad(x=(x_{s })_{s\in S},\;y=(y_{s})_{s\in S}\in\ell_{1}(S)),\]
where \((e_{s})_{s\in S}\) is the canonical unit vector basis of \(\ell_{1}(S)\), together with \(\ell_{1}(S)\) becomes a Banach algebra. For the additive semigroup of natural numbers, the convolution in \(\ell_{1}(\mathbb{N})\) renders the familiar Cauchy product.
Suppose that \(T\subseteq S\) is a subsemigroup. Then \(\ell_{1}(T)\) is naturally a closed subspace of \(\ell_{1}(S)\), which is moreover a closed subalgebra. Every surjective semigroup homomorphism \(\vartheta\colon T\to S\) implements a surjective homomorphism \(\iota_{\vartheta}\colon\ell_{1}(T)\to\ell_{1}(S)\) on the Banach-algebra level by the action
\[\iota_{\vartheta}e_{t}=e_{\vartheta(t)}\quad(t\in T). \tag{5}\]
When \(G\) is an Abelian (discrete) group, then the (compact) dual group \(\widehat{G}\) is the maximal ideal space of the convolution algebra \(\ell_{1}(G)\). More information on semigroup algebras may be found in [12, Chapter 4].
## 3. Proofs of Theorems A and B
The crux of Theorem A lies at the subsequent lemma whose proof shares with the proofs of the main results of [26] and [11] the idea for the construction of the approximation scheme for sought elements; the result itself is however more general and so are the techniques applied along the way.
**Lemma 3.1**.: _Suppose that \(A\) is a unital symmetric Banach *-algebra such that there exists an injective *-homomorphism \(i\colon A\to C(X)\) for some compact space \(X\) such that \(A\) has norm-controlled inversion in \(C(X)\). Let us consider either case:_
* \(A=C(X)\)_,_
* \(A=E^{*}\) _is a dual Banach algebra that shares with_ \(X\) _densely many points._
_Then multiplication in \(A\) is open at all pairs of jointly non-degenerate elements._
_Furthermore, suppose that \(i\) has dense range in \(C(X)\). If \(A\) has open multiplication, then the maximal ideal space of \(A\) is of dimension at most 1._
Proof.: Suppose that \(A\) has norm-controlled inversion implemented by a *-homomorphism \(i\colon A\to C(X)\). We have
\[\|i(f)\|_{\infty}=\sup_{x\in X}|(i(f))(x)|\leqslant C\|f\|_{A}\quad(f\in A), \tag{6}\]
where \(C=\|i\|\geqslant 1\). Suppose that \(F,G\in A\) are jointly non-degenerate (in particular, \(|F|+|G|\) is invertible in \(C(X)\) being nowhere zero). Fix \(\varepsilon\in(0,1)\) and let
\[\gamma:=\min\bigg{\{}1,\frac{1}{2}\inf_{x\in X}\big{(}(|i(F))(x)|+|(i(G))(x)| \big{)}\bigg{\}} \tag{7}\]
Set
\[K:=2\cdot\max\big{\{}\|F\|_{A},\|G\|_{A},1\big{\}}, \tag{8}\]
\[\widehat{T}:=\frac{2C}{\gamma^{2}}\cdot\psi\bigg{(}\frac{4K^{2}}{\gamma^{2}} \bigg{)}>0, \tag{9}\]
where the function \(\psi\) satisfies (1). Moreover, let \(T:=\max\{\widehat{T},1\}.\) Pick an arbitrary element \(H\in A\) so that
\[\|H\|_{A}<\frac{\varepsilon\cdot\gamma}{CK^{3}T^{2}} \tag{10}\]
and consider
\[F_{0}:=F,\quad G_{0}:=G,\quad H_{0}:=H \tag{11}\]
We then define recursively the sequences \((F_{n})_{n=0}^{\infty}\), \((G_{n})_{n=0}^{\infty}\), and \((H_{n})_{n=0}^{\infty}\) by
\[F_{n+1}:=F_{n}+\frac{H_{n}\overline{G_{n}}}{|F_{n}|^{2}+|G_{n}|^{2}},G_{n+1}:= G_{n}+\frac{H_{n}\overline{F_{n}}}{|F_{n}|^{2}+|G_{n}|^{2}},H_{n+1}:=-\frac{H_{n}^{2} \overline{F_{n}G_{n}}}{(|F_{n}|^{2}+|G_{n}|^{2})^{2}}. \tag{12}\]
We _claim_ that
1. \[F_{n}G_{n}+H_{n}=FG+H\quad(n=0,1,2,\ldots),\]
2. \[\|F_{n}\|_{A},\|G_{n}\|_{A}\leqslant\tfrac{1}{2}K+1-2^{-n}<K,\]
3. \[\inf_{x\in X}\big{(}|(i(F_{n}))(x)|+|(i(G_{n}))(x)|\big{)}\geqslant\gamma+ \gamma\cdot 2^{-n}>0,\]
4. \[\|H_{n}\|_{A}\leqslant\frac{1}{2^{n}}\cdot\frac{\varepsilon\cdot\gamma}{CK^{3 }T^{2}}.\]
Note that (iii) implies that sequences (12) are well defined. We will prove these claims by induction.
It follows from (11) that \(F_{0}G_{0}+H_{0}=FG+H.\) We obtain from (7)-(11) that
* \(\|F_{0}\|_{A}=\|F\|_{A}\leqslant K/2\),
* \(\|G_{0}\|_{A}=\|G\|_{A}\leqslant K/2\),
* \(\|H_{0}\|_{A}=\|H\|_{A}<\frac{\varepsilon\cdot\gamma}{CK^{3}T^{2}}\),
* \(\inf_{x\in X}\left(|F_{0}(x)|+|G_{0}(x)|\right)=\inf_{x\in X}\left(|F(x)|+|G(x)| \right)\geqslant 2\gamma>0\).
That is, (i)-(iv) are satisfied for \(n=0\).
Now we assume that (i)-(iv) are fulfilled for some \(n=0,1,2,\ldots\) Consequently, sequences (12) are well defined. Then, taking into account (8), we see that \(K/2\geqslant 1\) and
\[F_{n}G_{n}+H_{n}=FG+H, \tag{13}\]
\[\|F_{n}\|_{A}\leqslant\frac{K}{2}+1-2^{-n}<K, \tag{14}\]
\[\|G_{n}\|_{A}\leqslant\frac{K}{2}+1-2^{-n}<K, \tag{15}\]
\[\inf_{x\in X}\left(|(i(F_{n}))(x)|+|(i(G_{n}))(x)|\right)\geqslant\gamma+ \gamma\cdot 2^{-n}>\gamma, \tag{16}\]
\[\|H_{n}\|_{A}\leqslant\varepsilon\cdot 2^{-n}\cdot\frac{\gamma}{CK^{3}T^{2}}. \tag{17}\]
Let us show that (i)-(iv) are fulfilled for \(n+1\).
For (i), it follows from (12)-(13) that
\[F_{n+1}G_{n+1}+H_{n+1} =\left(F_{n}+\frac{H_{n}\cdot\overline{G_{n}}}{|F_{n}|^{2}+|G_{n }|^{2}}\right)\left(G_{n}+\frac{H_{n}\cdot\overline{F_{n}}}{|F_{n}|^{2}+|G_{n }|^{2}}\right)-\frac{H_{n}^{2}\cdot\overline{F_{n}G_{n}}}{(|F_{n}|^{2}+|G_{n }|^{2})^{2}}\] \[=F_{n}G_{n}+H_{n}\frac{F_{n}\overline{F_{n}}+G_{n}\overline{G_{n} }}{|F_{n}|^{2}+|G_{n}|^{2}}+H_{n}^{2}\frac{\overline{F_{n}G_{n}}}{(|F_{n}|^{2} +|G_{n}|^{2})^{2}}-H_{n}^{2}\frac{\overline{F_{n}G_{n}}}{(|F_{n}|^{2}+|G_{n}|^ {2})^{2}}\] \[=F_{n}G_{n}+H_{n}=FG+H.\]
Hence, (i) is satisfied for \(n+1\).
As for (ii), using (14)-(15) we conclude that
\[\begin{array}{rcl}\||F_{n}|^{2}+|G_{n}|^{2}\|_{A}&\leqslant&\|F_{n}\cdot \overline{F_{n}}\|_{A}+\|G_{n}\cdot\overline{G_{n}}\|_{A}\\ &\leqslant&\|F_{n}\|_{A}\|\overline{F_{n}}\|_{A}+\|G_{n}\|_{A}\|\overline{G_{n }}\|_{A}\\ &=&\|F_{n}\|_{A}^{2}+\|G_{n}\|_{A}^{2}\\ &\leqslant&2K^{2}.\end{array} \tag{18}\]
It follows from (16) that
\[\begin{array}{rcl}\gamma^{2}&\leqslant&\inf_{x\in X}\left(|(i(F_{n}))(x)|+|( i(G_{n}))(x)|\right)^{2}\\ &=&\inf_{x\in X}(|(i(F_{n}))(x)|^{2}+2|(i(F_{n}))(x)|\cdot|(i(G_{n}))(x)|+|(i(G_{ n}))(x)|^{2})\\ &\leqslant&2\inf_{x\in X}\left(|(i(F_{n}))(x)|^{2}+|(i(G_{n}))(x)|^{2}\right), \end{array}\]
hence
\[\sup_{x\in X}\left(|(i(F_{n}))(x)|^{2}+|(i(G_{n}))(x)|^{2}\right)\geqslant\inf_{x \in X}\left(|(i(F_{n}))(x)|^{2}+|(i(G_{n}))(x)|^{2}\right)\geqslant\frac{\gamma^ {2}}{2}>0. \tag{19}\]
By (6) and (19) we obtain
\[\||F_{n}|^{2}+|G_{n}|^{2}\|_{A}\geqslant\frac{1}{C}\cdot\frac{\gamma^{2}}{2}>0. \tag{20}\]
It then follows from (12), (14)-(15) and (19) that
\[\begin{split}\|F_{n+1}\|_{A}&\leqslant\|F_{n}\|_{A }+\|H_{n}\|_{A}\|G_{n}\|_{A}\left\|\frac{1}{|F_{n}|^{2}+|G_{n}|^{2}}\right\|_{A }\\ &\leqslant\left(\frac{K}{2}+1-2^{-n}\right)+\|H_{n}\|_{A}K\left\| \frac{1}{|F_{n}|^{2}+|G_{n}|^{2}}\right\|_{A}.\end{split} \tag{21}\]
Since \(A\) admits norm-controlled inversion in \(C(X)\), we follows from (18), (19), (20) that
\[\begin{split}\left\|\frac{1}{|F_{n}|^{2}+|G_{n}|^{2}}\right\|_{A }&\leqslant\frac{1}{\||F_{n}|^{2}+|G_{n}|^{2}\|_{A}}\cdot\psi \Big{(}\||F_{n}|^{2}+|G_{n}|^{2}\|_{A}\cdot\left\|\big{(}|F_{n}|^{2}+|G_{n}|^{ 2}\big{)}^{-1}\right\|_{\infty}\Big{)}\\ &\leqslant\frac{2C}{\gamma^{2}}\cdot\psi\bigg{(}2K^{2}\cdot \frac{2}{\gamma^{2}}\bigg{)}=\widehat{T}.\end{split} \tag{22}\]
Combining (21)-(22) with (17) and taking into account that \(\varepsilon\in(0,1)\), \(\gamma\in(0,1]\), \(K\geqslant 2\), \(C\geqslant 1\), and \(T\geqslant 1\), we obtain
\[\begin{split}\|F_{n+1}\|_{A},\|G_{n+1}\|_{A}& \leqslant\frac{K}{2}+1-2^{-n}+K\widehat{T}\cdot\varepsilon\cdot 2^{-n} \cdot\frac{\gamma}{CK^{3}T^{2}}\\ &\leqslant\frac{K}{2}+1-2^{-n}+2^{-n}\cdot\frac{1}{2}\\ &=\frac{K}{2}+1-2^{-n-1}.\end{split} \tag{23}\]
Thus, (ii) is fulfilled for \(n+1\).
In order to verify (iii), since \(\varepsilon\in(0,1)\), \(\gamma\in(0,1]\), \(K\geqslant 2\), \(C\geqslant 1\), and \(T\geqslant 1\), it follows from (12), (6), (15), (17) and (22) that for \(x\in X\) we have
\[\begin{split}|(i(F_{n}))(x)|&\leqslant|(i(F_{n+1}))(x )|+|(i(H_{n}))(x)|\frac{|(i(G_{n}))(x)|}{|(i(F_{n}))(x)|^{2}+|(i(G_{n}))(x)|^{2} }\\ &\leqslant|(i(F_{n+1}))(x)|+C\|H_{n}\|_{A}\|G_{n}\|_{A}\left\| \frac{1}{|F_{n}|^{2}+|G_{n}|^{2}}\right\|_{A}\\ &\leqslant|(i(F_{n+1}))(x)|+C\cdot\varepsilon\cdot 2^{-n}\frac{ \gamma}{CK^{3}T^{2}}\cdot K\widehat{T}\\ &<|(i(F_{n+1}))(x)|+2^{-n}\cdot\frac{\gamma}{K^{2}}\\ &<|(i(F_{n+1})(x)|+2^{-n}\cdot\frac{\gamma}{4}.\end{split}\]
Consequently,
\[\big{|}(i(F_{n+1}))(x)\big{|}>\big{|}(i(F_{n}))(x)\big{|}-2^{-n-2}\gamma\quad(x \in X). \tag{24}\]
In the same way we observe that
\[|(i(G_{n+1}))(x)|>|(i(G_{n}))(x)|-2^{-n-2}\gamma\quad(x\in X). \tag{25}\]
We conclude from (16) and (24)-(25) that
\[\inf_{x\in X}\left(|(i(F_{n+1}))(x)|+|(i(G_{n+1}))(x)|\right) \geqslant\inf_{x\in X}\left(|(i(F_{n}))(x)|+|(i(G_{n}))(x)|\right)- 2\cdot 2^{-n-2}\gamma\] \[\geqslant\gamma+\gamma\cdot 2^{-n}-\gamma\cdot 2^{-n-1}\] \[=\gamma+\gamma\cdot 2^{-n-1},\]
so (iii) is fulfilled for \(n+1\).
Finally, for (iv), by (14)-(15), (17) and (22), for \(\varepsilon\in(0,1)\), \(\gamma\in(0,1]\), \(K\geqslant 2\), and \(C\geqslant 1\), we then have
\[\|H_{n+1}\|_{A} \leqslant \|H_{n}\|_{A}^{2}\|\overline{F_{n}}\|_{A}\|\overline{G_{n}}\|_{A }\left\|\frac{1}{|F_{n}|^{2}+|G_{n}|^{2}}\right\|_{A}^{2}\] \[= \|H_{n}\|_{A}^{2}\|F_{n}\|_{A}\|G_{n}\|_{A}\left\|\frac{1}{|F_{n}| ^{2}+|G_{n}|^{2}}\right\|_{A}^{2}\] \[\leqslant \left(\varepsilon\cdot 2^{-n}\cdot\frac{\gamma}{CK^{3}T^{2}} \right)^{2}\cdot K^{2}\cdot\widehat{T}^{2}\] \[\leqslant \varepsilon\cdot 2^{-n}\cdot\frac{\gamma}{CK^{3}T^{2}}\cdot\frac{ \gamma}{CK^{3}T^{2}}\cdot K^{2}\cdot\widehat{T}^{2}\] \[\leqslant \varepsilon\cdot 2^{-n}\cdot\frac{\gamma}{CK^{3}T^{2}}\cdot\frac{1}{K}\] \[\leqslant \varepsilon\cdot 2^{-n-1}\cdot\frac{\gamma}{CK^{3}T^{2}},\]
which verifies (iv) for \(n+1\).
It follows from (6) and (iv) that
\[\lim_{n\to\infty}|(i(H_{n}))(x)|\leqslant C\lim_{n\to\infty}\|H_{n}\|_{A} \leqslant\varepsilon\cdot\frac{\gamma}{K^{3}T^{2}}\lim_{n\to\infty}2^{-n}=0 \quad(x\in X). \tag{26}\]
Suppose that \(m,n\in\mathbb{N}\), \(m>n\). For \(\varepsilon\in(0,1)\), \(\gamma\in(0,1]\), \(K\geqslant 2\), \(C\geqslant 1\), and \(T\geqslant 1\), by (12), (15), (17), (22), we observe that
\[\begin{split}\sum_{n=0}^{\infty}\|F_{n+1}-F_{n}\|_{A}& \leqslant\sum_{n=0}^{\infty}\|H_{n}\|_{A}\|G_{n}\|_{A}\left\|\frac{1}{|F_{n}| ^{2}+|G_{n}|^{2}}\right\|_{A}\\ &\leqslant\sum_{n=0}^{\infty}\frac{1}{2^{n}}\cdot\frac{ \varepsilon\gamma}{CK^{3}T^{2}}\cdot K\widehat{T}\\ &\leqslant\varepsilon\cdot\frac{1}{K^{2}}\sum_{n=0}^{\infty}2^{-n }\\ &<\varepsilon\cdot\frac{1}{2}\cdot\sum_{n=0}^{\infty}\frac{1}{2^{n }}<\varepsilon.\end{split} \tag{27}\]
_Case 1._ From (27) for any \(\varepsilon_{1}>0\) there exist \(N\) such that for \(m,n\in\mathbb{N},m>n>N\) holds
\[\|F_{m}-F_{n}\|_{\infty}\leqslant\sum_{j=n}^{m-1}\|F_{j+1}-F_{j}\|_{\infty}< \varepsilon_{1}, \tag{28}\]
which means that the sequence \((F_{n})_{n=1}^{\infty}\) is uniformly Cauchy, so it converges uniformly to some continuous function \(f\). Similarly, there exists a continuous function \(g\) that is the limit of the uniformly convergent sequence \((G_{n})_{n=1}^{\infty}\).
In particular we obtain
\[\lim_{n\to\infty}F_{n}(x)=f(x)\quad\text{ and }\quad\lim_{n\to\infty}G_{n}(x)=g(x). \tag{29}\]
Using (29), (i) and (26), we see that
\[\begin{array}{rcl}f(x)\cdot g(x)&=&\lim_{n\to\infty}\big{(}F_{n}(x)\cdot G_{ n}(x)\big{)}\\ &=&\lim_{n\to\infty}\big{(}F_{n}(x)\cdot G_{n}(x)+H_{n}(x)\big{)}\\ &=&F(x)\cdot G(x)+H(x).\end{array} \tag{30}\]
Moreover, from (27) we have
\[\|f-F\|_{\infty}\leqslant\sum_{n=0}^{\infty}\|F_{n+1}-F_{n}\|_{\infty}<\varepsilon. \tag{31}\]
We show that \(\|g-G\|_{\infty}<\varepsilon\) in the same way.
_Case 2._\(A\) is a dual Banach algebra with \(A=E^{*}\) that shares with \(X\) densely many points as witnessed by some dense set \(Q\subset X\).
In view of (ii), the sequences \((F_{n})_{n=0}^{\infty}\) and \((G_{n})_{n=0}^{\infty}\) are uniformly bounded by constant \(K\). Let \(\mathcal{U}\) be a non-principal ultrafilter on \(\mathbb{N}\). By the Banach-Alaoglu theorem, \((F_{n})_{n=0}^{\infty}\) and \((G_{n})_{n=0}^{\infty}\) converge to some elements \(f,g\in A\), \(\|f\|,\|g\|\leqslant K\) with respect to \(\sigma(A,E)\) along \(\mathcal{U}\). Using (i) and (26), we see that for \(x\in Q\)
\[\begin{array}{rcl}(i(f))(x)\cdot(i(g))(x)&=&\langle\delta_{x},i(fg)\rangle \\ &=&\langle i^{*}(\delta_{x}),fg\rangle\\ &=&\lim_{n\to\mathcal{U}}\langle i^{*}(\delta_{x}),F_{n}G_{n}\rangle\\ &=&\lim_{n\to\mathcal{U}}\langle\delta_{x},i(F_{n}G_{n})\rangle\\ &=&\lim_{n\to\mathcal{U}}\big{(}(i(F_{n}))(x)\cdot(i(G_{n}))(x)\big{)}\\ &=&\lim_{n\to\mathcal{U}}\big{(}(i(F_{n}))(x)\cdot(i(G_{n}))(x)+(i(H_{n}))(x) \big{)}\end{array} \tag{32}\]
nonetheless, it follows from (i) that
\[\|i\big{(}F_{n}G_{n}+H_{n}-(FG+H)\big{)}\|_{\infty}\leqslant C\cdot\|F_{n}G_ {n}+H_{n}-(FG+H)\|_{A}=0,\]
hence for any \(x\in Q\) we have
\[i\big{(}f(x)g(x)\big{)}=i\big{(}F(x)G(x)+H(x)\big{)}. \tag{33}\]
Since \(Q\) is dense subset of \(X\), by continuity of \(i\) and elements belonging to \(C(X)\), there are equal everywhere. This means that
\[fg=FG+H, \tag{34}\]
because \(i\) is injective. Similarly, for \(x\in Q\)
\[\begin{split}(i(f))(x)-(i(F))(x)&=\langle\delta_{x},i (f-F)\rangle\\ &=\langle i^{*}(\delta_{x}),f-F\rangle\\ &=\lim_{n\to\mathcal{U}}\langle i^{*}(\delta_{x}),F_{n}-F\rangle \\ &=\lim_{n\to\mathcal{U}}\langle\delta_{x},i(F_{n}-F)\rangle\\ &=\lim_{n\to\mathcal{U}}\big{(}(i(F_{n}))(x)-(i(F))(x)\big{)}\\ &=\lim_{n\to\mathcal{U}}\sum_{j=0}^{n}\big{(}(i(F_{j+1}))(x)-(i( F_{j}(x)\big{)}\end{split} \tag{35}\]
but from (27) we know that
\[\sum_{n=0}^{\infty}\big{\|}i(F_{n+1})-i(F_{n})\big{\|}_{\infty}\leqslant C \cdot\sum_{n=0}^{\infty}\|F_{n+1}-F_{n}\|_{A},\]
so for any \(x\in Q\)
\[(i(f))(x)-(i(F))(x)=\sum_{n=0}^{\infty}\big{(}(i(F_{n+1}))(x)-(i(F_{n}))(x) \big{)},\]
hence, again by density of \(Q\) in \(X\), continuity of \(i\) and elements belonging to \(C(X)\), we have this equality everywhere. Moreover, since \(i\) is injective, we obtain
\[f-F=\sum_{n=0}^{\infty}\big{(}F_{n+1}-F_{n}\big{)},\]
so from (27) we have
\[\|f-F\|_{A}\leqslant\sum_{n=0}^{\infty}\|F_{n+1}-F_{n}\|_{A}<\varepsilon. \tag{36}\]
We show that \(\|g-G\|_{A}<\varepsilon\) in the same way.
In each of the above cases, we have obtained the appropriate functions \(f\) and \(g\), which, to simplify the notation, have been marked with the same symbols. So, for every \(H\in A\) satisfying (10), there exist \(f\) and \(g\) in \(A\) such that
\[\|f-F\|_{A}<\varepsilon,\quad\|g-G\|_{A}<\varepsilon\]
(see respectively (31) or (36)) and \(FG+H=fg\) (see respectively (30) or (34)). This means that
\[B_{A}(F\cdot G,\delta)\subset B_{A}(F,\varepsilon)\cdot B_{A}(G,\varepsilon)\]
with \(\delta:=\varepsilon\cdot\frac{\gamma}{CK^{3}T^{2}}\). Hence, the multiplication in \(A\) is locally open at the pair \((F,G)\in A^{2}\).
Suppose now that \(i\) has dense range in \(C(X)\). By inverse-closedness of \(A\), \(A\) has topological stable rank \(1\) if and only if \(C(X)\) has topological stable rank \(1\). Consequently, if \(C(X)\)
fails to have dense invertibles (which happens exactly when \(\dim X>1\)), then \(A\) does not have open multiplication.
Applying Lemma 1.1 we obtain the following conclusion that proves Theorem A.
**Lemma 3.2**.: _Suppose that \(A\) is a unital symmetric Banach *-algebra such that there exists an injective *-homomorphism \(i\colon A\to C(X)\) for some compact space \(X\) such that \(A\) is a differential subalgebra of \(C(X)\). Let us consider either case:_
* \(A=C(X)\)_,_
* \(A=E^{*}\) _is a dual Banach algebra that shares with_ \(X\) _densely many points._
_Then multiplication in \(A\) is open at all pairs of jointly non-degenerate elements._
**Corollary 3.3**.: _Let \(A\) be a (complex) reflexive Banach space with a \(K\)-unconditional basis \((e_{\gamma})_{\gamma\in\Gamma}\)\((K\geqslant 1)\). Then \(A\) is naturally a Banach *- algebra when endowed with multiplication_
\[a\cdot b=\sum_{\gamma\in\Gamma}a_{\gamma}b_{\gamma}e_{\gamma}\quad(a=\sum_{ \gamma\in\Gamma}a_{\gamma}e_{\gamma},b=\sum_{\gamma\in\Gamma}b_{\gamma}e_{ \gamma}\in A).\]
_and coordinate-wise complex conjugation. Let \(A^{\#}\) denote the unitisation of \(A\). Then \(A^{\#}\) has open multiplication._
Proof.: It is clear any pair of elements of \(A^{\#}\) is jointly non-degenerate. Since the basis \((e_{\gamma})_{\gamma\in\Gamma}\) is \(K\)-unconditional, we have
\[\|ab\|_{A} =\left\|\sum_{\gamma\in\Gamma}a_{\gamma}b_{\gamma}e_{\gamma} \right\|_{A}\] \[\leqslant K\bigg{\|}\sum_{\gamma\in\Gamma}a_{\gamma}\cdot\|b\|_{ \ell_{\infty}(\Gamma)}\cdot e_{\gamma}\bigg{\|}_{A}\] \[=K\|a\|_{A}\|b\|_{\ell_{\infty}(\Gamma)}\] \[\leqslant K(\|a\|_{A}\|b\|_{\ell_{\infty}(\Gamma)}+\|a\|_{\ell_{ \infty}(\Gamma)}\|b\|_{A}).\]
This means that \(A^{\#}\) is a differential subalgebra of \(c(\Gamma)\), the unitisation of the algebra of functions that vanish at infinity on \(\Gamma\). Since the formal inclusion from \(A^{\#}\) to \(c(\Gamma)\) has dense range, the conclusion follows.
Since the bidual of \(C(X)\) is isometric to \(C(Z)\) for some compact, zero-dimensional space (in particular, \(C(X)^{**}\) has uniformly open multiplication), using Lemmas 2.1 and 2.2 we may record the following corollary.
**Corollary 3.4**.: _Suppose that \(A\) is an Arens-regular Banach *-algebra that is densely embedded as a differential subalgebra of \(C(X)\) for some compact space \(X\). Then \(A^{**}\) has open multiplication at all pairs of jointly non-degenerate elements._
We now turn our attention to Theorem B.
Proof of Theorem B.: By Lemma 2.5, there exists an ultrafilter \(\mathcal{U}\) such that \(\mathbb{Z}^{(\mathbb{R})}\) embeds into \(G^{\mathbb{U}}\). As \(\mathbb{Z}^{(\mathbb{R})}\) is a free Abelian group, it admits a surjective homomorphism \(\varphi\) onto \(\mathbb{Q}^{(\mathbb{N})}\). Since \(\mathbb{Q}^{(\mathbb{N})}\) is divisible, it is an injective object in the category of Abelian groups, so \(\varphi\) extends to a homomorphism \(\overline{\varphi}\colon G^{\mathbb{U}}\to\mathbb{Q}^{(\mathbb{N})}\). In particular, the infinite-dimensional space \(\widehat{\mathbb{Q}^{(\mathbb{N})}}\cong\mathbb{T}^{\mathbb{N}}\) embeds topologically into \(\widehat{G^{\mathbb{U}}}\).
Consequently, \(\dim\widehat{G^{\mathbb{U}}}=\infty>1\). By [16, Corollary 4.10], multiplication in \(\ell_{1}(G^{\mathbb{U}})\) is not open. However, \(\ell_{1}(G^{\mathbb{U}})\) is a quotient of the Banach-algebra ultrapower \((\ell_{1}(G))^{\mathbb{U}}\) ([16, Section 2.3.2]), so by [16, Corollary 3.3], convolution in \(\ell_{1}(G)\) is not uniformly open.
## 4. Proof of Theorem C
The present section is devoted to the proof of Theorem C. We start by proving a special case of \(X=[0,1]\); the argument is a slightly improved version of a proof due to Behrends. We are indebted for his permission to include it here.
**Theorem 4.1**.: _The (complex) algebra \(C[0,1]\) has uniformly open multiplication._
In order to prove Theorem 4.1 we require a number of auxiliary results.
Anywhere below \(\Delta\) will denote a set of all \((\alpha,\beta,\gamma)\in\mathbb{C}^{3}\) such that \(|\gamma|=1\) and the polynomial \(\gamma z^{2}+\beta z+\alpha\) has two roots of different absolute value. In particular, in this situation, there is a uniquely determined root, so we can introduce the following definition
**Definition 4.2**.: We denote by \(Z\colon\Delta\to\mathbb{C}\) the map that assigns to \((\alpha,\beta,\gamma)\) the root of the quadratic polynomial \(\gamma z^{2}+\beta z+\alpha\) with the smaller absolute value.
_Remark 2_.: The root function is locally analytic, so the function \(Z\) is continuous.
Let us now fix a non-degenerate interval \([a_{0},b_{0}]\).
**Lemma 4.3**.: _Let \(f,g\in C[a_{0},b_{0}]\). If \(|f|\geqslant\eta\) for some \(\eta>0\) and \(|g|=1\) then for every \(\varepsilon>0\) there is \(\delta>0\) such that if \(d\in C[a_{0},b_{0}]\) and \(\|d\|\leqslant\delta\) there is \(\phi\in C[a_{0},b_{0}]\) with \(\|\phi\|\leqslant\varepsilon\) and_
\[f(t)\phi(t)+g(t)\phi^{2}(t)=d(t)\quad(t\in[a_{0},b_{0}]).\]
Proof.: Fix \((\alpha,\beta,\gamma)\in\mathbb{C}^{3}\) satisfying \(|\beta|\geqslant\eta\), \(|\gamma|=1\) and arbitrary, strictly positive \(\eta,\varepsilon\). It is enough to find \(\delta>0\) such that if \(|\alpha|\leqslant\delta\) then \((\alpha,\beta,\gamma)\in\Delta\) and \(|Z(\alpha,\beta,\gamma)|\leqslant\varepsilon\). Indeed, by Remark 2, this allows us to define function \(\phi\) as \(\phi(t):=Z(-d(t),f(t),g(t))\) for \(t\in[a_{0},b_{0}]\).
Denote by \(z_{1},z_{2}\) the roots of the polynomial \(\gamma z^{2}+\beta z+\alpha\). By Vieta's formulas
\[\gamma\left(z_{1}+z_{2}\right)=-\beta,\]
hence either \(|z_{1}|\geqslant\eta/2\) or \(|z_{2}|\geqslant\eta/2\). Without loss of generality we may assume that \(|z_{1}|\geqslant\eta/2\).
Again, by Vieta's formulae,
\[\gamma z_{1}z_{2}=\alpha,\]
so that \(z_{2}=\alpha/(\gamma z_{1})\), hence \(|z_{2}|\leqslant 2|\alpha|/\eta\). Thus, it suffices to choose \(|\alpha|\leqslant\delta\) where \(\delta>0\) satisfies \(2\delta/\eta\leqslant\varepsilon\) and \(2\delta/\eta<\eta/2\). Then \(|z_{2}|<|z_{1}|\) and \(|z_{2}|\leqslant\varepsilon\), so conclusion follows by the definition of \(Z\)
**Lemma 4.4**.: _For nonzero complex numbers \(z\) and \(w\) define \(c:=i\overline{w}z/|\overline{w}z|\). Then \(|c|=1\) and \(|z+cw|^{2}=|z|^{2}+|w|^{2}\)._
Proof.: The proof is trivial.
We denote by \(\overline{I}\) (respectively \(I^{o}\)) the closure (respectively, the interior) of an, interval.
**Lemma 4.5**.: _Suppose that \(I_{1},\ldots,I_{k}\) are open subintervals of \([a_{0},b_{0}]\) with pairwise disjoint closures. Then for any continous function \(\phi\colon\,[a_{0},b_{0}]\setminus\bigcup_{j}I_{j}\to\mathbb{T}\) there exists a continuous extension \(\psi\colon\,[a_{0},b_{0}]\to\mathbb{T}\)._
Proof.: Since we know the values of \(\psi\) at the endpoints of the intervals \(\overline{I_{j}}\) for \(j=1,2,\ldots,k\), we may connect them with any path in \(\mathbb{T}\) to define \(\psi\).
We observe that is has been crucial to worth with complex numbers in the proof of Lemma 4.5 as there is no analogue of this lemma in the real case.
**Lemma 4.6**.: _For any function \(h\in C[a_{0},b_{0}]\) and arbitrary \(\eta_{2}>\eta_{1}>0\) there are pairwise disjoint closed subintervals \(J_{1},\ldots,J_{k}\) of \([a_{0},b_{0}]\) such that_
\[\{t\in[a_{0},b_{0}]\colon\,|h(t)|\leqslant\eta_{1}\}\subset\bigcup_{j=1}^{k}J _{j}\subset\{t\in[a_{0},b_{0}]\colon\,|h(t)|<\eta_{2}\}\,.\]
Proof.: Define set \(K:=\{t\colon\,|h(t)|\leqslant\eta_{1}\}\) and open set \(O:=\{t\colon\,|h(t)|<\eta_{2}\}\,.\) Since \(K\subset O\) we may find for any \(t\in K\) an open subinterval \(I_{t}\) in such a way that \(t\in I_{t}\subset\overline{I_{t}}\subset O\). Moreover, since \(K\) is compact, it is possible to cover \(K\) with finitely many of them. Hence their closures are desired intervals \(J_{j}\) (it might be necessary to pass to unions if they are not disjoint).
**Lemma 4.7**.: _Let \(h_{1},h_{2}\in C\,[a_{0},b_{0}]\). Suppose that \(h_{1}\) and \(h_{2}\) are jointly \(\eta^{2}\)-non-degenerate for some \(\eta>0\). Then there are continuous \(\beta_{1},\beta_{2}\colon\,[a_{0},b_{0}]\to\mathbb{T}\) such that_
\[|h_{1}(t)\beta_{1}(t)+h_{2}(t)\beta_{2}(t)|\geqslant\eta\quad(t\in[a_{0},b_{0 }]).\]
Proof.: By compactness, we may find \(\eta_{0}>0\) such that that \(h_{1}\) and \(h_{2}\) are jointly \((\eta^{2}+\eta_{0}^{2})\)-non-degenerate; we also will assume that \(2\eta_{0}^{2}<\eta^{2}\). Next we choose, with a \(\tau\in[0,1]\) that will be fixed later, pairwise disjoint closed intervals \(J_{1},\ldots,J_{k}\) and pairwise disjoint closed intervals \(J_{k+1},\ldots,J_{l}\) such that
\[\Big{\{}t\colon\,|h_{1}(t)|\leqslant\tau\cdot\frac{\eta_{0}}{2}\Big{\}}\subset \bigcup_{j=1}^{k}J_{j}\subset\{t\colon\,|h_{1}(t)|<\tau\cdot\eta_{0}\}\]
and
\[\Big{\{}t\colon\,|h_{2}(t)|\leqslant\tau\cdot\frac{\eta_{0}}{2}\Big{\}}\subset \bigcup_{j=k+1}^{l}J_{j}\subset\{t\colon\,|h_{2}(t)|<\tau\cdot\eta_{0}\}\]
(see Lemma 4.6). As a consequence of \(2\eta_{0}^{2}<\eta^{2}\) no \(J_{j}\) with \(j\leqslant k\) intersects \(J_{j^{\prime}}\) with \(j^{\prime}>k\) : the family \(\left(J_{j}\right)_{j=1}^{l}\) comprises disjoint intervals.
We now define \(\beta_{1}\) and \(\beta_{2}\). The function \(\beta_{1}\) is the function constantly equal one, and \(\beta_{2}\) is constructed as follows. On \(\left[a_{0},b_{0}\right]\backslash\bigcup_{j=1}^{l}J_{j}^{o}\) we put
\[\beta_{2}(t):=i\frac{h_{1}(t)\overline{h_{2}(t)}}{\left|h_{1}(t)\overline{h_{ 2}(t)}\right|}.\]
The values are in \(\mathbb{T}\) so that, by Lemma 4.5, we may find a \(\mathbb{T}\)-valued continuous extension to all of \(\left[a_{0},b_{0}\right]\) that will be also denoted by \(\beta_{2}\).
We _claim_ that \(\beta_{1}\) and \(\beta_{2}\) have the desired properties. By construction both functions are continuous and they satisfy \(\left|\beta_{1}(t)\right|=\left|\beta_{2}(t)\right|=1\) for all \(t\). For \(t\in\left[a_{0},b_{0}\right]\backslash\bigcup_{j=1}^{l}J_{j}^{o}\) Lemma 4.4 implies that
\[\left|h_{1}(t)\beta_{1}(t)+h_{2}(t)\beta_{2}(t)\right|^{2}=\left|h_{1}(t) \right|^{2}+\left|h_{2}(t)\right|^{2}>\eta^{2}.\]
Now let a \(t\) in one of the \(J_{j}\) with \(j\leqslant k\) be given. Then \(\left|h_{1}(t)\right|^{2}<\tau^{2}\eta_{0}^{2}\) so that \(\left|h_{2}(t)\right|^{2}\geqslant\eta^{2}+\left(1-\tau^{2}\right)\eta_{0}^{2}\), and it follows that \(\left|h_{2}(t)\right|\geqslant\sqrt{\eta^{2}+\left(1-\tau^{2}\right)\eta_{0}^ {2}}\). We choose \(\tau\) so small so that \(\sqrt{\eta^{2}+\left(1-\tau^{2}\right)\eta_{0}^{2}}-\tau\eta_{0}\geqslant\eta\). We may then continue our estimation as follows:
\[\left|h_{1}(t)\beta_{1}(t)+h_{2}(t)\beta_{2}(t)\right| \geqslant\left|h_{2}(t)\right|-\left|h_{1}(t)\right|\] \[\geqslant\sqrt{\eta^{2}+\left(1-\tau^{2}\right)\eta_{0}^{2}}-\tau \eta_{0}\] \[\geqslant\eta.\]
The argument for \(\bigcup_{j=k+1}^{l}J_{j}\) is analogous.
**Lemma 4.8**.: _Let \(h_{1},h_{2}\in C\left[a_{0},b_{0}\right]\). Suppose that \(h_{1},h_{2}\) are jointly non-degenerate. Then for every \(\varepsilon>0\) there is a positive \(\delta\) such that for every \(d\in C\left[a_{0},b_{0}\right]\) satisfying \(\left\|d\right\|\leqslant\delta\) there are \(z_{1},z_{2}\in C\left[a_{0},b_{0}\right]\) such that_
* \(\left|z_{1}(t)\right|,\left|z_{2}(t)\right|\leqslant\varepsilon\) _and_
* \(h_{1}(t)z_{1}(t)+h_{2}(t)z_{2}(t)+z_{1}(t)z_{2}(t)=d(t)\)__\((t\in[a_{0},b_{0}])\)_._
Proof.: Choose \(\beta_{1},\beta_{2}\) as in the preceding lemma and put \(f:=h_{1}\beta_{1}+h_{2}\beta_{2}\) and \(g:=h_{1}h_{2}\). Choose \(\delta>0\) according to Lemma 4.3 for \(\beta_{1}\) and \(\beta_{2}\): for every \(d\in C[a_{0},b_{0}]\) with \(\left\|d\right\|\leqslant\delta\) we may find \(\phi\) with \(\left\|\phi\right\|\leqslant\varepsilon\) and \(f\phi+g\phi^{2}=d\). Thus it suffices to set \(z_{1}:=\beta_{1}\phi\) and \(z_{2}:=\beta_{2}\phi\).
**Lemma 4.9**.: _Let \(\varepsilon>0\) and \(\psi\in C[a,b]\) with \(\left\|\psi\right\|\leqslant\varepsilon^{2}\). Suppose there are \(Z_{a},W_{a},\hat{Z}\in\mathbb{C}\) such that \(Z_{a}W_{a}=\psi(a)\), \(\left|Z_{a}\right|,\left|W_{a}\right|\leqslant\varepsilon\) and \(\hat{Z}^{2}=\psi(b)\). Then may construct \(Z_{1},Z_{2}\in C[a,b]\) with the following properties:_
* \(Z_{1}(a)=Z_{a},Z_{2}(a)=W_{a},\)__
* \(\left|Z_{1}(t)\right|,\left|Z_{2}(t)\right|\leqslant\varepsilon\) _and_ \(Z_{1}(t)Z_{2}(t)=\psi(t)\) _for all_ \(t,\)__
* \(Z_{1}(b)=Z_{2}(b)=\hat{Z}.\)__
_A similar statement holds if \(Z_{b},W_{b}\) are prescribed at \(b\) and \(\hat{Z}\) at \(a\)._
Proof.: Without loss of generality we may suppose that \(\left|Z_{a}\right|\geqslant\left|W_{a}\right|\) so that \(\left|Z_{a}\right|\geqslant\sqrt{\left|\psi(a)\right|}\). Choose \(Z_{1}\in[a,b]\) with \(Z_{1}(a)=Z_{a},Z_{1}(b)=\hat{Z}\) and \(\varepsilon>\left|Z_{1}(t)\right|\geqslant\sqrt{\left|\psi(t)\right|}\) for all \(t\); note that \(\left|\hat{Z}\right|\geqslant\sqrt{\left|\psi(b)\right|}\)1.
Footnote 1: Here again it is important that we work in \(\mathbb{C}\) and not in \(\mathbb{R}\).
We define
\[Z_{2}(t):=\begin{cases}0&\text{ if }Z_{1}(t)=0,\\ \psi(t)/Z_{1}(t)&\text{ otherwise.}\end{cases}\]
Then \(Z_{1}\) and \(Z_{2}\) will have the claimed properties. Indeed, the continuity of \(Z_{2}\) at points \(t_{0}\) with \(Z_{1}\left(t_{0}\right)=0\) is proved as follows. If \(Z_{1}\left(t_{0}\right)=0\) then \(\psi\left(t_{0}\right)=0\). Thus, by continuity of \(\psi\), if \(t_{n}\to t_{0}\), then \(\sqrt{\left|\psi\left(t_{n}\right)\right|}\to 0.\) Hence \(\left|Z_{2}\left(t_{n}\right)\right|=\left|\psi\left(t_{n}\right)/Z_{1}\left( t_{n}\right)\right|\leqslant\sqrt{\left|\psi\left(t_{n}\right)\right|}\) will tend to zero as well.
**Lemma 4.10**.: _Let \(\varepsilon>0\) and \(\psi\in C[a,b]\). Suppose that \(\inf_{t\in[a,b]}\left|\psi(t)\right|\leqslant\varepsilon^{2}\). If there are \(Z_{a},W_{a},Z_{b},W_{b}\in\mathbb{C}\) such that \(Z_{a}W_{a}=\psi(a),Z_{b}W_{b}=\psi(b)\) and \(\left|Z_{a}\right|,\left|W_{a}\right|,\left|Z_{b}\right|,\left|W_{b}\right|\leqslant\varepsilon\), then there are \(Z_{1},Z_{2}\in C[a,b]\) with the following properties:_
* \(Z_{1}(a)=Z_{a},Z_{2}(a)=W_{a},Z_{1}(b)=Z_{b},Z_{2}(b)=W_{b},\)__
* \(\left|Z_{1}(t)\right|,\left|Z_{2}(t)\right|\leqslant\varepsilon\) _and_ \(Z_{1}(t)Z_{2}(t)=\psi(t)\) _for all_ \(t.\)__
Proof.: Choose any \(b^{\prime}\) between \(a\) and \(b\) and a \(\hat{Z}\) with \(\hat{Z}^{2}=\psi\left(b^{\prime}\right)\). It remains to apply the preceding lemma to the intervals \([a,b^{\prime}]\) and \([b^{\prime},b]\) (with \(Z_{1}\left(b^{\prime}\right)=Z_{2}\left(b^{\prime}\right)=\hat{Z}\) in either case) and to glue the \(Z_{1},Z_{2}\) that are defined on these subintervals together.
Proof of Theorem 4.1.: Let \(\varepsilon_{0}>0\). We have to find \(\delta_{0}>0\) with the following property: whenever \(d\colon[0,1]\to\mathbb{C}\) is a prescribed continuous function with \(\left\|d\right\|\leqslant\delta_{0}\) it is possible to find \(d_{1},d_{2}\in C[0,1]\) with \(\left\|d_{1}\right\|,\left\|d_{2}\right\|\leqslant\varepsilon_{0}\) and \(\left(f+d_{1}\right)\left(g+d_{2}\right)=fg+d\) (_i.e._, \(fd_{2}+gd_{1}+d_{1}d_{2}=d\)) for any \(f,g\in C[0,1]\). Fix \(f,g\in C[0,1]\).
The idea is to determine such \(d_{1},d_{2}\) by using Lemma 4.8 (Lemma 4.10, respectively) on the subintervals where the functions \(f\) and \(g\) are are jointly non-degenerate (respectively, jointly degenerate) and to glue the pieces together.
With an \(\varepsilon_{1}>0\) that will be fixed later we apply Lemma 4.6 with \(h:=\left|f\right|^{2}+\left|g\right|^{2}\) and \(\eta_{1}:=\varepsilon_{1}^{2},\eta_{2}:=4\varepsilon_{1}^{2}\). Write the intervals \(J_{j}\) (\(j=1,\ldots,k\)) as \(J_{j}=[a_{j},b_{j}]\), where, without loss of generality, \(0\leqslant a_{1}<b_{1}<a_{2}<b_{2}<\cdots<a_{k}<b_{k}\). Note that \(h(t)\leqslant 4\varepsilon_{1}^{2}\) on each \([a_{j},b_{j}]\) and \(h(t)>\varepsilon_{1}^{2}\) on the intervals \([b_{j},a_{j+1}]\).
Let us consider the intervals \([b_{j},a_{j+1}]\) and apply Lemma 4.8 with \([a_{0},b_{0}]:=[b_{j},a_{j+1}]\), \(\eta:=\varepsilon_{1}\), and \(\varepsilon:=\varepsilon_{1}\). Choose \(\delta\) as in the lemma; without loss of generality we may assume that \(\delta\leqslant\varepsilon_{1}^{2}\). We consider any \(d\in C[0,1]\) with \(\left\|d\right\|\leqslant\delta\). Lemma 4.8 provides continuous \(z_{1},z_{2}\colon\left[b_{j},a_{j+1}\right]\to\mathbb{C}\) with \(f(t)z_{1}(t)+g(t)z_{2}(t)+z_{1}z_{2}=d(t)\) and \(\left|z_{1}(t)\right|,\left|z_{2}(t)\right|\leqslant\varepsilon_{1}\) for \(t\in[b_{j},a_{j+1}]\). We define \(d_{1}\) (\(d_{2}\), respectively ) on \([b_{j},a_{j+1}]\) by \(z_{2}\) (\(z_{1}\), respectively). Then \(\left(f+d_{1}\right)\left(g+d_{2}\right)=fg+d\) on these subintervals. (It should be noted here that the \(\delta\) in Lemma 4.8 does only depend on \(\eta\) and \(\varepsilon\) but not on \(a_{0},b_{0}\).)
Now \(d_{1},d_{2}\) are suitably defined on the union of the \([b_{j},a_{j+1}]\). The gaps will be filled with the help of Lemma 4.10. Consider any \([a_{j},b_{j}]\). For a \(t\) in such an interval
we know that \(|f(t)|,|g(t)|\leqslant 2\varepsilon_{1}\) so that \(|f(t)g(t)|\leqslant 4\varepsilon_{1}^{2}\).
It follows that \(\psi\colon\left[a_{j},b_{j}\right]\to\mathbb{C},t\mapsto f(t)g(t)+d(t)\) satisfies \(|\psi(t)|\leqslant 5\varepsilon_{1}^{2}\leqslant(5\varepsilon_{1})^{2}\). We apply Lemma 4.10 with this function \(\psi\) and
\[Z_{a}:=\left(f+d_{1}\right)\left(a_{j}\right),W_{a}:=\left(g+d_{2}\right) \left(a_{j}\right),Z_{b}:=\left(f+d_{1}\right)\left(b_{j}\right),W_{b}:=\left( g+d_{2}\right)\left(b_{j}\right)\]
and \(\varepsilon:=5\varepsilon_{1}\). It remains to use the functions \(Z_{1},Z_{2}\) found by the lemma to define \(d_{1},d_{2}\) on \([a_{j},b_{j}]\). Here \(Z_{1}\) (respectively \(Z_{2}\)) plays the role of \(f+d_{1}\) (\(g+d_{2}\)) so that we may set \(d_{1}(t):=Z_{1}(t)-f(t)\) and \(d_{2}(t):=Z_{2}(t)-g(t)\) for \(t\in[a_{j},b_{j}]\). At the endpoints this assignment is compatible with the previous one: at \(a_{j}\), e.g., \(d_{1}\) was already defined, but as a consequence of \(Z_{1}(a)=Z_{a}=f\left(a_{j}\right)+d\left(a_{j}\right)\) the new definition of \(d_{1}\left(a_{j}\right)\) as \(\left(Z_{1}-f\right)\left(a_{j}\right)\) leads to the same value.
We observe that \(|d_{j}(t)|\leqslant(2+5)\varepsilon_{1}=7\varepsilon_{1}\) for \(j=1,2\) so that we may summarise the above calculations as follows: if one starts with \(\varepsilon_{1}:=\varepsilon_{0}/7\), then \(\delta_{0}:=\delta\) with the \(\delta\) that we have just found has the desired properties.
It should be noted that our proof is not yet complete since, when considering the \([a_{j},b_{j}]\), our argument used the fact that the functions \(d_{1},d_{2}\) were already defined at \(a_{j}\) and \(b_{j}\), so we are to consider the cases \(a_{1}=0\) or \(b_{k}=1\). If, _e.g._, \(a_{1}=0\) we choose any \(Z_{a},W_{a}\) with \(\left|Z_{a}\right|,\left|W_{a}\right|\leqslant\varepsilon\) and \(Z_{a}W_{a}=\psi(a)\); we proceed similarly for \(b_{k}=1\).
### Uniform openness of multiplication in \(C(x)\)
The next result is crucial for establishing the only non-trivial implication in Theorem C.
**Theorem 4.11**.: _Let \(X\) be a compact space of covering dimension at most \(1\). Then multiplication in \(C(X)\) is uniformly open._
Proof.: _Case 1:_\(X\) is a topological realisation of a graph in the complex plane.
We _claim_ that \(C(X)\) has uniformly open multiplication and \(\delta(\varepsilon)\) does not depend on \(X\) in the class of such graphs, that is, multiplications in \(C(X)\) are equi-uniformly open for all graphs \(X\).
For this, let us consider a partition of \(X\) into finitely many intervals, \(\bigcup_{j=1}^{k}[a_{j},b_{j}]\). We define a finer partition of this graph into intervals as follows. If the intervals \([a_{j},b_{j}]\) and \([a_{i},b_{i}]\) intersect at \(c\) for some \(j,i\in\{1,\ldots k\}\), then \(c\) must be the endpoint of the intervals, _i.e._, we replace the interval \([a_{j},b_{j}]\) by sub-intervals \([a_{j},c]\) and \([c,b_{j}]\) whenever \(c\in(a_{j},b_{j})\) (analogously for the interval \([a_{i},b_{i}]\)). For each interval in the new partition \(P=\bigcup_{j=1}^{K}[a_{j},b_{j}]\) we apply analogous procedure as in the proof of Theorem 4.1.
More precisely, denote for any function \(F\colon P\to\mathbb{C}\) its restriction to the interval \([a_{j},b_{j}]\) by \(F^{j}.\) Then for \(\varepsilon_{0}>0\) find a positive \(\delta_{0}\) with the following property: whenever \(d\in C(P)\), \(\left\|d\right\|\leqslant\delta_{0}\) for every restriction \(d^{j}\in C[a_{j},b_{j}]\) (\(j\in\{1,\ldots,K\}\)) we may \(d_{1}^{j},d_{2}^{j}\in C[a_{j},b_{j}]\) with \(\left\|d_{1}^{j}\right\|,\left\|d_{2}^{j}\right\|\leqslant\varepsilon_{0}\) and \(\left(f^{j}+d_{1}^{j}\right)\left(g^{j}+d_{2}^{j}\right)=f^{j}g^{j}+d^{j}\) for any \(f,g\colon P\to\mathbb{C}.\) We glue the functions \(d_{1}^{j}\) for all \(j\in\{1,\ldots,K\}\) to obtain function \(d_{1}\colon P\to\mathbb{C}\) (analogously for function \(d_{2}\)). Note that due to the choice of partition P, these functions are uniquely defined at
the endpoints of the intervals, because at the intersection points of the intervals we always take the same value of the function. It should be noted also (again) that the \(\delta\) in Lemma 4.8 does only depend on \(\eta\) and \(\varepsilon\) and not on \(a_{0},b_{0}.\)
_Case 2:_\(X\) is a compact metric space of covering dimension at most 1.
It is known that for a zero-dimensional (not necessarily metrisable) compact space \(X\), \(C(X)\) has uniformly open multiplication with \(\delta(\varepsilon)=\varepsilon^{2}/4\) ([16, Proposition 4.6]). In the light of Case 1, by taking minimum if necessary, we may suppose that \(\delta(\varepsilon)\) is the same for all zero-dimensional spaces as well as all graphs in the plane. However, every one-dimensional compact metric space \(X\) is the projective limit of an inverse sequence \((K_{i},\pi_{i}^{j})\) of at most one-dimensional 'polyhedra' (this is a theorem of Freudenthal [19]; see [18, Theorem 1.13.2] for modern exposition), _i.e._, finite sets and graphs in the plane. Such an inverse sequence gives rise to a direct system \((C(K_{i}),h_{\pi_{i}^{j}}),\) where \(h_{\pi_{i}^{j}}\) is a *-homomorphic embedding of \(C(K_{i})\) into \(C(K_{j})\) (\(i\leqslant j\)) given by
\[h_{\pi_{i}^{j}}f=f\circ\pi_{i}^{j}\quad(f\in C(K_{i})).\]
As \(C(X)\) is naturally *-isomorphic to the completion of the chain \((C(K_{i}),h_{\pi_{i}^{j}})\) (_i.e._, the C*-direct limit; see [32, Section 1] for more details) in which multiplications are equi-uniformly open, by [16, Corollary 3.6], \(C(X)\) has uniformly open multiplication and \(\delta(\varepsilon)\) depends only on \(\varepsilon\) but not the compact metric space \(X\) considered.
_Case 3_: \(X\) is an arbitrary compact space of covering dimension at most 1.
By [28, Theorem 1] every compact space \(X\) is an inverse limit of a well-ordered system of metrisable compacta \(X_{\alpha}\) with \(\dim X_{\alpha}\leqslant\dim X.\) As proved in Claim 2, \(C(X_{\alpha})\) have equi-uniformly open multiplications, meaning that \(\delta(\varepsilon)\) is the same for all items of the inverse system considered, so multiplication in \(C(X)\) is uniformly open ([16, Corollary 3.6]).
## 5. Open problems
In the light of Theorem A let us pose the following question.
_Question 1_.: What are further examples of (dual) Banach algebras that are approximable by jointly non-degenerate elements? What about algebras of Lipschitz functions on zero-dimensional compact spaces?
As the case of convolution algebras on discrete groups having at most one-dimensional dual groups, we ask the following question.
_Question 2_.: Can the group algebra of a group with bounded exponent have (uniformly) open convolution?
More generally:
_Question 3_.: Is there an infinite group \(G\) for which \(\ell_{1}(G)\) has open convolution? |
2303.06717 | On the Number of Distinct Tilings of Finite Subsets of $\mathbb{Z}^{d}$
With Tiles of Fixed Size | In this work, we study the number of finite tiles $A\subset\mathbb{Z}^{d}$ of
size $\alpha$ that translationally tile a finite $C\subset\mathbb{Z}^{d}$. We
consider two tiles $A$ and $A'$ to be congruent if and only if one can be
transformed into the other via some translation. We make several significant
contributions to the study of this problem. For any $\alpha\in\mathbb{Z}^{+}$
and $C=[x_{1}]\times[x_{2}]\times\ldots [x_{d}]$ where
$x_{1},\ldots,x_{d}\in\mathbb{Z}^{+}$ (which we refer to as a finite contiguous
$C$), we give an efficient method for enumerating all elements of
$\mathcal{T}(\alpha,C)$, where $(A,B)\in \mathcal{T}(\alpha,C)$ if and only if
$A,B\subset\mathbb{Z}^{d}$, the Minkowsji sum of $A$ and $B$ equals $C$, the
size of $A$ equals $\alpha$, and $|C|=\alpha|B|$. We then use this to prove a
partial order on $|\mathcal{T}(\alpha,C)|$ with respect to $\alpha$ for any
finite contiguous $C$.
We then study the extremal question as to the the growth rate of
$\text{max}_{\alpha,C}[|\mathcal{T}(\alpha,C)|]$ with respect to $|C|$. For
finite contiguous $C$, we improve the trivial lower and upper bounds of $\log
n$ and ${n\choose n/2}$ respectively to an upper bound of
\[n^{\frac{(1+\epsilon)\log n}{\log\log n}}\] and an infinitely often
super-polynomial lower bound such that, for all constants $c$ and some infinite
$N\subset\mathbb{Z}^{+}$, \[\forall n\in N,
\exists\alpha\in\mathbb{Z}^{+}(|\mathcal{T}(\alpha,C)|>n^{c}),\] where $n=|C|$.
We conjecture that the number of tilings of any finite contiguous $C$ by
tiles of size $\alpha$ is an upper bound on the number of tilings of any finite
$C'\subset \mathbb{Z}^{d}$ by tiles of size $\alpha$. To begin working towards
this, we prove that any $A$ of size $\alpha$ that tiles some finite contiguous
$C$ itself has at most as many tilings by tiles of size $\alpha'$ as there are
tilings of $[\alpha]$ by tiles of size $\alpha'$. | Jesse Stern | 2023-03-12T17:47:21Z | http://arxiv.org/abs/2303.06717v2 | # On The Number of Distinct Tilings of Finite Subsets of \(\mathbb{Z}^{d}\)
###### Abstract
In this work, we study the number of non-congruent finite tiles \(A\subset\mathbb{Z}^{d}\) of size \(\alpha\) that can translationally tile \(C\subset\mathbb{Z}^{d}\). Under these restrictions the tile \(A\) can be translated any number of times to cover exactly \(C\), but cannot be rotated or reflected. Further we consider two tiles \(A\) and \(A^{\prime}\) to be congruent (i.e. not distinct) if and only if one can be transformed into the other via some translation. We make several significant contributions to the study of this problem. For any \(\alpha\in\mathbb{Z}^{+}\) and \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) where \(x_{1},\ldots,x_{d}\in\mathbb{Z}^{+}\), we classify exactly which \(A\) of size \(\alpha\) can tile \(C\). More specifically, we give an efficient1 method for enumerating all elements of \(\mathcal{T}(\alpha,C)\), where \((A,B)\in\mathcal{T}(\alpha,C)\) if and only if
Footnote 1: By efficient, we mean polynomial time with respect to \(|\mathcal{T}(\alpha,C)|\).
1. \(A,B\subset\mathbb{Z}^{d}\) 3. \(|A|=\alpha\)
2. \(A+B=C\) 4. \(|C|=\alpha|B|\)
and where we assume \((A,B)\) is some canonical representative (to be formally defined later) of the class of all \((A^{\prime},B^{\prime})\) such that \(A\) is congruent to \(A^{\prime}\).
We then study the extremal question as to the the growth rate of \(\max_{\alpha,C}[|\mathcal{T}(\alpha,C)|]\) with respect to \(|C|\). The trivial bounds for this value, even for restricted classes of \(C\) (such as \(C=[n]\) for \(d=1\)), are very poor, with a lower bound of roughly \(\log n\) and an upper bound of \(n\) choose \(n/2\). We improve these bounds in the case of \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) to the relatively tight upper and lower bounds of
\[n^{\frac{(1+\alpha)\log n}{\log\log n}}\]
and \(n^{\omega(1)}\) respectively (where \(n=|C|\)).
We use analysis from the case where \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) to set the ground work for the extension of the aforementioned results to general \(C\). We do this by defining a mapping that we conjecture injecitly maps tilings from \(\mathcal{T}(\alpha,C)\) to those in \(\mathcal{T}(\alpha,[n])\). Proving this would not only prove that \(|\mathcal{T}(\alpha,C)|\leq|\mathcal{T}(\alpha,[n])|\), but would also allow for efficient (relative to \(\mathcal{T}(\alpha,[n])\)) enumeration of the elements of \(\mathcal{T}(\alpha,C)\). Lastly, we generalize the definition of the sumset cover problem to multisets. As in the case of general \(C\), we define a mapping that we conjecture injecitly maps coverings in this more general setting to tilings of \(\mathcal{T}(\alpha,[n])\).
## 1 Introduction
Translationally tiling \(\mathbb{Z}^{d}\) is a natural problem that has been studied in many prior works [3, 5, 7], including from a variety of more restricted angles such as in the case of \(d=1\)[2, 6, 8]. While the vast majority of work in this area seeks to tile the infinite set \(\mathbb{Z}^{d}\), Bodini and Rivals [1] initiated the study of tiling finite intervals of \(\mathbb{Z}\) by outlining necessary and sufficient conditions for such tilings. In this work, we progress the study of tiling finite subsets of \(\mathbb{Z}^{d}\) by allowing control over \(|A|\), bounding the number of such tilings, and generalizing these results to arbitrary \(d\).
To enumerate the elements of \(\mathcal{T}(\alpha,C)\) and bound its size, we begin by defining two functions for counting the tilings in the restricted case of \(C=[n]\) (but for any \(\alpha\)). Each gives us the value of \(|\mathcal{T}(\alpha,[n])|\) for any \(\alpha,n\in\mathbb{Z}^{+}\) and the proof of correctness for each also allows each function to be used as an efficient method for the enumeration of the elements of \(\mathcal{T}(\alpha,[n])\). Each of these functions has distinct benefits. The first function is useful as it is easier to prove correctness for directly and because it yields our upper bound in a
fairly straightforward manner. The second function is less direct in its approach to counting \(|\mathcal{T}(\alpha,[n])|\) and has an "inclusion-exclusion based" format that more naturally allows for our novel lower bound. While a \(\log n\) style lower bound can be achieved by simply considering the family of cases such that \(C=[2^{k}]\) and \(\alpha=2^{k/2}\), it appears hard to do much better with any basic idea. By analyzing the second function, we are able to prove a surprisingly fast growing lower bound on \(|\mathcal{T}(\alpha,[n])|\) of \(n^{\omega(1)}\) for infinitely many \(n\).
While the aforementioned functions are initially defined and analyzed with respect to the restricted case of \(C=[n]\), we are then able to extend this analysis to the case of \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) for \(x_{1},\ldots,x_{d}\in\mathbb{Z}^{+}\) and any \(\alpha\). More specifically, we are able to fully characterize the tilings of \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) by a tile of size \(\alpha\), as well as proving that increasing \(d\) beyond \(d=1\) does not increase the upper bound on the number of tilings of \(C\) compared to the number of tilings of \([n]\) by a tile of size \(\alpha\).
Additionally, we conjecture that similar extensions of the results in the \(C=[n]\) case to general non-contiguous \(C\) and multisets \(C\) are possible and make partial progress towards these results. With regards to the these cases, we define a mapping we poset is injective (but not necessarily surjective) function from \(\mathcal{T}(\alpha,[n])\) to \(\mathcal{T}(\alpha,C)\) for any multiset \(\mathbb{C}\) with elements in \(\mathbb{Z}^{d}\) and prove a simple structural lemma to begin building towards a proof of this conjecture. If true, this mapping would serve as a tool for the efficient enumeration of \(\mathcal{T}(\alpha,C)\) as well as proving that \(|\mathcal{T}(\alpha,C)|\leq|\mathcal{T}(\alpha,[n])|\).
## 2 Definitions
For integers \(n\), \(x\), and \(y\), we let \([n]\triangleq\{1,\ldots,n\}\) and \([x,y]\triangleq\{x,x+1,x+2,\ldots,y-1,y\}\) where \(x\leq y\). We use \(x|y\) to mean \(x\) divides \(y\). Let the divisor function \(\sigma_{0}(n)\) for \(n\in\mathbb{Z}\) equal the number of positive divisors of \(n\). Throughout this work we use \(\log\) to mean \(\log_{2}\). When we take the max or min of a set of sets, we take the max or min respectively from the union over all elements of the set (e.g. \(\max\bigl{[}\bigl{\{}\{1,2\}\bigl{\}}\{5,6\}\bigr{\}}\bigr{]}=6\)). Similarly, if we use a set of sets \(S\) in a set difference operation, we treat \(S\) as the union of its elements. Let \(\operatorname{proj}_{i}(x)\) be the projection of \(x\in\mathbb{Z}^{d}\) onto the \(i^{\text{th}}\) dimension (i.e. \(\operatorname{proj}_{i}(x_{1},\ldots,x_{i},\ldots,x_{d})=x_{i}\)). Further, we let \(\operatorname{proj}_{i}(A)\) for a set \(A\) equal the set \(\{\operatorname{proj}_{i}(x)|x\in A\}\).
Formally a translational tiling of \(C\subset\mathbb{Z}^{d}\) is a pair \((A,B)\) such that
1. \(A,B\subset\mathbb{Z}^{d}\) \(A+B=C\) \(A+B=C\) \(A+B=C\) \(A+B=C\) \(A+B=A\)
where we refer to \(A\) as the **tile** and \(B\) as the **translations**. In this work, we group tilings into sets based upon both \(|A|\) and by their congruence class. We consider the congruence class of a tiling \((A,B)\) to be the set of all tilings \((A^{\prime},B^{\prime})\) such that \(A+m=A^{\prime}\) for some \(m\in\mathbb{Z}^{d}\) (we would also then refer to \(A\) and \(A^{\prime}\) as themselves congruent). Thus, to simplify notation and remove congruence classes from the rest of the discussion, we define the **canonical representative** of each congruence class of tilings to be \((A,B)\) such that \(\min[B]=(0,\ldots,0)\). For the remainder of the paper, we presume all tilings are the canonical representative of the congruence class of tilings to which they belong. As to \(|A|\), we define \(\mathcal{T}(\alpha,C)\) to be the set of all \((A,B)\) that tile \(C\) and have \(|A|=\alpha\). For \(C\subset\mathbb{Z}^{d}\) we use \(\mathcal{T}((\alpha_{1},\alpha_{2},\ldots,\alpha_{d}),C)\) to mean tilings of \(C\) such that \(|\operatorname{proj}_{i}(A)|=\alpha_{i}\).
For \(d=1\) and any \(\mathcal{T}(\alpha,C)\), let \(P_{j}\) be the \(j^{\text{th}}\) solution \((A_{j},B_{j})\), where we define a total order of solutions by \(P_{i}<P_{j}\) if and only if \(\min[A_{i}\setminus A_{j}]<\min[A_{j}\setminus A_{i}]\). To see that this is in fact a total order first notice that, if \(i\neq j\), then \(A_{i}\neq A_{j}\), as there is a unique \(B\) for tiling \(C\) with translates of any fixed \(A\). For the rest of the properties of a total order, we can see \(A_{i}\) as corresponding to an integer that is the sum of \(2^{k}\) for all \(k\in A_{i}\). Thus, this being a total order follows from any subset of the integers being totally ordered by their value. To extend this to \(d>1\), we use the same ordering as when \(d=1\) based on elements of \(A_{i}\) and \(A_{j}\) in \((1\times\ldots\times x_{k}\times\ldots\times 1)\), where \(k\) is the smallest \(k\) such that \(A_{i}\cap(1\times\ldots\times x_{k}\times\ldots\times 1)\neq A_{j}\cap(1\times \ldots\times x_{k}\times\ldots\times 1)\). We call a proposed tiling \((A,B)\) a **valid** tiling of \(C\) if and only if \(\exists j[P_{j}\in\mathcal{T}(\alpha,C):P_{j}=(A,B)]\) and invalid otherwise. We use \(\beta\) to mean \(|B|\).
The remaining definitions in this section are purely with respect to finite intervals of \(\mathbb{Z}\) (i.e \(C\) such that \(d=1\)). We define \(a_{(i,j)}\) to denote the \(i^{\text{th}}\) smallest element of \(A_{j}\). We will usually drop the subscript \(j\) from this and other notation when it is clear from context or irrelevant (e.g. writing \(a_{i}\) as opposed to \(a_{(i,j)}\)). We define \(b_{(i,j)}\) similarly to \(a_{(i,j)}\), but with reference to \(B\) instead of \(A\).
**Definition 1**.: _For some \(C\), we define the **first segment** and first **rift of the \(j^{\text{th}}\) solution** (i.e. \(s_{(1,j)}\) and \(r_{(1,j)}\) respectively) to be:_
* \(s_{1}\triangleq\{x\in A:x<\min[C\setminus A]\}\)__
* \(r_{1}\triangleq\{x\in C\setminus A:x<\min[A\setminus s_{1}]\}\)_._
_As mentioned prior, we will often drop the subscript \(j\) when which solution we are referencing is arbitrary or clear from context. For \(i>1\), we define the \(i^{\text{th}}\) segment and \(i^{\text{th}}\) rift of the \(j^{\text{th}}\) partition_ (i.e. \(s_{(i,j)}\) and \(r_{(i,j)}\) respectively) recursively as follows:_
* \(s_{i}\triangleq\big{\{}x\in A:\max[s_{i-1}]<x<\min[(C\setminus A)\setminus\{y \in C:y\leq\max[s_{i-1}]\}]\big{\}}\)__
* \(r_{i}\triangleq\Bigg{\{}x\in C\setminus A:\max[r_{i-1}]<x<\min\bigg{[}A \setminus\Big{(}\bigcup\limits_{k=1}^{i-1}s_{k}\Big{)}\bigg{]}\Bigg{\}}\)_._
To put the above more intuitively, the \(i^{\text{th}}\) segment of a partition \(P_{j}\) is the \(i^{\text{th}}\) set of consecutive (relative to \(C\)) elements of \(A_{j}\) where as the the \(i^{\text{th}}\) rift of a partition \(P_{j}\) is the elements of \(C\) between \(s_{i}\) and \(s_{i+1}\). We define \(S_{j}\) and \(R_{j}\) to be the set of all non-empty \(s_{(i,j)}\) and \(r_{(i,j)}\) respectively. For example, let \(A=\{1,2,5,6,9,10\}\), \(B=\{0,2\}\) and \(C=[12]\). Then \(s_{1}=\{1,2\}\), \(s_{2}=\{5,6\}\), and \(s_{3}=\{9,10\}\) while \(r_{1}=\{3,4\}\) and \(r_{2}=\{7,8\}\). The aforementioned segments and rifts would then be exactly the elements of \(S\) and \(R\) respectively, as all other segments and rifts are empty in this example. Let \(k_{(s,i)}\triangleq|s_{(1,i)}|\) and \(k_{(r,i)}\triangleq|r_{(1,i)}|\). We define \(\mathcal{T}(\alpha,C,(k_{s},\cdot))\) to be
\[\mathcal{T}(\alpha,C,(k_{s},\cdot))\triangleq\{(A_{i},B_{i})\in\mathcal{T}( \alpha,C):|s_{(1,i)}|=k_{s}\}\]
and define \(\mathcal{T}(\alpha,C,(k_{s},\cdot))\) to be
\[\mathcal{T}(\alpha,C,(k_{s},k_{r}))\triangleq\{(A_{i},B_{i})\in\mathcal{T}( \alpha,C):(|s_{(1,i)}|=k_{s})\wedge(|r_{(1,i)}|=k_{r})\}.\]
## 3 Functions for Enumerating \(\mathcal{T}(\alpha,[n])\)
In this section, we define two functions for counting the exact number of distinct tilings for any \(\alpha\) and \(C=[n]\). To achieve this, we do two things:
* Define necessary and sufficient conditions as to the size of segments and rifts in valid tilings.
* Group points into _meta-points_ such that each element of the meta-point is in the same segment or rift as each other element of the meta-point.
Taken together, we are able to calculate \(|\mathcal{T}(\alpha,[n])|\) by taking the sum of a small number of \(|\mathcal{T}(\alpha^{\prime},[n^{\prime}])|\) for \(n^{\prime}<n\) where \(\alpha^{\prime}\) and \(n^{\prime}\) are straightforward to calculate from \(\alpha\) and \(n\). The first recursion is more transparently connected to these concepts, but less useful for lower bound analysis then the second recursion which we later derive from the first. The proof of the correctness of the first function proceeds in the following manner:
* Prove that the restrictions to the size of the first segment and rift (i.e. \(k_{s}\) and \(k_{r}\) respectively) and the sub-cases we sum over based upon these these values are sufficient to yield a valid tiling. This proves that our function acts as a lower bound to \(|\mathcal{T}(\alpha,[n])|\).
* Prove that the restrictions to the size of the first segment and rift (i.e. \(k_{s}\) and \(k_{r}\) respectively) and the sub-cases we sum over based upon these these values are necessary to yield a valid tiling. This proves that our function acts as an upper bound to \(|\mathcal{T}(\alpha,[n])|\).
As the value produced by our function is both an upper and lower bound on \(|\mathcal{T}(\alpha,[n])|\), it follows that it calculates the exact value of \(|\mathcal{T}(\alpha,[n])|\). We can now define the first function in full detail.
**Definition 2**.: _For \(\mathcal{S}=\{k_{s}\in\mathbb{Z}^{+}:k_{s}|\alpha\}\) and_
\[\mathcal{R}_{k_{s}}=\{k_{r}\in\mathbb{Z}^{+}:(k_{s}|k_{r})\wedge(k_{s}+k_{r}|k_{ s}\beta)\wedge\big{(}(k_{s}=\alpha)\Longleftrightarrow(k_{r}=0)\big{)}\}\]
_we define the set \(\Psi_{(\alpha,[n],(k_{s},k_{r}))}\) to be_
\[\Psi_{(\alpha,[n],(k_{s},k_{r}))}\triangleq\begin{cases}\alpha\nmid n,&0\\ k_{r}=0,&1\\ \text{Otherwise},&\left|\mathcal{T}\bigg{(}\alpha/k_{s},\left[\frac{n}{k_{s}+k_ {r}}\right]\bigg{)}\right|-\left|\mathcal{T}\bigg{(}\alpha/k_{s},\left[\frac{ n}{k_{s}+k_{r}}\right],(1,\cdot)\bigg{)}\right|\end{cases}\]
Using this definition, we prove that the following method can be used to count the tilings of \([n]\) by sets of size \(\alpha\).
**Lemma 1**.: \[|\mathcal{T}(\alpha,[n])|=\sum_{k_{s}\in\mathcal{S}}\ \sum_{k_{r}\in\mathcal{R}_{k_{s}}} \Psi_{(\alpha,[n],(k_{s},k_{r}))}.\]
Proof.: Lemma 2 shows that that this sum lower bounds the size of \(|\mathcal{T}(\alpha,[n])|\), while Lemma 3 and Lemma 4 combine to show that this sum upper bounds the size of \(|\mathcal{T}(\alpha,[n])|\).
**Lemma 2**.: \[|\mathcal{T}(\alpha,[n])|\geq\sum_{k_{s}\in\mathcal{S}}\ \sum_{k_{r}\in\mathcal{R}_{k_{s}}} \Psi_{(\alpha,[n],(k_{s},k_{r}))}.\]
Proof.: The definition of \(\Psi_{(\alpha,[n],(k_{s},k_{r}))}\) gives us three cases, which are where its value equals either \(0\), \(1\), or in which its value is based on \(|\mathcal{T}(\alpha^{\prime},[n^{\prime}])|\) (where the negative term can be seen as subtracting away the case where \(k_{s}=1\)). The first case, where \(|\mathcal{T}(\alpha,C)|=0\), need not be handled for the lower bound, as such cases only reduce the value of the sum. For \(|\mathcal{T}(\alpha,C)|=1\), as \(k_{r}=0\) and \(\alpha|n\), we can always set \(A=[\alpha]\) and \(B=\{x\cdot\alpha:x\in[0,\beta-1]\}\), resulting in \(A+B=C\). Lastly, we handle the case where the value of \(\Psi_{(\alpha,[n],(k_{s},k_{r}))}\) is based upon \(|\mathcal{T}(\alpha^{\prime},[n^{\prime}])|\). To address this case, we define an injective mapping
\[f:\mathcal{T}(\alpha/k_{(s,i)},[n/(k_{(s,i)}+k_{(r,i)})])\to\mathcal{T}( \alpha,[n],(k_{(s,i)},k_{(r,i)}))\]
where \(k_{(r,i)}\) and \(k_{(s,j)}\geq 2\). We abuse notation slightly and write \(f(A_{j})=A_{i}\) (or \(f(B_{j})=B_{i}\)) if and only if \(f(P_{j})=P_{i}\) where \(P_{i}=(A_{i},B_{i})\) and \(P_{j}=(A_{j},B_{j})\). To construct such an \(f\), we map \(P_{j}\) to \(P_{i}\) such that \(m\in A_{j}+B_{j}\) if and only if \([(m-1)(k_{(s,i)}+k_{(r,i)})+1,m(k_{(s,i)}+k_{(r,i)})]\in A_{i}+B_{i}\). We call this the _key property_ of \(f\). The main idea behind the key property is that, as \(P_{i}\) tiles a larger set than \(P_{j}\), we can expand each point of \(A_{j}\) to be multiple consecutive points in \(A_{i}\).2
Footnote 2: The opposite direction also holds in that on can compress consecutive points in \(A_{i}\) down to single points of \(A_{j}\), but this follows from the upper bound, not the lower bound
First, we define \(f\) and show it is injective. Let \(P_{j}\) be an arbitrary valid tiling in \(\mathcal{T}(\alpha/k_{(s,i)},[n/(k_{(s,i)}+k_{(r,i)})])\) such that \(k_{(s,j)}\geq 2\). As \(P_{j}\) is arbitrary, maintaining the key property forces us to make sure that \([2(k_{(s,i)}+k_{(r,i)})]\subset A_{i}+B_{i}\) for any \(A_{i}\) and \(B_{i}\) such that \(f_{P_{j}}=(A_{i},B_{i})\) for some \(j\). Thus, let \([k_{(s,i)}]\) and \([k_{(s,i)}+k_{(r,i)}+1,2k_{(s,i)}+k_{(r,i)}]\) both be subsets of \(A_{i}\). Given these facts about \(A_{i}\), it follows that \(B^{*}=\{x\cdot k_{(s,i)}:x\in[0,k_{(r,i)}/k_{(s,i)}]\}\) is a subset of \(B_{i}\). This gives us that \([2(k_{(s,i)}+k_{(r,i)})]\subset A_{i}+B_{i}\) as desired. For other \(m\in A_{j}\), we let \([(m-1)(k_{(s,i)}+k_{(r,i)})+1,(m-1)(k_{(s,i)}+k_{(r,i)})+k_{(s,i)}]\) be in \(f(A_{j})=A_{i}\) to ensure that key property is maintained. To see this, notice that this implies that \(B^{*}+[(m-1)(k_{(s,i)}+k_{(r,i)})+1,(m-1)(k_{(s,i)}+k_{(r,i)})+k_{(s,i)}]=[(m-1)( k_{(s,i)}+k_{(r,i)})+1,m(k_{(s,i)}+k_{(r,i)})]\subset A_{i}+B_{i}\) as desired. For all \(b_{(\ell,j)}\in B_{j}\), we have \(A_{j}+b_{(\ell,j)}\subset A_{j}+B_{j}\) by definition. Let \(m\) be in \(A_{j}+b_{(\ell,j)}\) where \(b_{(\ell,j)}\neq 0\). Notice, that by adding
\[B^{*}+\left\{\frac{b_{\ell,j}(k_{(s,i)}+k_{(r,i)})}{k_{(s,i)}}\right\}\]
to \(f(B_{j})=B_{i}\), we get that \([(m-1)(k_{(s,i)}+k_{(r,i)})+1,m(k_{(s,i)}+k_{(r,i)})]\subset A_{i}+B_{i}\) as desired.
To give an example of the above using \(A_{i}+B_{i}=[24]\) and \(A_{j}+B_{j}=[48]\), the tiling \(P_{(k^{\prime},j)}=(\{1,3,9,11\},\{0,1,12,13\})\) would map to
\[P_{(k,i)}=(\{1,2,5,6,17,18,21,22\},\{0,2,24,26\})\]
via \(f\). In order to prove that the sum from Lemma 1 (along with the given definitions for \(\mathcal{S}\) and \(\mathcal{R}_{k_{s}}\)) act as an upper bound to the number distinct tilings of \(C=[n]\), we require the following definition.
**Definition 3**.: _We refer to blocks of \(k_{s}+k_{r}\) consecutive elements as **meta-points** of which \([k_{s}+k_{r}]\) is the first. Further, we use the terminology of segments to refer to consecutive sets of meta-points in \(C\) and refer to these as **meta-segments** (i.e. \(s_{i}^{*}\)). We extend the idea of rifts to **meta-rifts** (i.e. \(r_{i}^{*}\)) similarly. More formally we have that the \(x^{th}\) meta-point of \(C\) (i.e. \(x^{*}\)) is equal to the set \([(x-1)(k_{s}+k_{r})+1,x(k_{s}+k_{r})]\) and that_
* \(s_{1}^{*}\triangleq\big{\{}x^{*}\subset A:\max[x^{*}]<\min[C\setminus A]\big{\}}\)__
* \(r_{1}^{*}\triangleq\Big{\{}x^{*}\subset C\setminus A:\max[x^{*}]<\min\bigl{[} A\setminus s_{1}^{*}\bigr{]}\Big{\}}\)_._
* \(s_{i}^{*}\triangleq\bigg{\{}x^{*}\subset A:\max[s_{i-1}^{*}]<x<\min\Bigl{[}(C \setminus A)\setminus\big{\{}y\in C:y\leq\max[s_{i-1}^{*}]\big{\}}\Bigr{]} \bigg{\}}\)__
* \(r_{i}^{*}\triangleq\bigg{\{}x^{*}\subset C\setminus A:x^{*}\subset\left(\max [r_{i-1}^{*}],\min\biggl{[}A\setminus\Bigl{(}\bigcup\limits_{k=1}^{i-1}s_{k}^ {*}\Bigr{)}\biggr{]}\right)\bigg{\}}\)_._
**Lemma 3**.: _Suppose that \(\alpha|n\), \(k_{s}\in\mathcal{S}\) and \(k_{r}\in\mathcal{R}_{k_{s}}\setminus\{0\}\). Then it follows that_
\[\Psi_{(\alpha,[n],(k_{s},k_{r}))}=\bigg{|}T\bigg{(}\alpha/k_{s},\left[\frac{n} {k_{s}+k_{r}}\right]\bigg{)}\bigg{|}-\bigg{|}T\bigg{(}\alpha/k_{s},\left[\frac {n}{k_{s}+k_{r}}\right],(1,\cdot)\bigg{)}\bigg{|}=\big{|}\mathcal{T}(\alpha,[ n],(k_{s},k_{r}))\big{|}\big{|}.\]
_In addition, no selection of \(k_{s}\) and \(k_{r}\) such that \(k_{s}\not\in\mathcal{S}\) and \(k_{r}\not\in\mathcal{R}\) has any valid tilings associated with them._
Proof.: Once a \(k_{s}\) and \(k_{r}\) have been selected and knowing that \(r_{1}\neq\emptyset\), the only way to tile \(r_{1}\) with translates of elements of \(A\) is with translates of elements of \(s_{1}\). As elements of \(s_{1}\) are consecutive as are those of \(r_{1}\), the only way to do this is as in Lemma 2 (i.e. by defining \(B^{*}\) to be \(\{x\cdot k_{s}|x\in[0,k_{r}/k_{s}]\}\) and let \(B^{*}\) be a subset of \(B\)). This also justifies the restriction of \(\mathcal{R}\) that \(k_{s}|k_{r}\), as otherwise tiling \(r_{1}\) with translates of \(s_{1}\) would not be possible (example: for \(s_{1}=\{1,2\}\) and \(s_{2}=\{x,x+1\}\), the number of elements in \(r_{1}=[3,x-1]\) must be divisible by \(2\)).
Now we prove that \(\forall i[(|s_{i}|\neq 0)\implies(|s_{i}|=k_{s})]\). By definition, \(|s_{1}|=k_{s}\). Suppose this is the case for all \(s_{j}\) such that \(j<i\). Consider \(s_{i}\). If \(|s_{i}|=0\) the statement holds. Suppose then that \(|s_{i}|\neq 0\) and that \(|s_{i}|>k_{s}\). Then, as \(k_{s}\) is an element of \(B^{*}\), we have that \(s_{i}\cap(s_{i}+k_{s})\neq\emptyset\) which is a contradiction. Suppose \(|s_{i}|<k_{s}\). Let \(I=\left[\max[s_{i}]+1,\min[s_{i}+k_{s}]-1\right]\) and observe that \(1\leq|I|<k_{s}\). The lower bound on \(|I|\) follows directly from \(|s_{i}|<k_{s}\) and the translation of \(s_{i}\) by \(k_{s}\), while the upper bound on \(|I|\) follows from the fact that \(|s_{i}|>0\) and the fact that we are taking the max from \(s_{i}\) for lower bound \(I\), but we are taking the min from \(s_{i}+k_{s}\) the upper bound \(I\). Taken together, with the fact that \(|s_{i}|>0\), we have that \(|I|<k_{s}\). Notice that introducing any element to \(B\) smaller then any element of \(B^{*}\) would result in a collision between translates of \(s_{1}\). Given this, \(I\) must be tiled by some translate of \(s_{j}\) for \(j<i\), but \(|s_{j}|=k_{s}>|r^{*}|\) and thus, we have a contradiction. Together, these prove that \(\forall i[(|s_{i}|\neq 0)\implies(|s_{i}|=k_{s})]\) as desired. This also justifies the first restriction of \(\mathcal{S}\), as \(A\) is made up of segments, so if each segment has cardinality \(k_{s}\), then it must be the case that \(k_{s}|\alpha\).
From the above, we can conclude that \((s_{1}\cup s_{2})+B^{*}\) must tile exactly \([2(k_{s}+k_{r})]\) in any valid partition as done in Lemma 2. For example, if \(s_{1}=\{1,2\}\) and \(s_{2}=\{7,8\}\), we know that \(\{0,2,4\}\subset B\) as these are necessary to tile \(r_{1}=[3,6]\) with translates of \(s_{1}\). These elements of \(B\) then also sum with the elements \(s_{2}\) so that \((s_{1}\cup s_{2})+\{0,2,4\}=[12]\). Unless \(C=[12]\), there are two possible ways the next elements of \(C\) (i.e. \(\{13,14\}\)) can be tiled. Either these elements are in \(s_{3}\) or they are tiled by further translates of \(s_{1}\). As we will see below, this decision for \(C=[n]\) in this example ends up being akin to the choice of whether or not to include \(3\) in \(A\) for \(C=[n/(k_{s}+k_{r})]=[n/6]\) (and with the size of \(A\) reduced by a factor of \(k_{s}=2\)).
If \(n=2(k_{s}+k_{r})\), we have found the unique valid partition for this \(k_{s}\) and \(k_{r}\). In terms of meta-points, this case corresponds to tiling the set \(\{1^{*},2^{*}\}\), where \(1^{*}=[k_{s}+k_{r}]\) and \(2^{*}=[k_{s}+k_{r}+1,2(k_{s}+k_{r}]\). Consider the case of \(n=\ell(k_{s}+k_{r})\) for \(\ell>2\). There are two ways to tile \(2(k_{s}+k_{r})+1\) in \(A+B\). Either \(2(k_{s}+k_{r})+1\in s_{3}\) or \(2(k_{s}+k_{r})+1\in s_{1}+b_{i}\) for some \(b_{i}\in B\). It cannot be the case that \(2(k_{s}+k_{r})+1\in s_{2}+b_{i}\) for some \(b_{i}\in B\), as this would imply that \((s_{1}+b_{i})\cap[2(k_{s}+k_{r})]\neq\emptyset\) which would not yield a valid tiling. Suppose \(2(k_{s}+k_{r})+1\in s_{3}\). It follows that \((s_{1}\cup s_{2}\cup s_{3})+B^{*}=[3(k_{s}+k_{r})]\). The number of times we repeat this process determines \(|s_{i}^{*}|\), as each such decision to add \(s_{i}\) for a new \(i\) as soon as possible essentially adds one new point to the first meta-segment. Suppose \(2(k_{s}+k_{r})+1\in s_{1}+b_{i}\). It follows that \(\min[s_{3}]>4(k_{s}+k_{r})\). This is because the number of elements between \(s_{1}+b_{i}\) and \(s_{2}+b_{i}\) is \(2(k_{s}+k_{r})-k_{s}\), but \(s_{3}+B^{*}\) is a set of \(2(k_{s}+k_{r})\) consecutive elements. Thus, \(s_{3}\) (and by extension, \(s_{3}+B^{*}\)) cannot appear until at least \(4(k_{s}+k_{r})+1\). Thus, this decision of how to tile \(2(k_{s}+k_{r})+1\) leads to a meta-point being added to a meta-rift.
The choice between the two options outlined above as to how to tile \(2(k_{s}+k_{r})+1\) is repeated once every \(k_{s}+k_{r}\) elements. Thus, the solutions as to how to tile \([n]\) are exactly the ways to tile \([n/(k_{s}+k_{r})]\) with a valid tiling for which \(k_{(s,j)}\geq 2\) (which is accounted for by the negative term with \(k_{(s,j)}\) fixed to \(1\)). To justify the second restriction (i.e. \(k_{s}+k_{r}|k_{s}\beta\)) of \(\mathcal{R}_{k_{s}}\) notice that, due to the fact that segments are of length \(k_{s}\) and the definition of \(B^{*}\), we have that \(k_{s}+k_{r}\) elements are grouped into meta-points and are either tiled or not tiled as a group. Once \(r_{1}\) is tiled by translates of \(s_{1}\), the first \(a\cdot(k_{s}+k_{r})/k_{s}\) elements of \(C\) will be tiled. Thus, it must be that \((\alpha\cdot\mathcal{T}(k_{s}+k_{r})/k_{s})|\alpha\beta\), as otherwise translates of these \(a\cdot(k_{s}+k_{r})/k_{s}\) elements could not tile \(C\). This divisibility requirement simplifies to the restriction \((k_{s}+k_{r})|k_{s}\beta\) as required. For the last restriction to elements of \(\mathcal{R}_{k_{s}}\), its necessity follows from the definition of segments and rifts.
We now handle the other two cases for \(\Psi_{(\alpha,[n],(k_{s},k_{r}))}\) relevant to the upper bound.
**Lemma 4**.: _For some \(\mathcal{T}(\alpha,[n],(k_{s},k_{r}))\), if \(\alpha\nmid n\), then \(|\mathcal{T}(\alpha,[n],(k_{s},k_{r}))|=0\). Otherwise, if \(k_{r}=0\), then \(|\mathcal{T}(\alpha,[n],(k_{s},k_{r}))|=1\)._
Proof.: If \(a\nmid n\), then \(|C|=\alpha\beta\) is impossible and the lemma holds. If \(k_{r}=0\), then \(r_{1}=\emptyset\) and \(A=[\alpha]\). Consider trying to change \(B\) from \(B=\{x\cdot\alpha:x\in[0,b-1]\}\) as defined in Lemma 2. We attempt to do this via induction on the elements of \(B\). The base case would be to change \(0\), but this is not allowed by the definition of \(\mathcal{T}\). Suppose that \(b_{i}\) is the first element that should be adjusted and assume without loss of generality that we cannot reduce it below \(b_{i-1}+1\) or increase it to be greater then \(b_{i+1}-1\). If we increase \(b_{i}\), then \(b_{i}+1\) is no longer in \(A+B\) which is a contradiction. If we decrease \(b_{i}\), then \(A+b_{i-1}\cap A+b_{i}\neq\emptyset\). Thus, \(b_{i}\) cannot be changed while still yielding a valid tiling.
While the function from Lemma 1 is the one we use to prove our upper bound, it is not in a convenient form for the purpose of lower bound analysis. Thus, we give an alternative function that we prove to be equivalent and which we use to prove our lower bound.
**Lemma 5**.: _Let \(\mathcal{P}_{(n,k)}\) be the set of products of \(k\) distinct prime divisors of \(n\)._
\[|\mathcal{T}(\alpha,[n])|=\sum_{k\in[n]}\sum_{v\in\mathcal{P}_{(n,k)}}(-1)^{k +1}\bigg{(}|\mathcal{T}(\alpha,[n/v])|+|\mathcal{T}(\alpha/v,[n/v])|\bigg{)}.\]
Proof.: Let \(k_{(s,i)}\) and \(k_{(r,i)}\) be the size of the first segment and rift of the sumset tile \((A_{i},B_{i})\). We prove this by showing this recursion's equivalence to the recursion from Lemma 1. We break the tilings from Lemma 1 into two case: tilings of \(\mathcal{T}(\alpha,[n])\) such that \(k_{s}>1\) and tilings such that \(k_{s}=1\). We define a function \(f_{1}\) that takes as input \((\alpha,[n])\) and an element of \(\mathcal{T}(\alpha,[n/v])\), and outputs an element of \(\mathcal{T}(\alpha,[n])\) such that \(v|(k_{s}+k_{r})\) and \(k_{s}=1\). Further we prove that, for any \((A_{i},B_{i})\in\mathcal{T}(\alpha,[n])\) such that \(v|(k_{(s,i)}+k_{(r,i)})\) and \(k_{(s,i)}=1\), there exists a unique \((A_{j},B_{j})\in\mathcal{T}(\alpha,[n/v])\) such that \(f_{1}((\alpha,[n]),(A_{j},B_{j}))=(A_{i},B_{i})\). Similarly, we define a function \(f_{2}\) that takes as input \((\alpha,[n])\) and an element of \(\mathcal{T}(\alpha/v,[n/v])\), and outputs an element of \(\mathcal{T}(\alpha,[n])\) such that \(v|k_{s}\). Further we prove that, for any \((A_{i},B_{i})\in\mathcal{T}(\alpha,[n])\) such that \(v|k_{(s,i)}\), there exists a unique \((A_{j},B_{j})\in\mathcal{T}(\alpha/v,[n/v])\) such that \(f_{2}((\alpha,[n]),(A_{j},B_{j}))=(A_{i},B_{i})\).
We begin with \(f_{1}\). For any \((A_{j},B_{j})\in\mathcal{T}(\alpha,[n/v])\) such that \(f_{1}((\alpha,[n]),(A_{j},B_{j}))=(A_{i},B_{i})\), we let \(A_{i}=\{x:x/v\in A_{j}\}\) and \(B_{i}=\{y:y/v\in B_{j}\}+[0,v]\). \(k_{(s,i)}=1\) follows from the fact that \(v\geq 2\). As for \(d|(k_{(s,i)}+k_{(r,i)})\), one can see from the definition of \(B_{i}\) that the first rift must end at some multiple
of \(v\) which implies that \(v|(k_{(s,i)}+k_{(r,i)})\). For any \((A_{i},B_{i})\in\mathcal{T}(\alpha,[n])\) such that \(v|(k_{(s,i)}+k_{(r,i)})\) and \(k_{(s,i)}=1\), it follows from Lemma 1 that the single point segments of \((A_{i},B_{i})\) occur only at positions \(t\) such that \(p_{i}|(t-1)\) (this follows from the division of \(n\) by \(k_{s}+k_{r}\) in the last case of the definition of \(\Psi_{(\alpha,[n],(k_{s},k_{r}))}\)). Thus, \(f_{1}\) is invertable and the claim follows. As for \(f_{2}\), for any \((A_{j},B_{j})\in\mathcal{T}(\alpha/v,[n/v])\) such that \(f_{1}((\alpha,[n]),(A_{j},B_{j}))=(A_{i},B_{i})\), we let \(A_{i}=\{x:\lfloor x/v\rfloor\in A_{j}\}\) and \(B_{i}=\{y:y/v\in B_{j}\}\). \(v|k_{(s,i)}\) follows from the fact that \(\lfloor x/v\rfloor\) has the same value for every \(v\) consecutive values of \(x\). For any \((A_{i},B_{i})\in\mathcal{T}(\alpha,[n])\) such that \(v|k_{(s,i)}\), it follows from Lemma 1 that every segment and rift is divisible by \(v\). Thus, \(f_{2}\) is invertable and the claim follows.
Let \(\{p_{1},\ldots,p_{m}\}\) be the prime divisors of \(n\). Notice that \(f_{1}\) and \(f_{2}\) map to disjoint subsets of \(\mathcal{T}(\alpha,[n])\), but that the union of their codomains on inputs from \(\mathcal{T}(\alpha,[n/p_{i}])\) and \(\mathcal{T}(\alpha/p_{i},[n/p_{i}])\) respectively for all \(i\in[m]\) is exactly \(\mathcal{T}(\alpha,[n])\). The issue then in simply summing the size of these codomains is that an element of \(\mathcal{T}(\alpha,[n/p_{i}])\) and an element of \(\mathcal{T}(\alpha,[n/p_{j}])\) for \(i\neq j\) may map to the same element \((A_{z},B_{z})\in\mathcal{T}(\alpha,[n])\) by \(f_{1}\). By the definition of \(f_{1}\), this would imply that \(p_{i}p_{j}|(k_{(s,z)}+k_{(r,z)})\) and \(k_{(s,z)}=1\), which means we can remove the over counting by subtracting cases for which \(v\) is composed of \(2\) distinct prime factors of \(n\) (though now we may be under counting). More generally, we can apply the inclusion-exclusion principle with respect to the number of prime factors of \(v\) to arrive at an exact count as desired.
## 4 Upper and Lower Bound Calculations
In this section, we give an upper bound utilizing the recursion from Lemma 1. We then give a more detailed lower bound that holds for infinitely many \(n\) by analyzing Lemma 5.
**Corollary 1**.: \(\mathcal{T}(\alpha,[n])=\mathcal{T}(\beta,[n])\)_._
Proof.: Let \(P_{j}=(A,B)\) be a tiling of \([n]\) such that \(|A|=\alpha\) and \(|B|=\beta\). Define \(P^{\prime}_{j}\) to be \((A^{\prime},B^{\prime})\), where \(A^{\prime}=B+\{1\}\) and \(B^{\prime}=A-\{\min[A]\}\). Notice that \(P^{\prime}_{j}\) is a tiling of \([n]\) such that \(|A^{\prime}|=\beta\) and \(|B^{\prime}|=\alpha\).
**Lemma 6**.: \(|\mathcal{T}(\alpha,[n])|\leq(\sigma_{0}(a)\cdot\sigma_{0}(\beta)-\sigma_{0}( \alpha)-\sigma_{0}(\beta)+2)^{\log n}\)_._
Proof.: By Corollary 1, we can assume without loss of generality that \(\alpha\geq\beta\). We prove that the number of \(P_{j}\) that do not violate the restrictions of Lemma 2 on \(\mathcal{S}\) and \(\mathcal{R}_{k_{s}}\) is exactly \(\sigma_{0}(\alpha)\cdot\sigma_{0}(\beta)-\sigma_{0}(\alpha)-\sigma_{0}(\beta)+2\). The number of valid choices of \(k_{s}\) is exactly \(\sigma_{0}(\alpha)\). As to \(\mathcal{R}_{k_{s}}\), we first handle the case of \(k_{s}=\alpha\). As the third restriction forces \(k_{r}=0\), this results in a single valid \(k_{r}\). Next, consider when \(k_{s}=1\). In this case, \(k_{s}|k_{r}\) for any choice of \(k_{r}\), so the first restriction is satisfied. Lastly, the second restriction simplifies to \(k_{r}+1|\beta\). The number of \(k_{r}\) that satisfy this is exactly \(\sigma_{0}(\beta)-1\), as the only divisor of \(\beta\) we cannot form with the sum \(k_{r}+1\) is \(1\).
We can now handle any remaining cases. Let \(\mathit{div}(\beta)\) be the set of divisors of \(\beta\). Notice that \(\mathit{div}(k_{s}\beta)\) is exactly \(\left(k_{s}\cdot\mathit{div}(\beta)\right)\cup\mathit{div}(k_{s})\cup\mathit{ div}(\beta)\). Due to the second restriction of \(\mathcal{R}_{k_{s}}\) (i.e. \(k_{s}+k_{r}|k_{s}\beta\)) and that fact that \(k_{s}+k_{r}>k_{s}\), we can narrow down options for \(k_{r}\) from \(\mathit{div}(k_{s}\beta)\) to \(\left(k_{s}\cdot\mathit{div}(\beta)\right)\cup\mathit{div}(\beta)\). By the first restriction of \(\mathcal{R}_{k_{s}}\), we have that \(k_{s}|k_{r}\), so we can further simplify valid options for \(k_{r}\) from \(k_{s}\cdot\mathit{div}(\beta)\cup\mathit{div}(\beta)\) to \(k_{s}\cdot\mathit{div}(\beta)\). Lastly, \(k_{r}\neq k_{s}\) and thus, the number of valid options for \(k_{r}\) equals
\[|k_{s}\cdot\mathit{div}(\beta)|-1=\sigma_{0}(\beta)-1.\]
Thus, for all \(k_{s}\) such that \(k_{s}|a\) and \(k_{s}\neq a\), we have \(\sigma_{0}(\beta)-1\) options for \(\beta\). When \(k_{s}=\alpha\), we have exactly \(1\) option for \(k_{s}\). Each of these \(\sigma_{0}(\alpha)\cdot\sigma_{0}(\beta)-\sigma_{0}(\alpha)-\sigma_{0}(\beta)+2\) sub-cases then divides \(n\) by at least \(2\). Thus, if we conservatively (for the purpose of the upper bound) assume that each subcase decreases \(n\) by a factor of \(2\), we get that
\[|\mathcal{T}(\alpha,[n])|\leq(\sigma_{0}(\alpha)\cdot\sigma_{0}(\beta)-\sigma_{0 }(\alpha)-\sigma_{0}(\beta)+2)^{\log n}\]
as desired.
By Theorem 317 of Hardy and Wright [4] which is attributed to Wigert (1907), we have that for any \(\epsilon>0\) and infinitely many sufficiently large \(n\),3
Footnote 3: The upper bound holds for all sufficiently large \(n\).
\[2^{(1-\epsilon)\log(n)/\log\log(n)}<\sigma_{0}(n)<2^{(1+\epsilon)\log(n)/\log \log(n)}.\]
Thus, for infinitely many \(n\), we have that \(\sigma_{0}(n)\sim 2^{\log(n)/\log\log(n)}\).
**Corollary 2**.: _For \(C=[n]\), all \(n\), and any \(\epsilon>0\) we have that_
\[\max_{\alpha}\bigl{[}\mathcal{T}(\alpha,C)\bigr{]}\leq(2^{\frac{(1+\epsilon) \log n}{\log\log\sqrt{n}}}-2^{\frac{\log\sqrt{n}}{\log\log\sqrt{n}}+1}+2)^{ \log n}<n^{\frac{(1+\epsilon^{\prime})\log n}{\log\log n}}.\]
Proof.: Notice that the inner \(\sigma_{0}(\alpha)\cdot\sigma_{0}(\beta)-\sigma_{0}(\alpha)-\sigma_{0}(\beta)+2\) is maximized when \(\alpha=\beta=\sqrt{n}\). By Lemma 6, we have at most \((\sigma_{0}(\sqrt{n})^{2}-2\sigma_{0}(\sqrt{n})+2)^{\log n}\) valid tilings. Using the bounds of Wigert [4] yields the statement.
This is a significant improvement over the trivial upper bound of \(\binom{n}{n/2}\).4 Next, we prove a lower bound by leveraging Lemma 5.
Footnote 4: This bound comes from setting \(a\) to be \(n/2\) at which point any \(n/2\) elements of \(C\) are chosen as a potential tilings.
**Lemma 7**.: _For \(k\in\mathbb{Z}^{+}\), we have that \(|\mathcal{T}(\alpha,[2^{k}])|>2^{0.58k}\)._
Proof.: In this case, Lemma 5 simplifies to \(|\mathcal{T}(\alpha,[2^{k}])|=|\mathcal{T}(\alpha,[2^{k-1}])|+|\mathcal{T}( \alpha/2,[2^{k-1}])|\). We proceed by induction on \(k\). For \(k=1\), setting \(\alpha\) to either \(1\) or \(2\) yields a single tiling. If \(k+1\) is even, we have
\[\mathcal{T}(2^{(k+1)/2},[2^{k+1}])|= |\mathcal{T}(2^{(k+1)/2},[2^{k}])|+|\mathcal{T}(2^{(k-1)/2},[2^{ k}])|\] \[= |\mathcal{T}(2^{(k/2]},[2^{k}])|+|\mathcal{T}(2^{(\lfloor k/2 \rfloor},[2^{k}])|\] \[= 2|\mathcal{T}(2^{\lfloor k/2\rfloor},[2^{k}])|,\]
where the last equality is due to Corollary 1. We can then apply the inductive hypothesis to yield the desired result. If \(k+1\) is odd, we have that
\[\mathcal{T}(2^{\lfloor(k+1)/2\rfloor},[2^{k+1}])|= |\mathcal{T}(2^{\lfloor(k+1)/2\rfloor},[2^{k}])|+|\mathcal{T}(2 ^{\lfloor(k+1)/2\rfloor-1},[2^{k}])|\] \[= |\mathcal{T}(2^{k/2},[2^{k}])|+|\mathcal{T}(2^{(k/2)-1},[2^{k}])|\] \[> |\mathcal{T}(2^{k/2},[2^{k}])|\]
Since the number of solutions doubles every other increase of \(k\) and never decreases when increasing \(k\), it follows that \(\max_{\alpha}[|\mathcal{T}(\alpha,[2^{k}])|]>1.5^{k}>2^{0.58k}\).
From here, we want to prove that adjusting \(n\) by increasing the number of prime factors of \(n\) can have the effect of increasing the number of valid tilings by some constant factor in the exponent. We go through this proof for the case of \(|\mathcal{T}(\alpha,[2^{k}3^{2}])|\). While this case and its associated lemmas (i.e. Lemma 8 and Lemma 9) will be generalized to much greater effect, we feel these lemmas are a useful primer to the general case. We begin with a useful lower bound on the function in Lemma 5 in the case of \(C=[2^{k}3^{2}]\) that removes the negative terms.
**Lemma 8**.: _For \(k\geq 1\) and \(\alpha\) such that \(3|\alpha\) and \(3^{2}\nmid\alpha\),_
\[|\mathcal{T}(\alpha,[2^{k}3^{2}])|\geq|\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+| \mathcal{T}(\alpha/2,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/3,[2^{k}])|.\]
Proof.: Our approach to this proof will be to expand \(|\mathcal{T}(\alpha,[2^{k}3^{2}])|\) using Lemma 5, isolate the terms in the sum \(|\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k-1}3^{2}])|+| \mathcal{T}(\alpha,[2^{k}])|\) and prove that remaining elements of the inclusion-exclusion formula are non-negative. By Lemma 5, we have that
\[|\mathcal{T}(\alpha,[2^{k}3^{2}])|= |\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k- 1}3^{2}])|+|\mathcal{T}(\alpha,[2^{k}3])|+\] \[|\mathcal{T}(\alpha/3,[2^{k}3])|-|\mathcal{T}(\alpha,[2^{k-1}3] )|-|\mathcal{T}(\alpha/6,[2^{k}3])|.\]
We can now apply Lemma 5 again, this time to the term \(|\mathcal{T}(\alpha,[2^{k}3^{2}])|\) to get
\[|\mathcal{T}(\alpha,[2^{k}3^{2}])|= |\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k-1 }3^{2}])|+|\mathcal{T}(\alpha,[2^{k-1}3])|+|\mathcal{T}(\alpha/2,[2^{k-1}3])|+| \mathcal{T}(\alpha,[2^{k-1}3])|+\] \[|\mathcal{T}(\alpha,[2^{k}])|+|\mathcal{T}(\alpha/3,[2^{k}])|-| \mathcal{T}(\alpha,[2^{k-1}1])|-|\mathcal{T}(\alpha/6,[2^{k-1}1])|\] \[+|\mathcal{T}(\alpha/3,[2^{k}3])|-|\mathcal{T}(\alpha,[2^{k-1}3] )|-|\mathcal{T}(\alpha/6,[2^{k}3])|\] \[= |\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k- 1}3^{2}])|+|\mathcal{T}(\alpha,[2^{k-1}3])|+|\mathcal{T}(\alpha/2,[2^{k-1}3])|+\] \[|\mathcal{T}(\alpha/3,[2^{k}])|-|\mathcal{T}(\alpha/6,[2^{k-1}]) |+|\mathcal{T}(\alpha/3,[2^{k}3])|-\] \[|\mathcal{T}(\alpha,[2^{k-1}3])|-|\mathcal{T}(\alpha/6,[2^{k}3])|\] \[= |\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k- 1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k-1}3])|+|\mathcal{T}(\alpha/3,[2^{k}])|-\] \[|\mathcal{T}(\alpha/6,[2^{k-1}])|+|\mathcal{T}(\alpha/3,[2^{k}3] )|-|\mathcal{T}(\alpha/6,[2^{k}3])|\]
where the second line follows from the first due to \(3|a\) and \(3\nmid|C|\). Thus \(a\nmid|C|\) which implies no valid tilings in the case dropped between these two lines. We now remove the terms in \(|\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/2,[2^{k-1}3^{2}])|+| \mathcal{T}(\alpha,[2^{k}])|\), leaving us with
\[|\mathcal{T}(\alpha/2,[2^{k-1}3])|-|\mathcal{T}(\alpha/6,[2^{k-1}])|+| \mathcal{T}(\alpha/3,[2^{k}3])|-|\mathcal{T}(\alpha/6,[2^{k}3])|.\]
Notice that
\[|\mathcal{T}(\alpha/2,[2^{k-1}3])|\geq|\mathcal{T}(\alpha/6,[2^{k-1}])|\]
and
\[|\mathcal{T}(\alpha/3,[2^{k}3])|\geq|\mathcal{T}(\alpha/6,[2^{k}3])|.\]
Thus,
\[|\mathcal{T}(\alpha/2,[2^{k-1}3])|-|\mathcal{T}(\alpha/6,[2^{k-1}])|+| \mathcal{T}(\alpha/3,[2^{k}3])|-|\mathcal{T}(\alpha/6,[2^{k}3])|\geq 0\]
as required.
We now use Lemma 8 to prove an improved lower bound in the case of \(|\mathcal{T}(\alpha,[2^{k}3^{2}])|\) with \(\alpha\) such that \(3|\alpha\).
**Lemma 9**.: _For \(k\in\mathbb{Z}^{+}\), for \(a=2^{\lfloor k/2\rfloor}\cdot 3\), we have that \(|\mathcal{T}(\alpha,[2^{k}])|>2^{0.79k}\)._
Proof.: By Lemma 8, we know that
\[|\mathcal{T}(\alpha,[2^{k}3^{2}])|\geq|\mathcal{T}(\alpha,[2^{k-1}3^{2}])|+| \mathcal{T}(\alpha/2,[2^{k-1}3^{2}])|+|\mathcal{T}(\alpha/3,[2^{k}])|.\]
Consider the terms on the right side of the inequality. Notice that we can also apply Lemma 8 to the first two terms while leaving the third unchanged. If we continue to reapply Lemma 8 to the first two of the three terms that result (keeping the third term each time), we will be left with a sum of terms in the form of Yang Hui's triangle. More formally, all of the terms in the sum would be of the form
\[\binom{j}{i}|\mathcal{T}(\alpha/2^{i}3,[2^{k-j}])|,\]
where the coefficient of \(j\) choose \(i\) follows from the observation that, when visualized as Yang Hui's triangle, the term \(|\mathcal{T}(\alpha/2^{i}3,[2^{k-j}])|\) appears in the \(j^{\text{th}}\) row and \(i^{\text{th}}\) column. When \(i=\lfloor k/4\rfloor\) and \(j=\lfloor k/2\rfloor\), we have the term
\[\binom{k/2}{k/4}|\mathcal{T}(2^{\lceil k/4\rceil},[2^{\lceil k/2\rceil}])|\]
(i.e. the term in the row \(\lfloor k/2\rfloor+1\) column \(\lfloor k/4\rfloor\) when seen as Yang Hui's triangle). We know that
\[\binom{k/2}{k/4}=\Omega(2^{0.5k})\]
and, using Lemma 7, we know that
\[|\mathcal{T}(2^{\lceil k/4\rceil},[2^{\lceil k/2\rceil}])|=\Omega(2^{0.58k/2}) =\Omega(2^{0.29k}).\]
Taken together, this gives us at least \(\Omega(2^{0.79k})\) distinct tilings. As the size of \(n\) has only been adjusted by a factor of \(9\), but the coefficient in the exponent has increased, this gives us \(n^{0.79}\) distinct tilings.
Lemma 9 is essentially a finite example of a more general method for improving upon Lemma 7. At a high level, this method is simply to increase the number of distinct prime factors in \(n\), as doing so has the effect of increasing the number of valid tilings by a constant factor in the exponent (i.e. going from at least \(2^{k/2}\) solutions to \(2^{ck/2}\) for some \(c\)). Further, as one increases the number of prime factors of \(n\) in the appropriate manner, \(c\) is unbounded. Thus, \(\max_{\alpha}[|\mathcal{T}(\alpha,[n])|]\) grows at a super polynomial rate for infinitely many \(n\). Before we proceed with the proof of this, we generalize Lemma 8 to prove a lower bound on the function in Lemma 5 that removes the negative terms for a specific family of useful cases.
**Lemma 10**.: _Let \(p_{i}\) be the \(i^{\text{th}}\) prime, and let \(t\), \(t_{p_{m}}\), \(t_{1}\), and \(t_{2}\) be defined as_
\[t =\mathcal{T}\bigg{(}\alpha,\left[2^{k}\prod_{i=2}^{m}p_{i}^{2} \right]\bigg{)} t_{1} =\mathcal{T}\bigg{(}\alpha,\left[2^{k-1}\prod_{i=2}^{m}p_{i}^{2} \right]\bigg{)}\] \[t_{p_{m}} =\mathcal{T}\bigg{(}\tfrac{\alpha}{p_{m}},\left[2^{k}\prod_{i=2}^ {m-1}p_{i}^{2}\right]\bigg{)} t_{2} =\mathcal{T}\bigg{(}\tfrac{\alpha}{2},\left[2^{k-1}\prod_{i=2}^{m }p_{i}^{2}\right]\bigg{)}\]
_where we define the products to be equal to \(1\) if \(m=1\) and \(0\) if \(m=0\). It follows that_
\[|t|\geq|t_{p_{m}}|+|t_{1}|+|t_{2}|.\]
Proof.: Throughout this proof, we will use \((A_{t},B_{t})\) to refer to \(A\) and \(B\) such that \((A,B)\in t\). We prove the lemma via a combinatorial argument in which we prove that \(t_{p_{m}}\), \(t_{1}\) and \(t_{2}\) have size equal to that of three disjoint subsets of \(t\). This is easiest to see for \(t_{1}\) and \(t_{2}\). More specifically, we define mappings \(f_{p_{m}}\), \(f_{1}^{\prime}\), and \(f_{2}^{\prime}\) from \(t_{p_{m}}\), \(t_{1}\) and \(t_{2}\) respectively to \(t\), such that their codomains are disjoint. All three mappings are similar to mappings from the proof of Lemma 5. This is especially true of \(f_{1}^{\prime}\) and \(f_{2}^{\prime}\), whereas \(f_{p_{m}}\) requires a more complex piece-wise definition. After defining a mapping, we briefly analyze several properties of elements of that mappings codomain. By the fact that the elements of the codomain of each mapping have distinct mutually exclusive properties, we are able to conclude that the codomains of the mappings are disjoint sets as desired.
We begin by defining \(f_{1}^{\prime}\) using the definition of \(f_{1}\) from Lemma 5. Let \(f_{1}^{\prime}(A_{t_{1}},B_{t_{1}})=f_{1}(t,(A_{t_{1}},B_{t_{1}}))\). we know that all elements in the codomain of \(f_{1}\) are such that \(k_{(s,t)}=1\) and odd \(k_{(r,t)}\). To see this second fact, if \(k_{(s,t_{1})}>1\), then \(k_{(r,t)}=1\) by a single point being inserted between the first and second points of the first segment. Thus, suppose \(k_{(s,t_{1})}=1\). If \(k_{(r,t_{1})}\) is even, then an odd number of points are added to the first rift to get the first rift of the tiling of \(t\) and thus, \(k_{r,t}\) is odd. Conversely, if \(k_{(r,t_{1})}\) is odd, then an even number of points is added to the first rift to get the first rift of the tiling of \(t\) and thus, \(k_{(r,t)}\) is odd. Thus, in all cases, an element of the codomain of \(f_{1}\) has an odd valued \(k_{(r,t)}\).
Next we define \(f_{2}^{\prime}\) using the definition of \(f_{2}\) from Lemma 5. Let \(f_{2}^{\prime}(A_{t_{2}},B_{t_{2}})=f_{2}(t,(A_{t_{2}},B_{t_{2}}))\). This gives us a tiling of \(t\) with even \(k_{(s,t)}\) and \(k_{(r,t)}\). The first of these facts follows from the fact that each segment from the tiling from \(t_{2}\) has had its length exactly doubled. The latter observation then follows from \(k_{s}|(k_{s}+k_{r})\) and the fact that \(k_{(s,t)}\) is even. Just as in Lemma 5, as elements of the codomain of \(f_{1}\) are tilings such that \(k_{(s,t)}=1\) elements of the codomain of \(f_{2}\) have even \(k_{(s,t)}\), we know the codomains of \(f_{1}\) and \(f_{2}\) are disjoint.
We now give a piece-wise definition of \(f_{p_{m}}\), with one definition when \(k_{(s,t_{p_{m}})}\) of the input is even and another when \(k_{(s,t_{p_{m}})}\) of the input is odd. For some, \((A_{t_{p_{m}}},B_{t_{p_{m}}})\in t_{p_{m}}\), if \(k_{(s,t_{p_{m}})}\) is even, we begin by applying \(f_{1}(t,(A_{t_{p_{m}}},B_{t_{p_{m}}}))=(A^{\prime},B^{\prime})\). This increases the size of \(A_{t_{p_{m}}}+B_{t_{p_{m}}}\) by a multiplicative factor of \(p_{m}\) and does not change the size of \(A_{t_{p_{m}}}\). This would tile exactly the first \(p_{m}\) fraction of points of \(A_{t}+B_{t}\). Thus, we build our final \(A^{\prime\prime}\) from \(A^{\prime}\) by letting \(A^{\prime\prime}=A^{\prime}+\{x\cdot p_{m}|x\in[0,p_{m}]\}\). This then gives us our desired \((A_{t},B_{t})=(A^{\prime\prime},B^{\prime})\). As \(k_{(s,t_{p_{m}})}\) is even, so is \(k_{(r,t_{p_{m}})}\). As the number of points added to the first rift is \(p_{m}-1\) (which is even), we have that \(k_{(r,t)}\) is even. This makes the codomain of \(f_{p_{m}}\) disjoint from the codomain of \(f_{1}^{\prime}\) in this case. Further, it follows from the portion of the definition based on \(f_{1}\) that \(k_{(s,t)}=1\). This makes the codomain of \(f_{p_{m}}\) disjoint from the codomain of \(f_{2}^{\prime}\) in this case.
Now for the case when \(k_{(s,t_{p_{m}})}\) is odd. For some, \((A_{t_{p_{m}}},B_{t_{p_{m}}})\in t_{p_{m}}\), we begin by applying
\[f_{2}(t,(A_{t_{p_{m}}},B_{t_{p_{m}}}))=(A^{*}B^{*}).\]
This would tile exactly the first \(p_{m}\) fraction of points of \(A_{t}+B_{t}\). Thus, we build our final \(B^{**}\) from \(B^{*}\) by letting \(B^{**}=(B^{*}+\{x\cdot p_{m}|x\in[0,p_{m}]\})\cup\{x\cdot p_{m}|x\in[0,p_{m}]\}\) to get our desired \((A_{t},B_{t})=(A^{*},B^{**})\)
As \(k_{(s,t_{p_{m}})}\) is odd and we are multiplying the size of the first segment by \(p_{m}\) (which is also odd), we are left with an odd value for \(k_{(s,t)}\). This makes the codomain of \(f_{p_{m}}\) disjoint from the codomain of \(f_{2}^{\prime}\) in this case. Further, it follows from the first part of the mapping that \(k_{(s,t)}>1\). This makes the codomain of \(f_{p_{m}}\) disjoint from the codomain of \(f_{1}^{\prime}\) in this case. As we have shown the codomains of each function to be disjoint, the lemma follows.
We now prove that \(\max_{\alpha}[|\mathcal{T}(\alpha,[n])|]\) grows at a super polynomial rate for infinitely many \(n\).
**Lemma 11**.: _For all \(c\in\mathbb{R}\) there exists an \(m\in\mathbb{Z}^{+}\) such that, for \(k\in\mathbb{Z}^{+}\) and \(\alpha=2^{\lfloor k/2\rfloor}\prod_{i=2}^{m}p_{i}\), we have that_
\[\left|\mathcal{T}\bigg{(}\alpha,\Big{[}2^{k}\prod_{i=2}^{m}p_{i}^{2}\Big{]} \bigg{)}\right|>2^{ck}.\]
Proof.: First, observe that we can now use Lemma 9 in conjunction with Lemma 10 to get arbitrarily close to \(2^{k}\). If we rework the statement and proof of Lemma 9, multiplying \(n\) by \(5^{2}\) and \(a\) by \(5\), notice that we can repeat all of the arguments of the proof, except that the line
\[|\mathcal{T}(2^{\lceil k/4\rceil},[2^{\lceil k/2\rceil}])|>\Omega(2^{0.58k/2} )=\Omega(2^{0.29k})\]
is replaced by
\[|\mathcal{T}(2^{\lceil k/4\rceil}3,[2^{\lceil k/2\rceil}9])|>\Omega(2^{0.79k /2})>\Omega(2^{0.38k}).\]
This process of increasing the \(m\) (i.e. the number of primes squared in \(n\)) can be repeated any number of times. Each time, the growth of the choose function is multiplied by the previous tiling growth rate with an additional \(1/2\) multiplicative factor in the exponent. More specifically, suppose for some \(m\) that the number of tilings is \(2^{2k}\) for \(x=1-y\) and \(0<y<1\). Then for \(m+1\), the number of sumset tilings would be \(2^{(x/2+0.5)k}=2^{(1-(y/2))k}\). Thus, increasing the value of \(m\) by \(1\) increases \(c\) by \((1-c)/2\) and the limit of \(c\) as \(m\) goes to infinity is \(1\).
In the proof of Lemma 9, we chose to count the number of tilings based on the middle column of the bottom row of the version of Yang Hui's triangle produced utilizing Lemma 8. In fact, we could have chosen the middle column of any row some fraction of the way down the triangle from which to count tilings from instead. Notice that the value of \(\binom{k/2w}{k/4w}\) for \(0<w\leq 1\) is roughly equal to \(2^{kH(w/2)/2}\), where \(H\) is the binary entropy function and
\[H(w)=-w\log w-(1-w)\log(1-w).\]
Further, notice that \(1-(w/2)\) is the amount we multiply \(k\) by in \(n\). Thus, adjusting \(w\) increases the number of tilings of the \(\mathcal{T}\) instance in question, but reduces the coefficient derived from the binomial coefficient. Similarly to the prior case of \(w=1\), suppose for some \(m\) that the number of sumset tilings is \(2^{rk}\) for \(x=(H(w/2)/w)-y\) and \(0<y<H(w/2)/w\). Then for \(m+1\), the number of sumset tilings would be
\[2^{(x(1-(w/2))+H(w/2)/2)k}=2^{((H(w/2)/w)-y+(w/2)y-H(w/2)/2+H(w/2)/2)k}=2^{(H(w /2)/w)+y((w/2)-1)k}.\]
Given this, the max value of \(c\) we can achieve for a fixed \(w\) is \(\lim_{m\to\infty}c=H(w/2)/w\). Notice that \(w\) decreases asymptotically faster than \(H(w/2)\) and that, for sufficiently large \(k\), we can choose an arbitrarily small \(w\). Thus, the max value of \(c\) tends to infinity as \(w\) tends to \(0\) and \(k\) tends to infinity.
## 5 Extending to \(\mathbb{Z}^{d}\)
We now extend the above results for finite subsets of \(\mathbb{Z}\) to results for \(\mathbb{Z}^{d}\) for arbitrary \(d\in\mathbb{Z}\). To do this, we require a related structural result.
**Lemma 12**.: _Let \(C=[n]\). For \(m\in\mathbb{Z}\), either \(|A\cap(B+m)|=1\) or \(|C\cap(B+m)|<|B+m|\)._
Proof.: We proceed by induction on the \(\omega^{*}(n)\) where \(\omega^{*}\) is the prime omega function (i.e. \(\omega^{*}(n)\) equals the sum of the multiplicity of the prime factors of \(n\)). While the star superscript used in our prime omega function notation is non-standard, we use it to avoid confusion with asymptotic notation. For the base case of \(\omega^{*}(n)=1\), we have that exactly one of \(|A|\) and \(|B|\) is \(n\). Assume without loss of generality that \(|A|=n\)
then \(A=C\) and \(B+m\) is either in \(C\) and has a unique intersection with \(A\) or is not in \(C\). Thus, the base case holds. Suppose the lemma holds for \(\omega^{*}(n)=k-1\) and we prove that it holds for \(\omega^{*}(n)=k\). Notice that, as \(k>1\), either \(A\) or \(B\) has segments size greater than \(1\). Suppose without loss of generality that \(A\) has segment size greater than \(1\). Thus, we can group \(C\) into meta-points such that, for each meta-point, the consecutive elements it encompasses are either all in or all not in \(A\). Further, by the definition of \(B\), its segment size is \(1\) and its minimum rift size is the segment size of \(A\) minus \(1\). Thus, it follows that each meta-point of \(C\) is such that either its first element is in \(B\) or it has no elements in \(B\). Let \(w>1\) be the number of points per meta-point and suppose \(m\) is such that \(|C\cap(B+m)|=|(B+m)|\) (if not, we are done). As meta-points have size greater then \(1\), we can reconsider the intersection of \(|A\cap(B+m)|=1\) with respect \(n/w\) points and \(\lfloor m/w\rfloor\) and apply our inductive hypothesis to say that exactly one meta-point of \(C\) that is entirely elements of \(A\) intersects a meta-point of \(B+\lfloor m/w\rfloor\). This gives us an intersection of \(A\cap(B+\lfloor m/w\rfloor)\) at the first point of a meta point. If we replace \(\lfloor m/w\rfloor\) by \(m\) as required, this increases the shift by at most \(w-1\). As the intersection between \(A\) and \(B+\lfloor m/w\rfloor\) is in the first point of a meta-point and the meta-point in question contains \(w\) consecutive elements, \(|A\cap(B+m)|=1\) still holds and the lemma follows.
We can now prove our main lemma of the section.
**Lemma 13**.: _Let \(C=C_{1}\times C_{2}\times\ldots\times C_{d}\) for \(C_{i}\) such that \(C_{i}=[n_{i}]\) for some \(n_{i}\in\mathbb{Z}^{+}\). We have that_
\[|\mathcal{T}((\alpha_{1},\alpha_{2},\ldots,\alpha_{d}),C)|=|\mathcal{T}( \alpha_{1},C_{1})|\cdot|\mathcal{T}(\alpha_{2},C_{2})|\cdot\ldots\cdot| \mathcal{T}(\alpha_{d},C_{d})|.\]
_More specifically, \((A,B)\) is an element of valid tilings of \(\mathcal{T}((\alpha_{1},\alpha_{2},\ldots,\alpha_{d}),C)\) if and only if \(A=\{(x_{1},x_{2},\ldots,x_{d}):x_{i}\in A_{i}\}\) and \(B=\{(y_{1},y_{2},\ldots,y_{d}):y_{i}\in B_{i}\}\) where \((A_{i},B_{i})\in\mathcal{T}(\alpha_{i},C_{i})\). Further, for \(m\in\mathbb{Z}^{d}\), either \(|A\cap(B+m)|=1\) or \(|C\cap(B+m)|<|B+m|\)._
Proof.: To tile the elements of \(C_{1}\times\min[C_{2}]\times\ldots\times\min[C_{d}]\), the only elements of \(A\) that can be utilized are those in \(C_{1}\times\min[C_{2}]\times\ldots\times\min[C_{d}]\). Thus, the elements of \(A\) in \(C_{1}\times\min[C_{2}]\times\ldots\times\min[C_{d}]\) and the elements of \(B\) in \(\mathbb{Z}^{+}\times\{0\}\times\ldots\times\{0\}\) exactly correspond to the one-dimensional tiling of \(C_{1}\) with \(|A|=\alpha_{1}\).
We prove the lemma via two nested inductive arguments. We begin with induction on the dimension \(d\). For \(d=1\), this follows by definition and Lemma 12. Thus, suppose this is the case for some \(d\) and we wish to prove that the lemma still holds for \(d+1\). To do this, we prove by induction on \(z\) that, for all \(z\in[n_{d+1}]\) there exists a \(k\in[z]\) such that, \(C_{1}\times C_{2}\times\ldots\times C_{d}\times z\) is tiled by
\[(A\cap(C_{1}\times C_{2}\times\ldots\times C_{d}\times k))+\{b\in B:\text{ proj}_{d+1}=z-k\}.\]
For the case of \(z=1\) it follows from our inductive hypothesis relative to \(d\) that \(C_{1}\times\ldots\times C_{d}\times 1\) can only be tilings as described in the lemma. More specifically, one takes the \(d\) dimensional tiling, then includes \(1\) and \(0\) as the \((d+1)^{\text{th}}\) element in the tuples of each element of \(A\) and \(B\) respectively. Now, for our inductive hypothesis relative to \(z\), suppose the claim holds for the subsets of \(A\) and \(B\) that tile exactly the elements of \(C_{1}\times C_{2}\times\ldots\times C_{d}\times[z-1]\) for \(z\leq n_{d+1}\). We wish to prove this holds for \(C_{1}\times C_{2}\times\ldots\times C_{d}\times z\).
Let \(A^{\prime}\) be the elements of \(A\) in \(C_{1}\times C_{2}\times\ldots\times C_{d}\times 1\). As \((1,\ldots,1,z)\) must be tiled, this must either be because \((1,\ldots,1,z)\in A\) or \((1,\ldots,1,z)\) is covered by a translation of an element of \(A\) in \(C_{1}\times C_{2}\times\ldots\times C_{d}\times[z-1]\). First, suppose \((1,\ldots,1,z)\in A\). Let \(B^{*}\subset B\) be such that
\[A^{\prime}+B^{*}=C_{1}\times C_{2}\times\ldots\times C_{d}\times 1\]
and let \(b^{\prime}=(y_{1},\ldots,y_{d+1})\) for \(y_{d+1}\in[z-1]\) be an element of \(B\) that translates an element of \(A\cap(C_{1}\times C_{2}\times\ldots\times C_{d}\times[z-1])\) to \(C_{1}\times C_{2}\times\ldots\times C_{d}\times z\). For now, let us assume that \(y_{d+1}=z-1\). By our inductive hypothesis on \(d\), we have that either \(|A\cap(B+m)|=1\) or \(|C\cap(B+m)|<|B+m|\) for \(m\in\mathbb{Z}^{d}\). Thus, it follows that \(|((1,\ldots,1,z)+B^{*})\cap(A^{\prime}+b^{\prime})|=1\) or \(|C\cap A^{\prime}+b^{\prime}|<|A^{\prime}+b^{\prime}|\) which implies that \(C_{1}\times C_{2}\times\ldots\times C_{d}\times z\) cannot be tiled by elements of \(A^{\prime}\) if it \((1,\ldots,1,z)\in A\). Notice then that this together with our inductive hypothesis on \(z\) implies that, for \(k\in[z-1]\), \(A\cap(C_{1}\times C_{2}\times\ldots\times C_{d}\times k)=\emptyset\), or that \(A\cap(C_{1}\times C_{2}\times\ldots\times C_{d}\times k)=A^{\prime}\). This observation then allows us to circle back and inductively remove the restriction that \(y_{d+1}=z-1\) from the prior argument. Taken together, these restrictions turn the case of tiling \(C_{1}\times C_{2}\times\ldots\times C_{d}\times z\) when \((1,\ldots,1,z)\in A\) into tiling a \(d\)-dimensional space with fixed \(B^{*}\), at which point we can apply the inductive hypothesis \(d\) to conclude that the claim holds.
The argument for the case of \((1,\ldots,1,z)\not\in A\) proceeds similarly, so we merely outline the differences at a high level. To cover \((1,\ldots,1,z)\), it follows that there exists a \(b^{\prime}\) such that \((A\cap(C_{1}\times C_{2}\times\ldots\times C_{d}\times k))+b^{\prime}=(A^{ \prime}+(0,\ldots,0,k-1))+b^{\prime}\) covers \((1,\ldots,1,z)\). Adding any element \(a^{\prime}\in C_{1}\times C_{2}\times\ldots\times C_{d}\times z\) to \(A\) then creates an issue, as \(a^{\prime}=(1,\ldots,1,z)+m\) for some \(m\) such that \(\operatorname{proj}_{d+1}(m)=0\). At this point we treat \((A\cap(C_{1}\times C_{2}\times\ldots\times C_{d}\times k))+b^{\prime}\) and \(a^{\prime}+B^{*}\) as \(A\) and \(B\) respectively from the statement that, for \(m\in\mathbb{Z}^{d}\), either \(|A\cap(B+m)|=1\) or \(|C\cap(B+m)|<|B+m|\).
Lemma 13 can then be leveraged to prove that \(\max_{\alpha,C}[|\mathcal{T}(\alpha,C)|]\) is maximal when \(d=1\)
**Lemma 14**.: _Let \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) for \(x_{1},\ldots,x_{d}\in\mathbb{Z}^{+}\) and \(C^{\prime}=[n]\) for \(n=|C|\). It follows that_
\[\max_{\alpha,C}[|\mathcal{T}(\alpha,C)|]\leq\max_{\alpha^{\prime},C}[|\mathcal{ T}(\alpha^{\prime},C^{\prime})|].\]
Proof.: Recall that we use \([a]\) for the set of integers \(1\) through \(a\) and \([a,b]\) for the set of integers from \(a\) to \(b\) (inclusive of \(a\) and \(b\)). We give a mapping from any \((A,B)\in\mathcal{T}(\alpha,C)\) to a distinct \((A^{\prime},B^{\prime})\in\mathcal{T}(\alpha^{\prime},C^{\prime})\). We know by Lemma 13 that \(\prod_{i\in[d]}|\operatorname{proj}_{i}(A)|=|A|\). We define \(A^{\prime}\) in terms of \(A\) by specifying the elements of
\[A^{\prime}\cap\Big{[}\prod_{i\in[k]}x_{i}\Big{]}\]
inductively with respect to \(k\) with \(k\leq d\). For \(k=1\), we let \(A^{\prime}\cap[x_{1}]=\operatorname{proj}_{i}(A)\). Suppose that the elements of
\[A^{\prime}\cap\Big{[}\prod_{i\in[k^{\prime}]}x_{i}\Big{]}\]
are defined for all \(k^{\prime}<k\) and that \(1<k\). Then we define the elements of
\[A^{\prime}\cap\Big{[}\prod_{i\in[k-1]}x_{i},\prod_{i\in[k]}x_{i}\Big{]}\]
by
\[\bigcup_{z\in\operatorname{proj}_{k}(A)}\Bigg{(}\bigg{(}A^{\prime}\cap\Big{[} \prod_{i\in k-1}x_{i}\Big{]}\bigg{)}+(z-1)\Bigg{)}.\]
We can prove by induction on \(d\) that there exists a \(B^{\prime}\) such \((A^{\prime},B^{\prime})\). For \(d=1\), we have that \(C=C^{\prime}\) and \((A,B)=(A^{\prime},B^{\prime})\). Suppose the lemma holds up to \(d-1\). Then in the case of \(d\), our definition of \(A^{\prime}\) from \(A\) as well as the fact that
\[\prod_{i\in[d]}|\operatorname{proj}_{i}(A)|=|A|,\]
we have that \(|A|=|A^{\prime}|\). Consider \(C^{\prime}\) condensed into \(x_{d}\) meta-points of size
\[\prod_{i\in[d-1]}x_{i}\]
each. We know by the inductive hypothesis that there exists a \(B^{*}\subset B^{\prime}\) such that \(A^{\prime}+B^{*}\) tiles exactly each meta-point with at least one element of \(A^{\prime}\). This tiles exactly the meta-points indexed by elements of \(\operatorname{proj}_{d}(A)\). By Lemma 13, \(\operatorname{proj}_{d}(A)\) is such that \((\operatorname{proj}_{d}(A),B^{\prime})\in\mathcal{T}(|\operatorname{proj}_{d} (A)|,[x_{i}])\) for some \(B^{\prime}\) and thus, the lemma follows.
Note that one cannot hope to achieve an improvement of Lemma 14 that yields equality for all \(\alpha\), \(n\) and \(d\), as evidenced by the following counterexample.
**Corollary 3**.: _Let \(C=[x_{1}]\times[x_{2}]\times\ldots[x_{d}]\) for \(x_{1},\ldots,x_{d}\in\mathbb{Z}^{+}\) and \(C^{\prime}=[n]\) for \(n=|C|\). \(\exists\alpha,n,d\in\mathbb{Z}^{+}\) such that_
\[\max_{\alpha,C}[|\mathcal{T}(\alpha,C)|]<\max_{\alpha^{\prime},C}[|\mathcal{T} (\alpha^{\prime},C^{\prime})|].\]
Proof.: Let \(\alpha=2\) and \(C=[3]\times[2]\). The only valid tiling of \(C^{\prime}\) is
\[(\{(1,1),(2,1)\},\{(0,0),(0,1),(0,2)\})\]
where as \(C^{\prime}=[6]\) has the tilings \(P_{1}=(\{1,2\},\{0,2,4\})\) and \(P_{2}=(\{1,4\},\{0,1,2\})\)
## 6 Conjectures For Non-Contiguous \(C\) and Multisets
In this section, we limit our focus to the case respect to \(d=1\). For optimal \(\alpha\), we hypothesis that \(C=[n]\) has at least as many tilings as any \(\alpha^{\prime}\) and \(C^{\prime}\) such that \(|C^{\prime}|=n\). More formally, we claim the following
**Conjecture 1**.: \[\forall C\Big{[}(|C|=n)\implies\max_{\alpha}\bigl{[}|\mathcal{T}(a,C)|\bigr{]} \leq\max_{\alpha}\bigl{[}|\mathcal{T}(a,[n])|\bigr{]}\Big{]}.\]
While we do not as of yet have a complete proof of this, we wish to lay a foundation that we believe can be used to prove this. First, we generalize the definition of segments and rifts. Then, we define an injective mapping we hypothesis maps \(\mathcal{T}(\alpha,[n])\) to a subset of \(\mathcal{T}(\alpha,C)\) for any \(C\) of size \(n\). To do this, we will need to reorder the elements of \(C\) based on the lexicographically first tiling which we call the _canonical tiling order_ of \((\alpha,C)\) (i.e. \(\ell(\alpha,C)\)).
**Definition 4**.: _For some \(C\), let \(\ell(\alpha,C)\) be the ordered tuple of size \(|C|\) formed by the elements of \(A_{1}+b_{(1,1)}\) in ascending order, followed by the elements of \(A_{1}+b_{2,1}\) in ascending order and so on until we have the largest element of \(A_{1}+b_{(|B_{1}|,1)}\) as the last element of \(\ell(\alpha,C)\). Further, we let \(\ell(a,C,k)\) equal the \(k^{th}\) element of \(\ell(\alpha,C)\). More formally, we have that_
\[\ell(\alpha,C,k)=a_{((k-1)\bmod a,1)}+b_{(\lceil(k-1)/a\rceil,1)}.\]
For an ordered tuple \(T\), we us \(\mathbf{first}[T]\) and \(\mathbf{last}[T]\) to mean the first and last elements of \(T\) respectively. For an ordered tuple \(T\) and a set \(V\), we abuse set minus notation and use \(T\setminus V\) to mean the ordered tuple \(T\) with the elements of \(V\) removed. Further, for \(T\) and \(V\) such that all elements of \(V\) are in \(T\), we use \(\mathbf{first}[V,T]\) and \(\mathbf{last}[V,T]\) to mean the first and last element of \(V\) with respect to the order of \(T\). For example, if \(V=\{1,3,5,6,8,9\}\) and \(T=(7,8,9,1,3,2,4,6,5)\), then we would have \(\mathbf{first}[V,T]=8\) and \(\mathbf{last}[V,T]=5\). For a ordered tuple \(T\) as well as \(x\) and \(y\) in \(T\), we define \(x<_{T}y\) to be true if and only if \(x\) precedes \(y\) in \(T\). Using the above, we can now define segments and rifts relative to the the canonical tiling order.
**Definition 5**.: _We define the \(i^{th}\) canonical tiling order segment of the \(j^{th}\) covering and \(i^{th}\) canonical tiling order rift of the \(j^{th}\) tiling (i.e. \(s^{cto}_{(i,j)}\) and \(r^{cto}_{(i,j)}\) respectively) be as follows:_
* \(s^{cto}_{1}\triangleq\Big{\{}x\in A:x\underset{\ell(\alpha,C)}{<}\mathbf{ first}[\ell(\alpha,C)\setminus A]\Big{\}}\)__
* \(r^{cto}_{1}\triangleq\Big{\{}x\in C\setminus A:x\underset{\ell(\alpha,C)}{<} \mathbf{first}[A\setminus s^{cto}_{1}],\ell(\alpha,C)\Big{\}}\)_._
* \(s^{cto}_{i}\triangleq\Big{\{}x\in A:\mathbf{last}[s^{cto}_{i-1},\ell(\alpha, C)]\underset{\ell(\alpha,C)}{<}x\underset{\ell(\alpha,C)}{<}\mathbf{ first}[(C\backslash A)\backslash\{y\in C:y\leq\mathbf{last}[s^{cto}_{i-1},\ell(\alpha,C)]\}, \ell(\alpha,C)]\Big{\}}\)__
* \(r^{cto}_{i}\triangleq\Bigg{\{}x\in C\setminus A:\mathbf{last}[r^{cto}_{i-1}, \ell(\alpha,C)]\underset{\ell(\alpha,C)}{<}x\underset{\ell(\alpha,C)}{<} \mathbf{first}\bigg{[}A\setminus\big{(}\underset{k=1}{\bigcup}s^{cto}_{k} \bigg{)},\ell(\alpha,C)\bigg{]}\Bigg{\}}\)_._
To put cto-segments more intuitively, the \(i^{\text{th}}\) cto-segment of a partition \(P_{j}\) is the \(i^{\text{th}}\) set of consecutive (relative to \(\ell(C,a)\)) elements of \(A_{j}\). We define \(S^{\text{cto}}_{j}\) and \(R^{\text{cto}}_{j}\) to be the set of all non-empty \(s^{\text{cto}}_{(i,j)}\) and \(r^{\text{cto}}_{(i,j)}\) respectively. With these definitions in hand, we make the following conjecture:
**Conjecture 2**.: _Let \(P_{j}=(A_{j},B_{j})\) be the \(j^{\text{th}}\) element of \(\mathcal{T}(\alpha,C)\), then their exists a \(B^{\prime}\) such that \((A^{\prime},B^{\prime})\in\mathcal{T}(\alpha,[|C|])\) where \(A^{\prime}=\{x|\ell(\alpha,C,x)\in A_{j}\}\). Further, there does not exist \(P_{k}\in\mathcal{T}(\alpha,C)\) for \(k\neq j\) such that \(\{x|\ell(\alpha,C,x)\in A_{k}\}=A^{\prime}\)._
The first statement of the conjecture gives the mapping from solutions for any \(C\) to a solution in the \(C=[n]\) case, where as the second statement enforces injectivity of the mapping. With these two properties it follows that, if Conjecture 2 is true, then so is Conjecture 1. Notice that the first statement is not bidirectional and in fact, there are counter examples to the reverse direction which we will discuss briefly below. To provide some additional intuition, let us consider an example.
Let \(C=\{1,3,5,6,7,8,10,12\}\) and note that, for \(\alpha=4\), the tilings of \(C\) are as follows:
* \(P_{1}=(\{1,3,5,7\},\{0,5\})\)
* \(P_{2}=(\{1,3,6,8\},\{0,4\})\)
* \(P_{3}=(\{1,5,6,10\},\{0,2\})\).
Now we consider \(C\) in its canonical tiling order which yields \(\ell(4,C)=(1,3,5,7,6,8,10,12)\). If we build \(A^{\prime}\) based on the definition in Conjecture 2 for each of these solutions, we get the following:
* \(A^{\prime}_{1}=\{1,2,3,4\}\)
* \(A^{\prime}_{2}=\{1,2,5,6\}\)
* \(A^{\prime}_{3}=\{1,3,5,7\}\).
These correspond exactly to the \(A\) in elements of \(\mathcal{T}(4,[8])\). Critically, the mapping matches the lexicographically \(i^{\text{th}}\) tiling \(C\) to the lexicographically \(i^{\text{th}}\) tiling for \([|C|]\). That being said, while their are infinitly many \(C\) of size \(n\) such that \(|\mathcal{T}(\alpha,C)|=\mathcal{T}(\alpha,[n])\), there are also infinitely many \(C\) of size \(n\) such that this is not the case. The most obvious example are \(C\) with no tilings except when \(\alpha=1\) or \(\alpha=n\). There also less trivial examples though, such as when \(C=\{1,2,3,4,5,7\}\) and \(a=3\) in which case \(P_{1}=(\{1,2,5\},\{0,2\})\) is the only solution. This exemplifies why one cannot hope for equality in Conjecture 1 or a biconditional claim in Conjecture 2, even if we added additional restrictions such as only considering \((\alpha,C)\) such that \(|\mathcal{T}(\alpha,C)|\neq 0\).
While we are unable to resolve Conjecture 2 in this work, the most direct approach would seem to be to prove that the only tilings of \(C\) that can be valid share all of the basic properties of tilings of \(C=[n]\) relative to \(\ell(\alpha,C)\). We now give an initial result along these lines.
**Lemma 15**.: _For all \(a\), \(C\), and \(j\), we have that either \(\ell(\alpha,C,2)\in A_{j}\) or that \(\exists b^{*}\in B_{j}\) such that \(\ell(\alpha,C,1)+b^{*}=\ell(\alpha,C,2)\)._
Proof.: If \(j=1\) or their is a single valid tiling, then the lemma is vacuously true. Thus, we assume \(j\neq 1\) and that there are at least \(2\) valid tilings. First, we note that \(a_{(1,j)}=\min[C]=\ell(\alpha,C,1)\) for all \(j\). We need to prove that \(\exists b^{*}\in B_{j}\) such that \(a_{(1,j)}+b^{*}=\ell(\alpha,C,2)\). By the definition of the canonical tiling order, \(A_{1}\) is the first \(a\) bits of \(\ell(\alpha,C)\). Further, the only elements that potentially have less value then \(\ell(\alpha,C,2)\) are the elements \(\ell(\alpha,C,k|a|)\) for any \(k\in[(|C|/a)-1]\). If one of these elements were less than \(\ell(\alpha,C,2)\) and in \(A_{j}\), then \(A_{j}\) would precede \(A_{1}\) lexicographically, which is a contradiction. Thus, these elements are either not in \(A_{j}\) (in which case we cannot add \(b^{*}\) to them to yield \(\ell(\alpha,C,2)\)) or they are larger than \(\ell(\alpha,C,2)\) (which means no \(b^{*}\geq 0\) can be summed with it to equal \(\ell(\alpha,C,2)\).
Lastly, we briefly discuss the possibility of an extension of the above to multisets. For two multisets \(X\) and \(Y\), we refer to \(X+Y\) as the multiset sum of \(X\) and \(Y\) and define it by in a similar manner to a sumset, except that the result is itself a multiset and an element \(z\in X+Y\) has multiplicity equal to the number of pairs of elements in \(X\) and \(Y\) that sum to \(z\). We generalize the concepts of tiling a multiset with elements in \(\mathbb{Z}^{d}\) and the canonical tiling order in the natural way. We denote the set of tilings of a multiset \(C\) by \(\mathcal{T}^{\prime}(\alpha,C)\). Given this, we make hypothesis akin to Conjecture 2, except with respect to tiling multisets.
**Conjecture 3**.: _Let \(P_{j}=(A_{j},B_{j})\) be the \(j^{\text{th}}\) element of \(\mathcal{T}^{\prime}(\alpha,C)\), then their exists a \(B^{\prime}\) such that \((A^{\prime},B^{\prime})\in\mathcal{T}(\alpha,[|C|])\) where \(A^{\prime}\) is the lexicographically first set such that \(A^{\prime}=\{x|\ell(\alpha,C,x)\in A_{j}\}\). Further, there does not exist \(P_{k}\in\mathcal{T}^{\prime}(\alpha,C)\) for \(k\neq j\) such that \(\{x|\ell(\alpha,C,x)\in A_{k}\}=A^{\prime}\)._
## 7 Acknowledgments
I want to thank Neng Huang for numerous discussions on the above results, during which he caught several errors and gave recommendations as to how to improve clarity. I also wish to thank Andy Drucker for reading several early drafts and providing valuable feedback. |
2308.04272 | New rotation period measurements of 67,163 Kepler stars | The Kepler space telescope leaves a legacy of tens of thousands of stellar
rotation period measurements. While many of these stars show strong
periodicity, there exists an even bigger fraction of stars with irregular
variability for which rotation periods are unknown. As a consequence, many
stellar activity studies might be strongly biased toward the behavior of more
active stars with measured rotation periods. To at least partially lift this
bias, we apply a new method based on the Gradient of the Power Spectrum (GPS).
The maximum of the gradient corresponds to the position of the inflection point
(IP). It was shown previously that the stellar rotation period $P_{rot}$ is
linked to the inflection point period $P_{IP}$ by the simple equation $P_{rot}
= P_{IP}/\alpha$, where $\alpha$ is a calibration factor. The GPS method is
superior to classical methods (such as auto-correlation functions (ACF))
because it does not require a repeatable variability pattern in the time
series. From the initial sample of 142,168 stars with effective temperature
$T_{eff}\leq6500K$ and surface gravity $log g\geq4.0$ in the Kepler archive, we
could measure rotation periods for 67,163 stars by combining the GPS and the
ACF method. We further report the first determination of a rotation period for
20,397 stars. The GPS periods show good agreement with previous period
measurements using classical methods, where these are available. Furthermore,
we show that the scaling factor $\alpha$ increases for very cool stars with
effective temperatures below 4000K, which we interpret as spots located at
higher latitudes. We conclude that new techniques (such as the GPS method) must
be applied to detect rotation periods of stars with small and more irregular
variabilities. Ignoring these stars will distort the overall picture of stellar
activity and, in particular, solar-stellar comparison studies. | Timo Reinhold, Alexander I. Shapiro, Sami K. Solanki, Gibor Basri | 2023-08-08T14:12:05Z | http://arxiv.org/abs/2308.04272v1 | # New rotation period measurements of 67,163 _Kepler_ stars+
###### Abstract
Context:The _Kepler_ space telescope leaves a legacy of tens of thousands of stellar rotation period measurements. While many of these stars show strong periodicity, there exists an even bigger fraction of stars with irregular variability for which rotation periods are rarely visible or in most cases unknown. As a consequence, many stellar activity studies might be strongly biased toward the behavior of more active stars, for which rotation periods have been determined.
Aims:To at least partially lift this bias, we apply a new method capable of determining rotation periods of stars with irregular light curve variability. This effort greatly increases the number of stars with well-determined periods, especially for stars with small variabilities similar to that of the Sun.
Methods:To achieve this goal, we employ a novel method based on the Gradient of the Power Spectrum (GPS). The maximum of the gradient corresponds to the position of the inflection point (IP), i.e., the point where the curvature of the high-frequency tail of the power spectrum changes its sign. It was shown previously that the stellar rotation period \(P_{\rm rot}\) is linked to the inflection point period \(P_{\rm RP}\) by the simple equation \(P_{\rm rot}=P_{\rm RP}/\alpha\), where \(\alpha\) is a calibration factor. The GPS method is superior to classical methods (such as auto-correlation functions (ACF)) because it does not require a repeatable variability pattern in the time series, making it an ideal tool for detecting periods of stars with very short-lived spots.
Results:From the initial sample of 142,168 stars with effective temperatures \(T_{\rm eff}\leq 6500\) K and \(log\ g\geq 4.0\) in the _Kepler_ archive, we could measure rotation periods for 67,163 stars by combining the GPS and the ACF method. We further report the first determination of a rotation period for 20,397 stars. The GPS periods show good agreement with previous period measurements using classical methods, where these are available. Furthermore, we show that the scaling factor \(\alpha\) increases for very cool stars with effective temperatures below 4000 K, which we interpret as spots located at higher latitudes.
Conclusions:We conclude that new techniques (such as the GPS method) must be applied to detect rotation periods of stars with small and more irregular variabilities. Ignoring these stars will distort the overall picture of stellar activity and, in particular, solar-stellar comparison studies.
## 1 Introduction
The stellar rotation period \(P_{\rm rot}\) is a fundamental quantity in stellar astrophysics because it is closely linked to the star's activity level and its age. Skumanich (1972) first demonstrated that the average equatorial rotational velocity and the emission luminosity in the cores of the Ca ii H and K lines both decrease with stellar age \(t\) according \(P_{\rm rot}\sim t^{1/2}\). In the following years it has been shown that, on average, young stars rotate faster and are more active, whereas old stars rotate more slowly and are less active (e.g., Noyes et al. 1984). The pioneering work of Skumanich (1972) has inspired the idea of age-dating the star using its rotation period. This semi-empirical method, nowadays known as gyrochronology (Barnes 2003, 2007), calibrates the relation between the stellar mass, rotation period, and age. Hence, knowing the stellar rotation period is essential for estimating the stellar age - a fundamental quantity of the star that cannot be measured directly.
Most commonly, stellar rotation periods are measured by observing stellar brightness variations over time, and searching for repeatable patterns in long-term (photometric) time series caused by star spots rotating in and out of view. Owing to the _Kepler_ space telescope's almost uninterrupted photometric observations of \(\sim 150,000\) main-sequence stars for 4 years, rotation periods have been measured for several tens of thousands of stars (McQuillan et al. 2013a,b; Reinhold et al. 2013; Walkowicz & Basri 2013; Nielsen et al. 2013; McQuillan et al. 2014; do Nascimento et al. 2014; Garcia et al. 2014; Reinhold & Gizon 2015; Ceillier et al. 2016, 2017; Santos et al. 2019, 2021).
Among these studies, one of the largest samples of rotation periods was provided by McQuillan et al. (2014) (hereafter McQ14), measuring periodic brightness variations in 34,030 _Kepler_ stars, which remains one of the largest collections of rotation periods today, which has been used in numerous studies covering a wide range of topics from constraining stellar dynamo theories to understanding the evolution of our Galaxy (see, e.g., van Saders et al. 2019 for one of the most recent examples). Despite this huge number, McQ14 could not unambiguously detect periods in an even larger sample of 99,000 stars.
Recently, Santos et al. (2019, 2021) (hereafter S21) reanalyzed the full _Kepler_ archive and significantly increased the number of detected rotation periods to 55,232 out of 159,442 targets. Both studies (McQ14; S21) identified a decrease of the
period detection rate with increasing effective temperature. M dwarfs have detection fractions of 70-80% and K dwarfs around 50%, whereas the detection fraction drops to \(\sim 30\%\) or less for F and G dwarfs. This observation can be explained by the fact that the variability pattern changes from cooler to hotter stars. Light curves of M dwarfs show very regular periodicity over many rotation periods, whereas G-type stars exhibit more irregular variability, hardly showing any periodicity over long sections of the observations. It was suggested that the cause for this behavior is that the spot lifetimes of these stars are often shorter than the stellar rotation period, which leads to irregularities in the light curves (Giles et al., 2017; Basri et al., 2022).
As a consequence, the rotation periods of many stars around solar spectral type remain undetected in an automated period search because the rotational periodicity is not stable enough to generate a significant peak in the frequency analysis (Reinhold et al., 2021). This implies that the conclusions drawn from many studies of near-solar rotators (e.g. van Saders et al., 2019; Reinhold et al., 2020; Okamoto et al., 2021) might be strongly biased toward the behavior of more active stars, for which rotation periods could be determined. In particular, the relatively small number of stars with known rotation periods and variabilities similar to that of the Sun conveys a false picture of the Sun being unusually quiet compared to other stars with detected rotation period (Reinhold et al., 2020).
Our main goal of this study is to make use of recent developments in understanding stellar brightness variability and utilizing new methods to determine rotation periods of a larger sample of _Kepler_ stars than ever before. This extended sample of stars with determined rotation periods should at least partly remove these biases. Possible applications range from the comparison of observed period distributions in the _Kepler_ field to predictions of Galactic evolution models (van Saders et al., 2019), comparing solar and stellar variabilities (Reinhold et al., 2020), and the search for superflares on solar-like stars (Okamoto et al., 2021; Vasilyev et al., 2022).
To achieve this goal, more rotation period measurements of stars with small, solar-like variabilities are needed. Recently, Shapiro et al. (2020) showed that the correct rotation period of stars with irregular variability can reliably be detected by a novel method that considers the Gradient of the (global wavelet) Power Spectrum (GPS), instead of the power spectrum itself.
The GPS method has been successfully applied to measure the solar rotation period (Amazo-Gomez et al., 2020). Furthermore, it shows good agreement with the previously reported periods of _Kepler_ stars (Amazo-Gomez et al., 2020). We emphasize that, in contrast to classical period analysis methods, the GPS method does not require a repeatable spot pattern in the time series but is sensitive to the typical dip durations of spots crossings (s. Sect. 3). Consequently, it even works in cases when the magnetic features live shorter than the stellar rotation period such that no recoccurring transits of the same magnetic features are required. Hence, the GPS method is ideally suited to measure rotation periods for stars where classical methods failed to detected reliable periods before (Reinhold et al., 2022).
## 2 Data and sample selection
### _Kepler_ data
In this work, we analyze the long-cadence light curves processed by the latest version of the _Kepler_ pipeline (Data Release 25). The data1 are released in _quarters_ with lengths of \(\sim 90\) days, with exceptions for the quarters Q0, Q1, and Q17 that have shorter observing times between \(10-33\) days. In the following, we use all available quarters except for Q0, Q1, and Q17 because these are significantly shorter than the other quarters, which becomes important in the period analysis.
Footnote 1: The data can be retrieved at [https://archive.stsci.edu/Kepler/data_search/search.php](https://archive.stsci.edu/Kepler/data_search/search.php).
_Kepler_ data are known to suffer from various instrumental effects acting on different time scales, and affecting each observing quarter differently strongly. One of the most severe effects are drifts of stars across the detector, leaving long-term up- and downward trends in the light curves. These long-term signals can mimic the variability of slow rotators, and must be treated with caution. Previous attempts cleaned the data from instrumental signals by searching for shared signals across the detector. These so-called cotrending basis vectors are removed from the data by subtracting a linear combination of them from the time series (Kinemuchi et al., 2012; Stumpe et al., 2012; Smith et al., 2012). It was found that this approach bears the risk of underfitting because instrumental signals were not fully removed. An updated version of the _Kepler_ pipeline separates the instrumental systematics by frequency (Stumpe et al., 2014). This approach, however, at times overcorrects the data, and removes true astrophysical signals.
Even though the data used here were reduced with the latest pipeline, visual inspection showed that the reduction was far from being perfect. Many quarters still contained instrumental trends which showed an increased variability compared to the other quarters for a given star. Such instrumental trends are often found every 4th quarter because the _Kepler_ telescope rolls by 90 degrees every quarter such that a certain target falls on the same CCD every 4 quarters. To identify such cases in an automated way, we use a common metric that characterizes the light curve variability: the variability range \(R_{\rm var}\)(Basri et al., 2010, 2011). This measure computes the difference between the 95th and 5th percentile of the sorted differential intensities. Here, we compute the variability range from the 3-hours binned time series for each quarter individually, which we denote by \(R_{\rm var,\,q}\), and also compute the median of all quarters \(R_{\rm var,\,med}\). After trying different thresholds, we found that all quarters with variabilities \(R_{\rm var,\,q}>3\cdot R_{\rm var,\,med}\) should be discarded from the analysis. A table with all removed quarters can be found in the online version of the paper.
### _Sample_
The GPS method was originally developed and calibrated to measure periods of stars with near-solar effective temperatures, including the solar rotation period (s. Sect. 3). However, it was found that the method also yields reliable periods for stars of later spectral type (Amazo-Gomez et al., 2020). Thus, we use the revised stellar properties catalog of Mathur et al. (2017) and select main-sequence stars with effective temperatures \(T_{\rm eff}\leq 6500\) K and surface gravities \(log\,g\geq 4.0\). We further discard all stars matching the _Kepler_ eclipsing binary catalog2 by Kirk et al. (2016), as well as 9 stars with residual instrumental systematics (with KIC numbers 6063291, 6126271, 7627042, 7800157, 11393439, 1141728, 11515679, 11805150, 11808713). This selection leaves 142,168 stars in total. These rather loose criteria should ensure that the targets lie on the main sequence (or close to it).
Footnote 2: The catalog can be found at [http://Keplerebs.villanova.edu/Kepler/data_search/search.php](http://Keplerebs.villanova.edu/Kepler/data_search/search.php).
Recently, Berger et al. 2020 (hereafter B20) published an updated catalog of fundamental parameters of \(\sim 186,000\)_Kepler_ stars taking into account Gaia DR2 parallaxes. The fundamental parameters of both catalogs clearly show some deviation. In particular, the B20 temperatures are roughly \(200\,\mathrm{K}\) cooler and the B20 surface gravities are on average \(0.2\) dex smaller than the values given in Mathur et al. (2017). We find that \(113\),\(867\) of the selected \(142\),\(168\) stars fulfill the chosen criteria for the parameters given in the catalog of B20. We note that the main goal of this study is to measure rotation periods for as many stars as possible, and not to decide on the accuracy of the fundamental parameters, which is beyond the scope of this study.
## 3 Methods
### The GPS method
First, we prepare the final light curves used in the following analysis. The flux of each observing quarter Q2 to Q16 is divided by its median, subtracted by unity, and outliers are removed which exceed 6 times the median absolute deviation. The time series are appended and the resulting light curve is binned to 3 hours, forming the final light curves used in this analysis. The binning reduces the (photon + granulation) noise in the light curve and 3-6 hours is a typical granulation timescale, dominating the variability in this time interval. We note that rotational variability (i.e., our main focus) starts to dominate on 6 hours and longer time scales.
An example light curve of the star KIC 7831394 is shown in the top panel of Fig. 1. The light curve clearly shows variability on rotational time scales (especially after binning over 3 hours), making it a promising candidate for our period analysis. In the second panel of Fig. 1, we compute the auto-correlation function (ACF) using the IDL function A_CORRELATE. We further subtract the ACF minimum and normalize it by dividing out the maximum such that all ACF values lie between zero and one. For stars with periodic variability patterns, the ACF has proven to be a good tool for measuring the rotation period. The ACF of the full time series is shown in black. The highest peak is found at a period of \(P_{\mathrm{rot,\,ACF}}=26.28\) days. As goodness measure of the period, we compute the local peak height (LPH) as the difference between the highest peak (red asterisk) and the mean of the two troughs on either side (see, e.g., Reinhold et al. 2021 for details).
Additionally, we compute the ACF for each quarter Q2 to Q16 individually and compute the mean ACF power in \(0.1\) day bins. This function is referred to as the _local_ ACF, in contrast to the _global_ ACF described above. The local ACF is shown in red on top of the global ACF in Fig. 1. For quite periodic light curves, both functions are very similar. However, for stars with more irregular variability, the local and global ACF often differ. As a consequence, the highest peak of the one does not match the one of the other, and the period is unclear. We return to this point in Sect. 3.2.
Now we compute the wavelet power spectrum of the full time series (third panel in Fig. 1). The highest power is found at a period of \(\sim 15.17\) days. As we will see later, this period is likely an artifact of the high-pass filtering of the _Kepler_ data. The bottom panel of Fig. 1 shows the gradient of the power spectrum (GPS). The gradient is computed using eq. 3 in Shapiro et al. 2020 (for details, we refer the reader to Shapiro et al. 2020). The maximum of the gradient corresponds to the position of the inflection point (IP), i.e., the point where the curvature of the high-frequency tail of the power spectrum changes its sign. The period at this inflection point, \(P_{\mathrm{IP}}\), is linked to the stellar rotation period \(P_{\mathrm{rot}}\) by the simple equation
\[P_{\mathrm{rot}}=P_{\mathrm{IP}}/\alpha \tag{1}\]
where \(\alpha\) is a calibration factor (s. Sect. 4). The main idea behind the GPS method is that the high-frequency tail of the power spectrum is much less affected by the evolution of magnetic features than the power spectrum peak associated with the rotation period (see Fig. 3 in Shapiro et al. 2020). In this case, the inflection period is found at \(P_{\mathrm{IP}}=4.76\) days, indicated by the blue line and the red asterisk. This period is linked to the rotation period by eq. 1. Using the calibration factor \(\alpha=0.217\) derived by Reinhold et al. (2022) yields a rotation period \(P_{\mathrm{rot,\,GPS}}=21.94\) days.
This result is in good agreement with the rotation period of \(22.11\) days derived by S21. However, the period \(P_{\mathrm{rot,\,GPS}}\) shows some discrepancy to the ACF period, and is very different from the one at the highest peak of the power spectrum. As we will see later, the ACF method often fails to detect the correct rotation period, the more irregular the variability gets.
### Goodness of the GPS periods
For each light curve, the GPS method returns an inflection point period. However, it is not always clear if this period can be associated with the rotation of active features over the stellar surface. Thus, we define different goodness metrics for the derived GPS and ACF periods. Eventually, these metrics are combined to a point system that assigns a certain number of points to each star to assess the period reliability: the higher the number of points, the more periodic the signal.
Most commonly, classical period analysis methods define the highest peak of the power spectrum (or the first ACF peak) as the strongest periodicity in the data. It was found by visual inspection of many different _Kepler_ stars that the highest GPS peak nicely scales with the periodicity in the light curve. If the highest peak lies in the range \(0.5\)-\(10\) days, we save the inflection point period \(P_{\mathrm{IP}}\) and the associated peak height \(h_{\mathrm{IP}}\). The lower limit of \(0.5\) days is basically determined by the 3h binning of the data (the Nyquist frequency would be 1/6h), whereas the upper limit of \(10\) days should prevent running into problems with the data reduction (see Fig. 8 and subsequent discussion). According to Eq. 1, the considered range of inflection periods enables us to detect rotation periods in the range \(\approx 2.3-46.7\) days.
To compute the power spectra, we use the IDL function \(W\_CWT\), which returns the continuous wavelet transform, and set the keyword \(\mathrm{scale}{=}1/32\), which affects the peak height values. In this normalization, the peak height distribution \(h_{\mathrm{IP}}\) ranges from \(1-1.15\), with a median height of \(1.05\). Strictly periodic stars have large peak heights \(h_{\mathrm{IP}}>1.06\) to which we assign \(1\) point. Less periodic but still variable stars have peak heights \(1.04<h_{\mathrm{IP}}<1.06\) to which we assign \(0.5\) points. Light curves that are completely dominated by noise have even smaller peak heights and, thus, get \(0\) points.
Even though it was shown that the highest power spectrum peak itself is not necessarily a good measure of the rotation period, we can still use the power spectrum to define another goodness measure: we call the ratio between the power at the inflection point and the minimum power of the spectrum the signal-to-noise ratio SNR (see third panel in Fig. 1). Similarly to \(h_{\mathrm{IP}}\), this quantity also scales with the periodicity. By visual inspection of many light curves and the SNR distribution of all stars, we assign \(1\) point if SNR \(>50\), \(0.5\) points if \(10<\mathrm{SNR}<50\), and \(0\) points otherwise.
For the ACF, we have already defined the LPH as a goodness metric (see, e.g., Reinhold et al., 2021). These authors found that strong periodicity is usually found for \(\mathrm{LPH}>0.2\) (1 point). Less periodic time series still reach values \(0.1<\mathrm{LPH}<0.2\) (0.5 points), and purely noisy stars exhibit small \(\mathrm{LPH}<0.1\) (0 points). We note that the LPH used here always refers to the global ACF. Additionally, we compare the global and the local ACF period. If these two periods agree within 10%, we add another 0.5 points.
By visual inspection it was found that many light curves are dominated by noise and hardly show any variability, even less periodicity. Nevertheless, all metrics above will return some values since they respond to any signal (even to pure noise) in the time series. However, these very quiet stars can be identified by comparing the variabilities of the unbinned and binned time series. In Fig. 2, we show the relative fluxes of the unbinned (black) and 6-hours binned (red) data as a histogram. The left panel of Fig. 2 shows that both flux histograms nicely overlap (also see top panel in Fig. 1), which means that the light curve of the star KIC 7831394 is dominated by (rotational) variability. On the contrary, the binned and unbinned flux distributions of the star KIC 11802969 (right panel) look very different. The 6-hours binning reduced the noise in the light curve and the remaining variability is small, which means that the photometric variability of this star is completely dominated by noise.
Instead of looking at flux distributions, we can simply compute the variability ranges \(R_{var}\) of the unbinned and 6-hours binned time series, and compute their ratio. If the \(R_{var,6h}/R_{var}\) ratio is close to unity, the light curve variability is dominated by rotation. By visual inspection, we assign 1 point if \(R_{var,6h}/R_{var}>0.6\), 0.5 points if \(0.4<R_{var,6h}/R_{var}<0.6\), and 0 points otherwise. We note that also the 3-hours binned value \(R_{var,3h}\) could have been used instead but the 6h-binning reduced the noise even stronger.
Combining all metrics defined above yields points in the range 0 (pure noise) to 4.5 (highest periodicity) for each star. For the sake of clarity, the point metric is summarized in Table 1. We will return to this point system in Sect. 4.
## 4 Results
For the 142,168 stars in our sample, we could measure an inflection point period within \(0.5-10\) days for 141,151 stars. We note that a fraction of this sample was assigned zero points in the end
Figure 1: Example of a variable _Kepler_ star and different period analysis methods applied to the data. Top panel: Original (black dots) and 3h-binned (red line) light curve of the star KIC 7831394. Second panel: global (black) and local (red) auto-correlation functions (ACFs). The best ACF period is found at 26.28 days. Third panel: Global wavelet power spectrum (black). The vertical blue line indicates the position of the inflection point. The ratio between the power at the inflection point and the minimum is defined as signal-to-noise ratio (SNR), indicated by the orange arrow between the dotted lines. Bottom panel: Gradient of the Power Spectrum (GPS). The maximum is found at the inflection period \(P_{\mathrm{IP}}=4.76\) days, marked by the blue line and the red asterisk.
(compare Fig. 5). We now compare our results to a sample of stars with previously determined periods. This comparison will show whether the calibration factor \(\alpha=0.217\), previously determined for models of solar-like stars (Reinhold et al., 2022), still holds for real data. In Fig. 3, we show the measured inflection periods \(P_{\rm IP}\) against the rotation periods derived by McQ14 (upper panel) and S21 (lower panel) for the stars in common. The solid black line shows the relation given in Eq. 1 with the calibration factor \(\alpha=0.217\). Both panels clearly show the linear dependence between both periods for the vast majority of stars. However, a second branch with (slightly more than) twice the inflection period, and consequently twice the rotation period, is visible in both panels. The origin of this _double period branch_ is discussed in Sect. 4.4. We further note that the agreement of our period measurements (\(P_{\rm IP}/\alpha\)) becomes weaker for \(P_{\rm rot,\,Satons}>30\) days. This is caused by the fact that the PDC-MAP pipeline does not preserve variability on these timescales, so the periods retrieved by classical periods analysis tools (as used in McQ14 and S21) are less reliable (also see Sect. 4.1).
Fig. 3 revealed that there is indeed a linear dependence between the inflection period \(P_{\rm IP}\) and the rotation period \(P_{\rm rot}\). However, this relation is accompanied by large scatter. As mentioned in Shapiro et al. (2020), and later shown in detail by Reinhold et al. (2022), the calibration factor \(\alpha\) has an intrinsic uncertainty of \(\sim 25\%\). In Fig. 4, we show the ratio of the inflection period to the rotation period derived by McQ14 (left panel) and S21 (right panel). The distributions have a Gaussian shape centered at \(\langle\alpha\rangle=0.212\) (left) and \(\langle\alpha\rangle=0.213\) (right) with standard deviations \(\sigma_{a}=0.023\) (left) and \(\sigma_{a}=0.029\) (right). These values are in good agreement with the value \(\alpha=0.217\) derived by Reinhold et al. (2022). The small bump at twice these \(\alpha\) values is associated with the upper branch in Fig. 3.
We now turn to the point system defined at the end of Sect. 3. In Fig. 5 we show the distribution of points allocated to each star. The sample of McQ14 is shown in blue and the stars in common with S21 are shown in red. It is obvious that the number of stars with previously determined periods steeply increases with the number of points, and that the vast majority of those stars has the highest possible number of points. That means that these stars exhibit very periodic light curves where the periodicity is picked up easily with standard tools.
As with almost every frequency analysis tool, setting thresholds for the detected peaks or the period significance is quite subjective. Visual inspection of many different quiet and active stars led us to count all stars with \(\geq 3\) points as period detections. This threshold requires that at least one of the metrics has 1 point assigned. It is therefore not very conservative but a reasonable choice. We further note that below this threshold only very few rotation periods have been found in the surveys of McQ14 and S21, which makes it unlikely that many stars with measurable period have been missed. Although one cannot completely rule out the possibility that there exist stars with measurable periods below 3 points, lowering the threshold would make the periods less reliable. In total, the threshold is satisfied for \(67,515\) stars, which we focus on in the following.
In Fig. 6 we show the distribution of \(67,515\) GPS rotation periods of all stars with \(\geq 3.0\) points. Here we used \(\alpha=0.213\), derived in Fig. 4, to retrieve the rotation period. The periods determined by McQ14 and S21 are shown as blue and red curves, respectively. Compared to the previous surveys, the GPS method retrieves periods of a larger number of stars.
In particular, 20,397 new periods were detected that have not been reported before. Most of the newly determined periods are longer than \(\sim 28\) days. Furthermore, the median variability of these 20,397 stars with newly detected periods equals \(R_{\rm var,\,3h}=0.085\%\), which is very close to the solar value \(R_{\rm var,\,Sun}=0.07\%\) (compare Reinhold et al., 2020), and so much smaller than the average variability of all 67,515 stars (\(R_{\rm var,\,3h}=0.17\%\)). We emphasize that the detection of these stars with _near-solar_ rotation periods and variabilities is a clear benefit of the GPS method.
The reasons why these periods have been missed in previous surveys are manifold: McQ14 used quite conservative thresholds to detect periods, which removes many less periodic slow rotators. Moreover, these authors only considered Q3-Q14 data. Since that time, also the _Kepler_ pipeline changed several times. S21 analyzed light curves reduced with their own pipeline as well as those reduced with the latest version (DR25). These authors also combined different period analysis tools and used a machine-learning approach to finally detect periodicity. The GPS method could detect even more (and longer) periods be
\begin{table}
\begin{tabular}{c c c c} \hline points & 0 & 0.5 & 1 \\ \hline \(h_{\rm IP}\) & \(<1.04\) & \(1.04-1.06\) & \(>1.06\) \\ SNR & \(<10\) & \(10-50\) & \(>50\) \\ LPH & \(<0.1\) & \(0.1-0.2\) & \(>0.2\) \\ \(R_{var,\,6h}/R_{var}\) & \(<0.4\) & \(0.4-0.6\) & \(>0.6\) \\ \hline \end{tabular}
\end{table}
Table 1: Point system for the individual goodness metrics.
Figure 2: Distribution of the 6h-binned (red) and unbinned (black) flux values of the stars KIC 7831394 (left) and KIC 11802969 (right).
## 4 Results
Figure 3: Inflection point period \(P_{\rm IP}\) vs. rotation period derived by McQ14 for 30582 stars (upper panel) and S21 for 48264 stars (lower panel). The solid black line shows the relation given in Eq. 1 with \(\alpha=0.217\). Both panels exhibit an upper branch where the GPS method detects (slightly more than) the double of the inflection period.
cause it does not require a repeatable spot pattern as the other techniques do. The very short periods, however, cannot be accessed because we set a lower limit of 0.5 days to the inflection period, which translates into a lower rotation period limit of \(\approx\) 2.3 days.
We also checked the cases where a period was reported by McQ14 and/or S21 but not detected in this survey. There exist 11,094 periods in these surveys that do not have a period reported here. The majority of these stars were initially not considered in this study because they either had effective temperatures greater than 6500 K or \(log\,g<4.0\) dex. Only 3313 out of the 11,094 stars were considered in this study but had a point number smaller than 3.0. Of those, 1064 stars had a reported period outside the considered range of inflection periods between 0.5-10 days, and thus could not have been detected. However, the point distribution of the remaining 2249 stars continuously increases toward our lower limit of 3.0, with a mean number of 2.0 points. This result indicates that the imposed point limit might be lowered, which would eventually lead to even more period detections.
The better performance of the GPS method can also be seen in Fig. 7. Here, we show the detection rate as a function of stellar variability. In general, the detection rate increases with variability (without any variability, nothing can be detected). Interestingly, the steepest rise happens shortly after the solar mean variability (gray dashed line). This observation shows that the spot signals - compared to the photon noise in the light curves - start to dominate at this variability. We further see that the black and the red curves show very similar qualitative behavior: both curves steeply increase at small, solar-like variabilities \(R_{\rm var}>0.1\)%, and level off at a detection rate of \(\approx 93\)% for variabilities \(R_{\rm var}>1\)%. However, it is obvious that the GPS method detects much more periods for stars with smaller variability.
The detection rate of the McQ14 sample (blue curve) shows a slightly different behavior for variabilities \(R_{\rm var}>0.3\)%, where it breaks through the black and red curves, and eventually reaches almost 100% for the most variable stars. As mentioned above, a different pipeline has been used in McQ14 that better preserved stellar variability. However, this cannot be the reason for the higher detection rate because all values used in Fig. 7 have been computed from the latest pipeline used in this study. We attribute the even higher detection rate of the variable stars to an
Figure 4: Distribution of \(\alpha=P_{\rm IP}/P_{\rm rot}\) for the same stars as shown in Fig. 3 for the rotation periods \(P_{\rm rot}\) of McQ14 (left panel) and S21 (right panel). The red curve shows a Gaussian fit with mean \(\langle\alpha\rangle=0.212\) and standard deviation \(\sigma_{\alpha}=0.023\) (left panel) and \(\langle\alpha\rangle=0.213\) and \(\sigma_{\alpha}=0.029\) (right panel).
Figure 5: Distribution of the points metric for all 141,151 stars with measured inflection period (black), and the ones in common with McQ14 (blue) and S21 (red).
Figure 6: Rotation period distribution of the 67,515 stars with points \(\geq 3.0\). The rotation periods derived by McQ14 and S21 are shown in blue and red, respectively.
extensive visual light curves inspection of McQ14, in contrast to the purely automated approach applied by S21 and in this study.
### GPS vs. ACF periods
The rotation periods in McQ14, and to a large extent also those in S21, have been determined by the auto-correlation function (ACF). Thus, we also computed ACF periods for each star, with an upper period limit of 70 days. In this Section, we compare the ACF and the GPS periods with each other to test the performance of both methods, and to show their limitations.
Fig. 8 shows the ACF against the GPS periods for all stars where both periods could be measured and the local peak height (LPH) of the ACF peak at least fulfills the mild criterion LPH \(>0.1\). For periods less than 20 days there is good overlap between the two methods, as indicated by the 1:1 line (red). Also the secondary branch at twice the ACF period is visible.
A striking feature is certainly the pile-up of ACF periods around 15 days, best visible at the top histogram. We attribute these periods to the high-pass filtering of the data in the latest data reduction, and emphasize that most of these periods are of instrumental origin because such an accumulation of periods is not seen for the GPS periods. It is important to note that there are actually cases where also the GPS returns a period around 15 days, and this periodicity is clearly seen in the light curves. We conclude that the ACF periods in the range 10-20 days cannot be trusted without independent confirmation by another method. The pile-up of periods at around 15 days was also noted and dismissed by Basri et al. (2022), but without the benefit of this independent analysis.
In Fig. 9, we show the same as in Fig. 8 for different LPH thresholds. Additionally, the data are color-coded with the point metric defined above. It is apparent that the majority of stars with \(\geq 3.0\) points are located either along the 1:1 or the 2:1 line for all LPH thresholds. This tendency becomes even more evident with increasing the LPH threshold from 0.1 to 0.4 (upper left to lower right panel), which emptes most of the other plot regions. Similar to Fig. 4, we compute \(\alpha=P_{\rm IP}/P_{\rm rot,ACF}\) for the different LPH thresholds, and fit the distributions with a Gaussian. One derives very similar mean values increasing from \(\alpha=0.204\pm 0.038\) (LPH \(>0.1\)) to \(\alpha=0.214\pm 0.023\) (LPH \(>0.4\)). We further note that also the ACF peak around 15 days becomes less pronounced as LPH increases.
### Dependence of \(\alpha\) on stellar parameters
We have seen that the mean \(\alpha\) values are very similar for the different samples considered so far. In this Section, we show how \(\alpha\) depends on different stellar parameters, and what can be learned from that about the stars. In the following, we assume that the derived ACF periods are good measures of the stellar rotation period, at least for stars with a high number of points. In Fig. 10, we show \(\alpha=P_{\rm IP}/P_{\rm rot,ACF}\) as a function of the ACF period \(P_{\rm rot,ACF}\). We see that the stars with the largest number of points accumulate around \(\alpha=0.213\) (indicated by the black horizontal line) and a bit more than twice that value (i.e. the double period branch). The plot nicely shows that \(\alpha\) does not show any dependence on rotation up to periods of \(\approx 35\) days. Beyond that period, there are much less stars with a high point score and, most importantly, the ACF periods become less reliable. Thus, we conclude that the same \(\alpha\) value can be used to derive rotation periods of fast and slow rotators.
The following three figures show the dependence of \(\alpha\) on the stellar fundamental parameters effective temperature \(T_{\rm eff}\) (Fig. 11), surface gravity \(log\)\(g\) (Fig. 12), and metallicity [Fe/H] (Fig. 13). Fig. 11 reveals that \(\alpha\) shows very little dependence on effective temperature from 4000-6000 K. For cooler stars below 4000 K, \(\alpha\) seems to increase. The opposite effect is found for stars hotter than 6000 K, where \(\alpha\) decreases. To emphasize this effect, we restrict the main \(\alpha\) branch to those stars between \(0.1<\alpha<0.3\) and \(\geq 3.0\) points, and overplot the mean alpha value in the 100 K wide temperature bins as violet star symbols.
For the hot stars, the relative decrease of \(\alpha\) can be explained by the fact that the inflection period is sensitive to the spot lifetimes. Giles et al. (2017) showed that spots have shorter lifetimes on G- and F-type stars compared to later-type stars. This observation was recently confirmed by Basri et al. (2022), who used a similar approach as Giles et al. (2017) to assess the spot lifetimes. Reinhold et al. (2022) showed that the periods measured by the GPS method are shorter than the rotation period when the spot lifetimes are shorter than 2 complete rotations. As a consequence, the \(\alpha\) values are smaller then the average value for these hot stars.
For stars cooler than 4000 K, however, we attribute the increase of \(\alpha\) to another effect. Reinhold et al. (2022) further showed that the inflection point period is sensitive to the duration of a spot crossing. This dip duration depends on the spot latitude and the stellar inclination (neglecting the spot evolution on this comparatively short time scale). Spots at higher latitudes generate more sinusoidal dips in the light curves, and so have a longer dip duration than e.g. equatorial spots. The same is true for lower latitude spots on a highly-inclined star. We cannot break this degeneracy but we can argue that inclination, as a geometrical effect, is independent on effective temperature. Thus, we argue that inclination is partly responsible for the spread of \(\alpha\) along the mean value but can be ruled out as explanation for the increased \(\alpha\) here.
Instead, we propose that these very cool stars exhibit spots at higher latitudes than warmer stars. This idea is tested by a simple spot model in Sect. 4.4. Additionally, we tested if the center-to-limb variation (CLV) for cooler stars changes such that \(\alpha\) might show an increase (see appendix). However, this is not the case and can be ruled out as explanation here.
Fig. 12 shows the dependence of \(\alpha\) on \(log\,g\). One sees that there is also very little dependence on surface gravity, except for
Figure 7: Detection rate as a function of the variability range \(R_{\rm star}\). The black curve shows the detection rate of the GPS method, and the and the detection rates of the methods employed by McQ14 and S21 are shown in blue and red, respectively.
Figure 8: ACF vs. GPS periods for more than 90,000 stars with LPH \(>\) 0.1, with the associated histograms to their sides. The GPS periods have been calculated using the same \(\alpha=0.213\) as in Fig. 6. The red solid line shows the 1:1 identity.
Figure 9: ACF vs. GPS periods for different LPH thresholds indicated at the top of each panel. The data are color-coded with the points system. The black line shows the 1:1 identity.
the high and low gravity ends. These dependencies, however, are the same as those in the previous plot because of the dependence between surface gravity and effective temperature on the main sequence. For instance, the stars at \(log\,g>4.8\) with the highest \(\alpha\) values have almost exclusively temperatures below 4000 K.
In Fig. 13 we show the dependence of \(\alpha\) on metallicity. Along the main branch (black line), no dependence on metallicity is visible. This result is consistent with the latest tests of the GPS method on simulated data: Reinhold et al. (2022) found that \(\alpha\) does not show any dependence on metallicity between \(-0.4\leq\mathrm{[Fe/H]}\leq 0.4\) dex for simulated time series of solar-like stars.
### Dependence of \(\alpha\) on activity
In the previous section, we demonstrated that \(\alpha\) shows very little dependence on rotation and the fundamental stellar parameters. Here, we test the dependence of \(\alpha\) on two well-known measures of stellar activity. In Fig. 14, we show \(\alpha\) as a function of the S-index, which is defined as the ratio of the flux in the Ca II ii ii and K lines, normalized to the flux in the R and V bands (see Vaughan et al. 1978 for details). This activity indicator was chosen because it is a well-established measure of stellar chromospheric activity (see, e.g., Noyes et al. 1984). Furthermore, it is independent of the _Kepler_ data, in contrast to other photometric activity indices (such as the index \(S_{\mathrm{ph}}\) used by Mathur et al. 2014 or the measure \(MDV\) used by Basri et al. 2013). The S-indices are taken from the catalog of Zhang et al. (2022) using the calibration to the Mount Wilson scale (Eq. 6 in Zhang et al. 2022). We find 18,797 matches between the LAMOST catalog and the stars in our sample. Fig. 14 shows that at \(\alpha\) does not strongly depend on S-indices greater than 0.3. However, the spread of \(\alpha\) becomes much stronger toward smaller S-indices. Since the S-index itself is not corrected for its dependence on effective temperature this spread cannot solely be attributed to smaller activity.
Taking a look at the LAMOST S-index distribution shows that the vast majority of stars exhibits rather small S-indices between 0.1-0.3. This result is likely a selection effect because all these stars have effective temperatures greater than \(\approx 4800\) K. Moreover, the S-index distribution is quite narrow around the mean value of 0.2 (even for different effective temperature bins). We attribute this to the low resolution of the LAMOST instrument, which makes it more difficult to assess the true stellar ac
Figure 11: Dependence of \(\alpha\) on \(T_{\mathrm{eff}}\). The colors and the horizontal line are the same as in Fig. 10. The violet star symbols show the mean \(\alpha\) value in the 100 K wide temperature bins.
Figure 12: Dependence of \(\alpha\) on \(log\,g\). The colors and the horizontal line are the same as in Fig. 10.
Figure 10: ACF periods vs. \(\alpha=P_{\mathrm{H}}/P_{\mathrm{ext,ACF}}\). The data are color-coded with the points metric. The black horizontal line indicates the mean value \(\alpha=0.213\).
Figure 13: Dependence of \(\alpha\) on \(\mathrm{[Fe/H]}\). The colors and the horizontal line are the same as in Fig. 10.
tivity level, and in particular, the true dependence of \(\alpha\) on activity.
A well-known quantity closely related to activity is the photometric variability. Here, we use the quantity \(R_{\mathrm{var},\,3h}\) as a measure of the rotational variability, and show \(\alpha\) as a function of \(R_{\mathrm{var},\,3h}\) in Fig. 15. This figure clearly shows that \(\alpha\) is almost constant down to variabilities \(R_{\mathrm{var},\,3h}=0.2\%\), which almost equals the solar maximum variability (Reinhold et al. 2020), and that these stars have a high number of points. Down to smaller variabilities around \(R_{\mathrm{var},\,3h}=0.1\%\), the \(\alpha\) values start to show large spread. However, the horizontal branches extend down to very low variabilities with a moderate point number greater than 2 (green dots). This result clearly shows that the GPS method is able to detect the correct rotation period even for stars with very small variabilities. At the same time, also the ACF method yields the correct rotation period for these greenish stars (remember that we defined \(\alpha=P_{\mathrm{IP}}/P_{\mathrm{rot},\,\mathrm{ACF}}\)). For the cloud of blue dots below \(R_{\mathrm{var},\,3h}=0.1\%\), likely the ACF method returns a wrong period because we did not impose any LPH threshold here.
### The double period branch
To better understand the origin of the double period branch, we employed a simple spot model. In this toy model, circular spots of a certain size and contrast can be placed on a sphere. The sphere can then be rotated, either as rigid body or differentially, and viewed from different inclination angles \(i\). Since the GPS method is sensitive to the spot profile, we chose a simplified model with a single spot of fixed radius \(5^{\circ}\) sitting at random latitudes and longitudes (both uniformly distributed) and inclination angles (uniform in \(\cos i\)). We arbitrarily chose a rotation period of 10 days and simulated 5 complete revolutions of rigid rotation, i.e., a time series spanning 50 days.
We computed 500 models and applied the GPS method to them. The result is shown in Fig. 16. Here, we plot the inclination of the model star against the spot latitude because both quantities affect the spot profile in the light curve, and color-code each point with the derived \(\alpha\) value. There is a sharp separation between blue points in the lower right half, where the correct value of \(\alpha\approx 0.21\) is derived, and the yellow-greenish dots in the upper left part of the diagram, where roughly twice the correct \(\alpha\) value is measured. We note that there are more dots in the lower right half, which means that we can measure the correct period for a large number of possible spot and inclination angle alignments, and that the transition between the two regimes is rather discrete (also compare Fig. 14).
This result again confirms that the GPS method is sensitive to the spot profile in the light curve (Reinhold et al. 2022). It is known that both higher latitude spots and/or strongly inclined stars render the spot profiles more sinusoidal, which is equivalent to an increase of the dip duration in the light curve. Since the inflection point is proportional to the dip duration, the inflection periods become larger, reaching roughly twice the correct \(\alpha\) value when the dip duration equals one full rotation period.
So far, we mostly advertised the novel GPS method, especially when stellar variability is low and more irregular. However, also this method has its shortcomings, e.g., it sometimes detects twice the correct rotation period. Visual inspection of those light curves where GPS detects twice the period derived by McQ14 or S21 shows that this mostly happens in cases when the variability is very periodic, and so the LPH values are large.
Figure 16: GPS method applied to model light curves. The dots show the inclination and the spot latitude of the 1-spot models. The color-coding shows the derived \(\alpha\) value.
Figure 14: Dependence of \(\alpha\) on the LAMOST S-index. The colors and the horizontal line are the same as in Fig. 10.
Figure 15: Dependence of \(\alpha\) on the variability range \(R_{\mathrm{var}}\). The colors and the horizontal line are the same as in Fig. 10.
Consequently, in these cases the ACF method yields the correct period with high confidence.
To better quantify this observation, we once again consider the rotation period sample of S21. For this purpose, we define the number of stars in the main branch, \(N_{\rm min}\), as those stars where the GPS and the S21 period differ by less than 10%. Similarly, the number of stars in the double period branch, \(N_{\rm double}\), are defined such that the GPS and twice the S21 period differs by less than 10%. In the left panel of Fig. 17, we show the "double fraction" \(N_{\rm double}/(N_{\rm main}+N_{\rm double})\) as a function of the LPH. We see that this fraction is rather flat for LPH \(>0.2\) (the increase in the last bin can safely be attributed to the much smaller number of stars in both branches). This so-called _double floor_ with a median of \(\approx 6\)% accounts for the cases with very symmetric light curves. Assuming the employed toy model correctly distinguishes between the main and double period branch, this percentage must be considered as the fraction of stars with rotation axes sufficiently inclined towards Earth that their light curves look rather sinusoidal even for spots at low latitude.
A different behavior is seen for small LPH values where the double fraction increases. In this regime (LPH \(<0.2\)), the light curve variability is less periodic, and the GPS method is superior to standard methods (Reinhold et al. 2022). As a consequence, here the "double period" measured by GPS is likely the correct rotation period. We also note that in this regime much fewer stars are contained in both branches because it is more difficult to detect periods in light curves with shallow periodicity (using standard methods such as S21). This fact is accounted for by the larger error bars assuming \(\Delta N=\sqrt{N}\) for each branch.
Complementary to the LPH dependence, we show the double fraction as a function of the GPS rotation period in the right panel of Fig. 17. Also here different regimes can be observed: as mentioned previously, the GPS method is not sensitive to very short periods, and so the increase of the double periods for \(P_{\rm rot,GPS}<5\) days can be ignored. Over the wide period range from 5-30 days, the double fraction shows a shallow decrease, which may be consistent with a decrease of spot latitude with rotation period but this is rather speculative. For periods greater than 30 days, a steep increase of the double period is observed (although the number of stars also strongly decreases beyond 40 days). This period range is exactly the regime where GPS is superior to classical methods and likely returns the correct period, similarly to the LPH \(<0.2\) range in the left panel of this figure.
### Selection of final rotation period
Figure 17 nicely demonstrates the region of validity of either method, and so helps to define a _final_ rotation period. As final rotation period \(P_{\rm rot,fin}\), we use the ACF period if \(0<P_{\rm rot,ACF}\leq 10\) days and LPH \(\geq 0.1\). This decision is based on the fact that the ACF finds very accurate periods for fast rotators with rather sinusoidal shape and moderate to high peak heights.
In the period range \(10<P_{\rm rot,ACF}\leq 20\) days, we saw that the _Kepler_ pipeline induces an artificial pile-up (see Fig. 8). We compare the ACF and GPS periods in this range, and use the ACF period if they differ by less than 10% and the ACF fulfills the minimal requirement LPH \(\geq 0.1\); the GPS period is used otherwise.
For \(P_{\rm rot,ACF}>20\) days, the periodicity becomes weaker, the LPH values smaller, and so the GPS method superior to the ACF. Thus, we use the GPS periods here, and also for the few cases where no ACF period could be measured. The chosen parameters for the definition of the final period are summarized in Table 2.
We note that these period thresholds are highly subjective (as most thresholds) but relies on the expertise of the authors with various kinds of light curves and frequency analysis methods. Depending on the period and LPH thresholds, the number of final rotation periods determined by the GPS and the ACF methods varies. Using our thresholds, we detect 67,163 final rotation periods with 17,246 ACF and 49,917 GPS periods. We note that there exist 352 stars \((67,515-67,163)\) with \(\geq 3.0\) points that have a measured GPS period but no final period assigned. These stars mostly do not satisfy the very mild LPH \(\geq 0.1\) thresholds. A parameter table for all stars with measured GPS period and \(\geq 3.0\) points assigned is given in the appendix (Table 2).
## 5 Summary and Conclusions
In this study we applied the novel GPS method to the light curves of tens of thousands main-sequence stars observed by the _Kepler_ telescope. Although this huge data set has previously been combed for rotation periods (e.g., see Reinhold et al. 2013; McQuillan et al. 2014; Santos et al. 2021), we showed that the GPS method was able to measure 20,397 periods that have not been detected before. One reason for that is that the GPS method is superior to standard frequency analysis methods (such as the ACF) for detecting rotation periods of stars with small and irregular variability.
Another important result about these 20,397 new periods was that their average rotation period was found to be \(\sim 28\) days, and their variability \(R_{\rm var,3h}=0.085\)%, so both the rotation period and the variability amplitude are very close to the solar values. The detection of these "solar-like" rotation periods is certainly a benefit of the GPS method.
Another advantage of the GPS method is that the high-frequency tail of the power spectrum, i.e., the period regime GPS searches for the inflection point, is much less sensitive to instrumental residuals. In stark contrast, the ACF method reveals a pile-up of periods between 10-20 days (for low to mild LPH, i.e. \(0.1<LPH<0.2\)). We conclude that one cannot trust ACF periods in this period and LPH range without independent confirmation by the GPS method. Another option would be to apply a customized data reduction that does not undergo a high-pass filter (likely responsible for the period pile-up at \(\sim 15\) days), and then apply a standard method such as ACF.
Furthermore, our work suggests a temperature dependence of the GPS calibration factor \(\alpha\) for \(T_{\rm eff}<4000\) K. Employing a simplified spot model, we argue that this increase of the inflection point periods is caused by spots located at higher latitude for these cool (likely early M dwarf) stars. We emphasize that this information could not be extracted by other methods before. Thus, we conclude that GPS rotation periods should be combined with other spectroscopic measurements to better constrain potential spot locations on the stellar surface.
In total, we were able to measure 67,163 final rotation periods by combining the ACF and the GPS method. Compared to
\begin{table}
\begin{tabular}{|c|c|} \hline Parameter range & Method \\ \hline \(0<P_{\rm rot,ACF}\leq 10\) days \& LPH \(>0.1\) & ACF \\ \hline \(10<P_{\rm rot,ACF}\leq 20\) days \& \& \\ \(|P_{\rm rot,ACF}-P_{\rm rot,GPS}|/P_{\rm rot,ACF}<0.1\) \& LPH \(>0.1\) & ACF \\ \hline \(10<P_{\rm rot,ACF}\leq 20\) days \& \& \\ \(|P_{\rm rot,ACF}-P_{\rm rot,GPS}|/P_{\rm rot,ACF}>0.1\) or LPH \(<0.1\) & GPS \\ \hline \(P_{\rm rot,ACF}>20\) days & GPS \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the final period definition.
previous surveys, we find that 86.2% of the McQ14 and 77.4% of the S21 periods agree within 20% with our final rotation period \(P_{\rm rot,\,fin}\). For periods lower than 20 days, the difference mostly originates from the double branch, whereas for periods greater than 20 days the intrinsic uncertainty of \(\alpha\) dominates. We note that the difference in this period regime is smaller for the McQ14 sample and attribute this result to their more conservative threshold of LPH \(>0.3\). In the period range from 10-20 days, the GPS method is superior to classical ones (esp. the ACF) because it is not affected by the data reduction, leading to the unphysical pile-up of ACF periods around 15 days (see top panel in Fig. 8). We note that the ACF method usually returns more accurate periods for very periodic light curves but the shallower the periodicity gets, the more reliable the GPS period becomes, which is used in the period regime \(P_{\rm rot,\,ACF}>20\) days.
In summary, this study deals with the largest set of rotation periods known to date. This work clearly demonstrated the power of novel methods (such as GPS) to detect new rotation periods even in large data sets that have been travelled through for periods before. Other promising methods might be Gaussian processes, possibly equipped with a suitable kernel function that does not a priori require periodicity. Only these methods will reveal the true rotation periods of less active stars, and so will help to improve previous solar-stellar comparison studies.
|
2310.11353 | Hybrid quantum-classical graph neural networks for tumor classification
in digital pathology | Advances in classical machine learning and single-cell technologies have
paved the way to understand interactions between disease cells and tumor
microenvironments to accelerate therapeutic discovery. However, challenges in
these machine learning methods and NP-hard problems in spatial Biology create
an opportunity for quantum computing algorithms. We create a hybrid
quantum-classical graph neural network (GNN) that combines GNN with a
Variational Quantum Classifier (VQC) for classifying binary sub-tasks in breast
cancer subtyping. We explore two variants of the same, the first with fixed
pretrained GNN parameters and the second with end-to-end training of GNN+VQC.
The results demonstrate that the hybrid quantum neural network (QNN) is at par
with the state-of-the-art classical graph neural networks (GNN) in terms of
weighted precision, recall and F1-score. We also show that by means of
amplitude encoding, we can compress information in logarithmic number of qubits
and attain better performance than using classical compression (which leads to
information loss while keeping the number of qubits required constant in both
regimes). Finally, we show that end-to-end training enables to improve over
fixed GNN parameters and also slightly improves over vanilla GNN with same
number of dimensions. | Anupama Ray, Dhiraj Madan, Srushti Patil, Maria Anna Rapsomaniki, Pushpak Pati | 2023-10-17T15:40:26Z | http://arxiv.org/abs/2310.11353v1 | # Hybrid Quantum-Classical Graph Neural Networks for Tumor Classification in Digital Pathology
###### Abstract
Advances in classical machine learning and single-cell technologies have paved the way to understand interactions between disease cells and tumor microenvironments to accelerate therapeutic discovery. However, challenges in these machine learning methods and NP-hard problems in spatial Biology create an opportunity for quantum computing algorithms. We create a hybrid quantum-classical graph neural network (GNN) that combines GNN with a Variational Quantum Classifier (VQC) for classifying binary sub-tasks in breast cancer subtyping. We explore two variants of the same, the first with fixed pretrained GNN parameters and the second with end-to-end training of GNN+VQC. The results demonstrate that the hybrid quantum neural network (QNN) is at par with the state-of-the-art classical graph neural networks (GNN) in terms of weighted precision, recall and F1-score. We also show that by means of amplitude encoding, we can compress information in logarithmic number of qubits and attain better performance than using classical compression (which leads to information loss while keeping the number of qubits required constant in both regimes). Finally, we show that end-to-end training enables to improve over fixed GNN parameters and also slightly improves over vanilla GNN with same number of dimensions.
Anupama Ray\({}^{1}\) Dhiraj Madan\({}^{1}\) Srushti Patil\({}^{3}\) Maria Anna Rapsomaniki\({}^{2}\) Pushpak Pati\({}^{2}\)\({}^{1}\) IBM Quantum IBM Research India, \({}^{2}\)IBM Research Zurich
\({}^{3}\)Indian Institute of Science Education and Research Tirupati, India Quantum Machine Learning, Quantum Neural Networks, hierarchical Graph Neural Networks, spatial tissue modeling, histopathological image classification
## 1 Introduction
Understanding how tumor cells self-organize and interact within the tumor microenvironment (TME) is a long standing question in cancer Biology, with the potential to lead to more informed patient stratification and precise treatment suggestions. From Hematoxylin & Eosin (H&E) staining to multiplexed imaging and spatial omics, a plethora of technologies are used to interrogate the spatial heterogeneity of tumors [6]. For example, H&E histopathology images have long been used to train Convolutional Neural Networks (CNNs) in a patch-wise manner for a variety of tasks [1, 10]. More recently, geometric deep learning and in particular Graph Neural Networks (GNNs) have found promising applications in histopathology [5, 12]. Indeed, a graph representation is a natural modeling choice for TME as it is a flexible data structure to comprehensively encode the tissue composition in terms of biologically meaningful entities, such as cells, tissues, and their interactions. In a typical cell-graph representation, cells represent nodes, edges represent cell-to-cell interactions and cell-specific information can be included as node feature vectors. As a result, GNNs can elegantly integrate cellular information with tumor morphology, topology, and interactions among cells and/or tissue structures [7]. Yet, the complexity of tumor graphs and the entangled cell neighborhoods lead to sub-optimal embedding spaces of GNNs, which in turn struggle with learning clinically meaningful patterns from the data. At the same time, searching for relatively small query subgraphs over large, complex graphs is \(NP\)-hard. Although GNNs are currently being used as state-of-art networks for learning such problems from images, two severe limitations of GNNs are over-smoothing [2] and over-squashing [15]. Over-smoothing refers to the indistinguishable representations of nodes in different classes and over-squashing refers to the inefficient message passing in a longer chain of nodes in a graph. These challenges in classical GNNs provide opportunities for quantum algorithms. The main impact expected from quantum is the possibility of extending the embedding space by mapping data to the exponentially large qubit Hilbert space, which can potentially help in capturing hidden spatio-temporal correlations at the cellular and tissue level.
In this paper we create a hybrid classical-quantum network which combines a GNN with a Variational Quantum Classifier (VQC). We train this network with two approaches: (i) a serial approach, i.e., by first training the classical model and then the quantum model after the classical model has converged, and (ii) an end-to-end approach, by back-propagating loss from quantum neural network to all the layers of the classical neural network. In the first approach, we pretrain a classical graph neural network on the tissue graphs and then use the learnt representation from the GNN as input to a VQC. Since we are taking the output of the final layer of the classical GNN, we could map it with different dimensions via a linear layer. We performed ablation studies with 10-, 64-, 256-, 512- and 1024-dimensional learned GNN embeddings. For
the 10-dimensional GNN output, wherein the learnt embedding has been compressed classically, we use second-order Pauli encoding (ZZ encoding), which needs as many qubits as the number of dimensions (thus 10 qubit circuits). For all other dimensional embeddings, we use amplitude encoding to be able to fit all the information in size logarithmic in embedding dimension (thus number of qubits needed is \(\log(n)\) for \(n\)-dimensional output of GNN). A key observation of this paper is that although amplitude encoding compresses the number of qubits significantly, it does not lead to information loss, suggesting that the quantum model could be as close to state-of-art classical model. However, the quantum models with ZZ encoding are unable to learn much due to lossy compression via classical network. In the second end-to-end approach, we experiment with 10-dimensional data with ZZ encoding. We observe that not only does end-to-end training of GNN+VQC significantly improve over serial, but it even slightly outperforms classical GNN with 10-dimensional final layer.
## 2 Related Work and Background
### Quantum Computing and Quantum Machine Learning
Quantum Computing is a model of computation which enables one to perform efficient computation based on the laws of quantum mechanics. Here, the fundamental building blocks constitute qubits and gates. A single qubit \(\left|\psi\right\rangle\) can be mathematically expressed as a unit vector in a 2-dimensional Hilbert space as \(\left|\psi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle\), where \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\). Here \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are the orthonormal basis states corresponding to classical bits 0 and 1. Similarly, an \(n\)-qubit state can be expressed as a unit vector in \(2^{n}\) dimensional space \(\left|\psi\right\rangle=\sum_{x\in\left\{0,1\right\}^{n}}\alpha_{x}\left|x\right\rangle\). A measurement of an \(n\)-qubit state yields one of the classical bit strings \(x\) with probability \(\left|\alpha_{x}\right|^{2}\). A quantum circuit starts from an initial state \(\left|0^{n}\right\rangle\) and performs a sequence of single and 2 qubit operations such as H, S, T, X, Y, Z, CNOT to yield a final state \(\left|\psi\right\rangle\). The above gate set also includes parameterized gates, such as \(R_{x}(\theta),R_{y}(\theta)\) and \(R_{z}(\theta)\). The produced final state can be measured to yield an output from the desired distribution corresponding to the problem ([11]).
Quantum circuits can be parameterized by learnable parameters and can also be trained to optimize a given objective function. In the context of machine learning, these are known as Variational Quantum Classifiers or VQC [8, 3], which define the objective function based on the cross-entropy loss between the sampled distribution and ground truth data for classification. Here the state is produced by first running a unitary parameterized by the input on initial state (feature map) followed by a unitary parameterized with trainable weights. Overall, we have the state \(\left|\psi(x,\theta)\right\rangle=V_{\theta}U_{\phi(x)}\left|0\right\rangle\). Some common feature maps include for example the Pauli feature map [4] and amplitude encoding [14]. The Pauli feature map maps an input \(x\) to a quantum state \(U_{\phi(x)}\left|0^{n}\right\rangle\), where \(U_{\phi(x)}=exp(i\sum_{S\in\mathcal{I}}\phi_{S}(x)\prod_{i\in S}P_{i})\). Here, \(\mathcal{I}\) in a collection of Pauli strings and \(S\) runs over the set of indices corresponding to qubits where Paulis are applied. Here \(\phi_{S}(x)=\left\{\begin{array}{ll}x_{i}&S=i\\ \prod_{j\in S}(\pi-x_{j})&\text{if }\left|S\right|>1\end{array}\right\}\).
A special case of the same is given by the ZZ Feature map. Multiple repetitions of Pauli and ZZ Feature maps can be stacked as well. Another common feature map is amplitude encoding, which encodes a vector \(x\in\mathbb{R}^{n}\) as \(\sum_{i}\frac{x_{i}}{\left\|x\right\|}\left|i\right\rangle\). This takes \(\log(n)\) qubits whereas ZZ encoding requires \(n\) qubits. One can measure the state to obtain samples from the model distribution by measuring an observable \(O\) on the state \(p(y|x;\theta)=\left\langle\psi(x,\theta)|O|\psi(x,\theta)\right\rangle\). One can take the observable to be \(ZZ\).\(ZZ\), which corresponds to measuring parity \(\in\left\{+1,-1\right\}\). The cost function can be optimized using classical routines, e.g., COBYLA, SPSA, Adam, NFT.
### Classical Neural Networks for Spatial Tissue Modeling
HACT-NET [12] is a state-of-the-art Graph Neural Network model for the hierachical analysis of digital pathology tissue images. Typically the tissue images are of large dimensions, e.g., 5000 \(\times\) 5000 pixels at 40\(\times\) magnification (0.46 \(\mu\)m/pixel). To process such images by a CNN while utilizing the complete TME context is infeasible due to the high computational overload. Therefore, a graph representation is useful to encode the necessary TME information in terms of a thousands of nodes and edges, and is much lighter than a pixel-based image representation. Building on this concept, HACT-NET constructs a hierarchical graph representation of a tissue by incorporating a low-level cell-graph, a high-level tissue-graph, and a cell-to-tissue hierarchy to comprehensively represent the tissue composition. Afterwards, the hierachical GNN backbone of HACT-NET processes the graph representation in a two-step manner to produce a cell- and tissue-aware feature embedding. A Multi-Layer Perceptron (MLP) operates on this embedding to perform downstream tissue subtyping. In this work, we pre-train the HACT-NET model for various downstream tissue classification tasks and use the pre-trained model to extract tissue embeddings for subsequently training our VQC.
## 3 Methodology
In our approach, we define a hybrid classical-quantum graph neural network, an overview of which is shown in Figure 1.
Specifically, we use a HACT-NET [12] to produce embeddings as \(Embed(x;\theta_{G})=GNN(x;\theta_{G})\in\mathbb{R}^{d}\) corresponding to the input image \(x\). These embeddings are then passed as input to a VQC which applies a feature map followed by
an ansatz \(V_{\theta_{Q}}\) and produces samples from the distribution
\[p(y|x;\theta_{G},\theta_{Q})=\left\langle\psi(x;\theta_{G},\theta_{Q})|ZZ...ZZ| \psi(x;\theta_{G},\theta_{Q})\right\rangle \tag{1}\]
\[\text{where, }\left|\psi(x;\theta_{G},\theta_{Q})\right\rangle=V_{\theta_{Q}}U_{ \phi(Embed(x;\theta_{C}))}\left|0\right\rangle. \tag{2}\]
Here \(\theta_{G}\) and \(\theta_{Q}\) refer to GNN and VQC parameters respectively.
We follow two approaches for training our Hybrid Network: (i) with a pretrained GNN (having frozen weights), and (ii) with trainable GNN parameters. In the first approach, we first pretrain HACT-NET with a classical MLP layer and then use the learnt representation of the final layer as input to the quantum network as defined in Equations 1 and 2. Here, the parameters \(\theta_{G}\) are kept fixed after the initial pre-training stage. In the second approach, both sets of parameters are updated together. We discuss the details of second approach in section 3.3 and focus on the first approach in this section.
When trained separately, the HACT-NET performed best at 64-dimensional output of GNN passed to the MLP before the final output. However, it is very difficult to get reliable results using 64 qubits in the current available quantum devices, which both have few qubits and the qubits are noisy. Thus, we experimented with a range of dimensions and different encoding schemes to use different number of qubits on the same data. We experimented with 10-dimensional output of GNN wherein we used ZZ encoding with 2 layers of repetition[14]. Here, number of qubits used equals the dimension. We also trained with higher embedding dimensions from the HACT-NET, such as 64, 256, 512 and 1024 with amplitude encoding.
Thus, we were able to encode a 64-dimensional input in 6 qubits. With this encoding, we were able to reach the state-of-art classification F1-score that the GNN achieved. Since these classical neural networks have large number of parameters, they are known to overfit at higher dimensions in presence of less data. Since data shortage in a known limitation in most tissue imaging datasets, a key research question here is **can quantum models outperform classical models at higher dimensions where classical models tend to overfit**. In order to study this, we experimented with 256-, 512- and 1024-dimensional learnt representations of the GNN which were both passed to the classical MLP as well as the quantum classifier to study the effects of high dimensions. Using amplitude encoding we were able to encode these in 8, 9, and 10 qubits, respectively.
### Dataset
For this work, we experimented on 3 binary classification tasks under the breast cancer sub-typing problem on the BReAst Cancer Subtyping (BRACS) dataset [12]. In BRACS, each image is of the order of 2048x1536 pixels and there are \(\approx\)2200 such images. We randomly split them into 1200 for training, 500 for validation and 500 for testing.
### Training details
In this subsection we explain the details of the VQC scheme. We apply parity-postprocessing after measurement (corresponding to measuring the observable \(ZZ...Z\) on the parameterized state produced) to get the desired output and pass through a cost function. We update the parameters of the ansatz to minimize the overall cost function, much like training weights of a neural network. In the current implementation, the measurement results were interpreted based on the parity of the measurement outputs, where even parity is considered as label +1 and odd parity as -1. After obtaining labels from parity post-processing, the classical optimizer calculates the cost function, and optimizes the parameters of ansatz until the classical optimization iterations complete or until the cost function converges. For inference, we use multiple shots and the most probable label is selected as the final label for each test data. We trained our models with Constrained Optimisation By Linear Approximation (COBYLA) [13] and Nakanishi-Fujii-Todo (NFT) [9] optimizers and discuss the best results across both optimizers. The maximum number of epochs was set to 100 with early stopping. All our experiments with different data sizes are run on a noiseless state vector simulator provided by IBM Quantum.
### End-to-end training
For the end-to-end training, we train the parameters of GNN namely, \(\theta_{G}\) and VQC parameters namely \(\theta_{Q}\) together using Qiskit's TorchConnector class. We trained the above with 10-dimensional GNN embeddings using ZZ encoding for VQC. Since the classical neural networks trains using gradient based backpropagation, we use Adam optimizer for training both the networks with a learning rate of \(10^{-3}\) for VQC parameters and \(10^{-6}\) for GNN parameters. We found it useful to optimize the VQC parameters less frequently (once every 10 epochs) than the GNN parameters.
Figure 1: Implementation of hybrid GNN-VQC model
## 4 Results
In this section we present the results obtained by the hybrid quantum-classical model using different feature dimensions and embedding methods and its comparison to state-of-art classical GNN. We also present detailed ablation studies to understand the impact of training data sizes in training both classical GNN and the proposed hybrid model. Figure 2 shows the performance of classical GNN (in dark green) and hybrid quantum model (in light green) on different dimensional learnt embeddings. While at lower dimensions (10 and 64) classical GNN is able to learn better than the quantum model, the quantum model is at par with classical in higher dimensions of 256, 512 and 1024.
We further experiment in this direction to understand the difficulties in learning. While keeping the number of qubits constant, we change the encoding schemes to understand the impact of data compression. Figure 3 shows impact of classical vs quantum compression by means of changing different feature dimensions and accordingly choosing encoding schemes to represent them in quantum states. When we compress the data classically by reducing the number of output neurons to 10, 9 and 8 dimensions, we observe that although we use 10, 9 and 8 qubits respectively via ZZ encoding, the quantum model is unable to learn and struggles at a weighted F1-score of 50%. This is primarily due to the information loss in the neural network that happens during the classical compression. When the data is not classically compressed and we pass a feature representation of dimension 1024, or 512 or 256 represented by same 10, 9 and 8 qubits, then the quantum model is at par with the state-of-art classical model. Here we use amplitude encoding which encodes \(n\) classical bits in \(\log(n)\) qubits but does not lose any information, enabling the quantum model to learn better from the high dimensional data.
Since classical deep learning networks are known to under-perform in low data scenarios, we wanted to study the impact of training data for both classical and quantum models. We perform a series of experiments wherein we use 0.1, 0.25, 0.5 and then full data for training both models. As expected, in both scenarios and across all dimensions, we observe that training with full data leads to the best results on the held-out test data, and the performance comparison trend is identical to the best model with full training.
We also show the test results (weighted precision, weighted recall and weighted F1-score) on end-to-end training, in comparison with classical GNN as well as separately trained GNN+VQC in Table 1. We show that end-to-end training significantly improves over separate training of VQC and GNN, and even slightly outperforms classical GNN.
## 5 Discussions and Future Work
Overall, in this work we present two ways to train hybrid quantum-classical neural networks. We show that end-to-end training is significantly better than serially training such models and demonstrate results on a real-world breast-cancer subtyping task. In detailed ablation studies we observe that quantum compression can be significantly better to qubit requirements without information loss unlike lossy classical compression. Future directions could be to explore how other such classical networks can be combined with quantum circuits to enhance their trainability and improve generalization.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Model & w-precision & w-Recall & w-F1score \\ \hline cGNN & 0.71 & 0.69 & 0.7 \\ cGNN+VQC & 0.58 & 0.57 & 0.57 \\ end-to-end GNN+VQC & 0.72 & 0.71 & 0.72 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Table comparing end-to-end trainable networks vs classicalGNN and classicalGNN+VQC trained separately. All experiments on 10-dimensional ZZ encoding using 10 qubits on the simulator.
Figure 3: Classical compression vs Quantum compression
Figure 2: Graph showing performance (weighted F1-score) of classical GNN and hybrid quantum-classical model on different feature dimensions |
2303.16504 | An Over-parameterized Exponential Regression | Over the past few years, there has been a significant amount of research
focused on studying the ReLU activation function, with the aim of achieving
neural network convergence through over-parametrization. However, recent
developments in the field of Large Language Models (LLMs) have sparked interest
in the use of exponential activation functions, specifically in the attention
mechanism.
Mathematically, we define the neural function $F: \mathbb{R}^{d \times m}
\times \mathbb{R}^d \rightarrow \mathbb{R}$ using an exponential activation
function. Given a set of data points with labels $\{(x_1, y_1), (x_2, y_2),
\dots, (x_n, y_n)\} \subset \mathbb{R}^d \times \mathbb{R}$ where $n$ denotes
the number of the data. Here $F(W(t),x)$ can be expressed as $F(W(t),x) :=
\sum_{r=1}^m a_r \exp(\langle w_r, x \rangle)$, where $m$ represents the number
of neurons, and $w_r(t)$ are weights at time $t$. It's standard in literature
that $a_r$ are the fixed weights and it's never changed during the training. We
initialize the weights $W(0) \in \mathbb{R}^{d \times m}$ with random Gaussian
distributions, such that $w_r(0) \sim \mathcal{N}(0, I_d)$ and initialize $a_r$
from random sign distribution for each $r \in [m]$.
Using the gradient descent algorithm, we can find a weight $W(T)$ such that
$\| F(W(T), X) - y \|_2 \leq \epsilon$ holds with probability $1-\delta$, where
$\epsilon \in (0,0.1)$ and $m = \Omega(n^{2+o(1)}\log(n/\delta))$. To optimize
the over-parameterization bound $m$, we employ several tight analysis
techniques from previous studies [Song and Yang arXiv 2019, Munteanu, Omlor,
Song and Woodruff ICML 2022]. | Yeqi Gao, Sridhar Mahadevan, Zhao Song | 2023-03-29T07:29:07Z | http://arxiv.org/abs/2303.16504v1 | # An Over-parameterized Exponential Regression
###### Abstract
Over the past few years, there has been a significant amount of research focused on studying the ReLU activation function, with the aim of achieving neural network convergence through over-parametrization. However, recent developments in the field of Large Language Models (LLMs) have sparked interest in the use of exponential activation functions, specifically in the attention mechanism.
Mathematically, we define the neural function \(F:\mathbb{R}^{d\times m}\times\mathbb{R}^{d}\to\mathbb{R}\) using an exponential activation function. Given a set of data points with labels \(\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{n},y_{n})\}\subset\mathbb{R}^{d}\times \mathbb{R}\) where \(n\) denotes the number of the data. Here \(F(W(t),x)\) can be expressed as \(F(W(t),x):=\sum_{r=1}^{m}a_{r}\exp(\langle w_{r},x\rangle)\), where \(m\) represents the number of neurons, and \(w_{r}(t)\) are weights at time \(t\). It's standard in literature that \(a_{r}\) are the fixed weights and it's never changed during the training. We initialize the weights \(W(0)\in\mathbb{R}^{d\times m}\) with random Gaussian distributions, such that \(w_{r}(0)\sim\mathcal{N}(0,I_{d})\) and initialize \(a_{r}\) from random sign distribution for each \(r\in[m]\).
Using the gradient descent algorithm, we can find a weight \(W(T)\) such that \(\|F(W(T),X)-y\|_{2}\leq\epsilon\) holds with probability \(1-\delta\), where \(\epsilon\in(0,0.1)\) and \(m=\Omega(n^{2+o(1)}\log(n/\delta))\). To optimize the over-parametrization bound \(m\), we employ several tight analysis techniques from previous studies [Song and Yang arXiv 2019, Munteanu, Omlor, Song and Woodruff ICML 2022].
###### Contents
* 1 Introduction
* 1.1 Our Results
* 2 Related Work
* 2.1 Training over-parameterized neural network
* 2.2 Attention Theory
* 3 Technique Overview
* 4 Preliminary
* 4.1 Notations
* 4.2 Data points
* 4.3 Initialization Weights
* 4.4 Basic Algebra
* 4.5 Probability Tools
* 5 Problem Formulation
* 6 Initialization and Perturbation
* 6.1 A list of tools
* 6.2 Bounding changes between discrete and continuous
* 6.3 Given \(w\) within a small ball bounding changes of \(H\)
* 6.4 Controlling the Loss at initialization
* 7 Convergence
* 7.1 Main Result
* 7.2 Induction Part 1. For Weights
* 7.3 Induction Part 2. For Loss
* 7.4 Induction Part 3. For Gradient
* 8 Induction Part 1: For Weight
* 8.1 Definition of \(D\)
* 8.2 Bounding the gradient at any time
* 9 Induction Part 2: For Loss
* 9.1 Decomposition for \(\|y-F(t+1)\|_{2}^{2}\)
* 9.2 Choice of Parameters
* 9.3 Bounding the first order term
* 9.4 Bounding the second order term
* 9.5 Bounding \(\|F(t+1)-F(t)\|_{2}^{2}\)
Introduction
Neural networks have proven to be effective in a range of different applications, such as image recognition [11, 12] and speech recognition [13]. Overparametrization, the use of more parameters than necessary, is believed to be crucial to the success of deep learning [1, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Surprisingly, even when the data is improperly labeled and the target function is non-smooth and non-convex, over-parameterized neural networks trained with first-order methods can fit all training data, due to the modern architecture with ReLU activations. Furthermore, over-parameterized networks can improve generalization in practice, which contradicts traditional VC-dimension theory.
Large language models (LLMs) have proven to be more effective in processing natural language compared to smaller models and traditional algorithms. Examples of these models include Transformer [10], BERT [11], GPT-3 [2], PaLM [12], and OPT [13].
The attention matrix is the key technical foundation of LLMs, as highlighted in previous research [10, 11, 12, 13, 14, 15, 16, 17]. This square matrix has rows and columns that correspond to words or "tokens" in natural language, with entries representing the correlations between them. It is used to determine the importance of each token in a sequence when generating an output. In the attention mechanism, each input token is assigned a weight or score based on its relevance to the current output. These scores are calculated using a similarity function that compares the input and output states.
The exponential activation function [14, 10] is a commonly used activation function in neural networks. It maps the input to the output using the exponential function. One of the main advantages of the exponential activation function is that it can produce positive outputs for any input, which can be useful in certain types of neural networks, such as those used for regression tasks. Additionally, the exponential activation function is continuously differentiable, which is important for backpropagation during training.
Another application of the exponential activation function is in the generation of natural language [10, 11, 12, 13]. Specifically, it has been used in language models such as GPT-3 [2] to generate text that closely mimics human writing. The exponential function can help to weight the importance of different words in a given context, leading to more accurate and coherent language generation.
[15] present GPT-4, a multimodal model capable of producing text outputs from both image and text inputs. While GPT-4 falls short of human-level performance in some real-world scenarios, it achieves human-like results on various professional and academic benchmarks, including scoring in the top 10% on a simulated bar exam. Based on the Transformer architecture, GPT-4 is pre-trained to predict the next token in a document. Post-training alignment improves its accuracy in factual content and adherence to desired behavior. [15] also developed infrastructure and optimization methods that scale predictably, allowing for accurate predictions of GPT-4's performance using models trained on just 1/1000 th of its computational capacity.
In this work, we consider a natural question,
_Is that possible to prove an over-parameterization bound for exponential activation in neural network learning?_
In this work, we provide a positive answer for this question. Assuming a set of data points \(\{(x_{i},y_{i})\}_{i=1}^{n}\subset\mathbb{R}^{d}\times\mathbb{R}\), we use \(\lambda\) as the minimum eigenvalue of the neural tangent kernel with respect to the exponential activation function, \(n\) as the number of points and \(m\) as the number of neurons. Moreover, we use \(F(t)\) to represent our two-layer neural network with the exponential activation function at the \(t\)-th step.
Based on our analysis of the perturbation of the weights, we can derive a bound on the prediction loss \(\|y-F(t)\|_{2}^{2}\). By choosing \(m\) to be large enough, specifically \(\Omega(\lambda^{-2}\log(n/\delta)n^{2+o(1)})\), and setting the learning rate \(\eta=\Theta(\lambda/(mn^{2+o(1)}))\), we can ensure the convergence of the neural network with exponential activation.
### Our Results
Our primary result is presented below.
**Theorem 1.1** (Main result, formal version of Theorem 7.1).: _Let \(\delta\in(0,0.1)\) denote the failure probability. Let \(\epsilon\in(0,0.1)\) denote the accuracy. If the following conditions hold_
* _Let_ \(\lambda>0\) _denote the minimum eigenvalue of neural tangent kernel with respect to exponential activation._
* _Let_ \(m=\Omega(\lambda^{-2}\log(n/\delta)n^{2+o(1)})\) _represent the number of neurons._
* _Let_ \(w_{r}\) _be random Gaussian weights and_ \(a_{r}\) _be the random_ \(\{-1,+1\}\) _weights._
* _Let_ \(\eta=\Theta(\lambda/(mn^{2+o(1)}))\) _denote the learning rate of gradient descent algorithm_
* _Let_ \(T=\Omega(\lambda^{-2}n^{2+o(1)}\cdot\log(n/\epsilon))\) _denote the number of iterations of gradient descent algorithm_
_Then, we have after running algorithm with \(T\) iterations. And with probability at least \(1-\delta\), we obtain a \(w(T)\) such that_
\[\|F(T)-y\|_{2}^{2}\leq\epsilon.\]
In order to demonstrate the convergence, we begin by selecting a sufficiently large value of \(m\) to regulate changes in weights \(w\) and gradients over the training. We then assume that \(w\) is contained within a small ball, allowing us to complete the proof of convergence.
## 2 Related Work
### Training over-parameterized neural network
Convergence[20] demonstrate that for a shallow neural network with ReLU activation consisting of \(m\) hidden nodes and trained on \(n\) data points, as long as \(m\) is sufficiently large and no two inputs are parallel, randomly initialized gradient descent will converge to an optimal global solution at a linear rate of convergence for the quadratic loss function. This is due to the fact that over-parametrization and random initialization work together to ensure that every weight vector remains close to its initial value throughout all iterations.
[2] have proven that under two assumptions, simple algorithms such as stochastic gradient descent (SGD) can discover Global Minima on the training objective of Deep Neural Networks (DNNs) in Polynomial Time. The two assumptions are that the inputs are non-degenerate, and the network is over-parameterized, meaning the number of hidden neurons is sufficiently large and is polynomial in \(L\), the number of DNN layers, and in \(n\), the number of training samples. [2] study recurrent neural networks (RNNs) used in natural language processing in [2]. They demonstrate that with enough neurons, SGD can minimize the regression loss at a linear rate, showing RNNs can memorize data. [2] also develop a perturbation theory for analyzing first-order approximation of multi-layer networks.
Over-parametrization bound, bound on \(m\)In deep learning theory, [14] enhance the size of over-parametrization beyond three previous notable results [13], [15], and [1].
In neural network training, it is common to initialize all weights as independent Gaussian vectors. However, [12] observed that initializing the weights as independent pairs, where each pair consists of two identical Gaussian vectors, can improve the convergence analysis significantly.
Using data structure to speedup cost per iterationPreprocessing plays a critical role in the training of over-parameterized neural networks. [14] demonstrates that the cost per iteration of training can be reduced by pre-processing the initial weights of the neural network or pre-processing the input data points. Specifically, pre-processing the initial weights can result in a cost of \(\widetilde{O}(m^{1-\Theta(1/d)}nd)\) per iteration, while pre-processing the input data points can further reduce the cost to \(\widetilde{O}(m^{4/5}nd)\) per iteration. [1] also propose a new preprocessing method that employs a tree data structure to detect neuron firing during each iteration and achieves \(o(nmd)\) time per iteration and requires \(O(nmd)\) time in preprocessing. By using \(m^{2}\) cost only in the initialization phase, [14, 15] reach the cost of \(m^{2-\Omega(1)}\) per iteration.
Some works focused on the faster second-order optimization algorithms. Although second-order algorithms have a remarkable convergence rate, their high computational cost per iteration renders them impractical. Recent work by [13, 12] which focused on the second-order algorithms has mitigated this computational overhead, resulting in an \(O(mn^{2})\)-time second-order algorithm for training two-layer over-parameterized neural networks. [1] further accelerates the algorithm of [12] to achieve an \(\widetilde{O}(mn)\)-time backpropagation algorithm for training mildly over-parameterized ReLU networks.
By utilizing data structures, certain methods can decrease the cost per iteration, resulting in faster performance. [11] analyze the convergence guarantee of adversarial training on a two-layer neural network with shifted ReLU activation, finding that only \(o(m)\) neurons are activated per input data per iteration which reached the training cost time cost of \(o(mnd)\) per iteration. [10] introduce a novel training approach for a standard neural network with \(m=\operatorname{poly}(n)\) parameters and a batch of \(n\) input data points in \(\mathbb{R}^{d}\). By treating neural networks as a collection of binary search trees and making selective modifications to a subset of nodes at each iteration, with \(\alpha\in(0.01,1)\) fixed, their method achieves a time complexity of \(m^{1-\alpha}nd+n^{3}\) in the overparametrized regime.
### Attention Theory
Fast computation and optimizationIn the field of optimazation, [14] focused the role of adaptive methods in attention models and by [14] focused on analyzing the dynamics of a single-head attention head to approximate the learning of a Seq2Seq architecture. According to [11], the attention mechanism is a versatile tool that can be used to execute complex, general-purpose programs even in shallow transformer models. [14] for the role of adaptive methods in attention models and by [14] for analyzing the dynamics of a single-head attention head to approximate the learning of a Seq2Seq architecture.
The computation of attention is a crucial aspect of training large language models. Given three matrices \(Q,K,V\in[-B,B]^{n\times d}\) as input, the objective is to construct the matrix \(\textsc{Att}(Q,K,V):=\operatorname{diag}(A\mathbf{1}_{n})^{-1}AV\in\mathbb{R} ^{n\times d}\), where \(A=\exp(QK^{\top}/d)\) is the 'attention matrix'. In [1], the authors investigate whether faster algorithms are possible by implicitly utilizing the matrix \(A\). They present two results that show a sharp transition occurs at \(B=\Theta(\sqrt{\log n})\). A recent study conducted by Zandieh, Han, Dairi, and Karbasi [11] introduced the first algorithm with provable guarantees for attention approximation. Their algorithm employs techniques from locality sensitive
hashing (LSH) [10]. [11, 12] are mainly focusing on the static version of attention computation problem. [13] define and study the dynamic version of attention computation problem. [13] provides both algorithmic result and hardness result. [14] introduce the Structured State Space (S4) sequence model, which utilizes a novel parametrization for the SSM. The study demonstrates that the S4 model can be computed with significantly greater efficiency than previous approaches, while still retaining their theoretical advantages. [15] study the regularized exponential regression problem, and provides an algorithm that runs in input sparsity time.
Expressivity for transformerExpressivity has been studied by [1, 1] for self-attention blocks and by [16, 17, 18] for Transformers. Research has demonstrated that fine-tuning language models on a set of datasets expressed as instructions can enhance model performance and improve generalization to unfamiliar tasks. [19] investigate instruction fine-tuning with a specific emphasis on expanding the number of tasks, increasing the size of the model, and fine-tuning on chain-of-thought data. [1] offers a thorough theoretical examination of the inductive biases associated with self-attention modules. The goal is to establish, with rigor, the types of functions and long-range dependencies that self-attention blocks are inclined to represent.
A recent study conducted by [10] delved into this matter by exploring learning automata, which are discrete dynamic systems that are well-suited for recurrent modeling and expressing algorithmic tasks.
The study conducted by [17] commences by introducing a simple weight construction that establishes the likeness between data transformations generated by a single linear self-attention layer and gradient-descent (GD) implemented on a regression loss. [15] offer a detailed mechanistic explanation of how transformers learn "semantic structure," defined as the ability to capture the co-occurrence patterns of words.
In-context learningMany works focused on the in-context learning in recent years. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences, a topic being actively studied in the community. To address this limitation, [18] propose Nystromformer, a model that exhibits favorable scalability as a function of sequence length. During testing, in-context learning takes place as the language model (LM) deduces a common underlying concept among the examples given in a prompt. In a study by [13], it was demonstrated that this phenomenon can occur even when there is a difference in the distribution of prompts and the pretraining data. The study was conducted in a scenario where the pretraining data had a combination of hidden Markov models (HMMs).
[1] explore the possibility that transformer-based in-context learners implicitly execute standard learning algorithms by encoding smaller models within their activations and updating these implicit models as new examples are introduced in the context.
In order to gain a better understanding of in-context learning, [12] examine the well-defined problem of training a model to in-context learn a function class (such as linear functions). Their study investigates whether a model can be trained to in-context learn "most" functions from this class when given data derived from some functions in the class.
The key finding of [1] demonstrates that Transformer networks with bounded-norm have the ability to "create sparse variables." Specifically, a single self-attention head can represent a sparse function of the input sequence, and the sample complexity scales logarithmically with the context length. [15] demonstrate that saturated transformers surpass the known limitations
of hard-attention transformers. [20] subsequently establish that saturated transformers, which utilize floating-point values, can be replicated through constant-depth threshold circuits, which restricts the class of formal languages they can identify.
[14] introduce a watermarking framework designed specifically for proprietary language models. This watermark can be embedded with minimal impact on text quality, and can be detected using an efficient open-source algorithm that does not require access to the language model API or parameters.
Other applications and theories of transformerSome analysis focused on the Transformer computing on hardware. One of the key principles missing in attention algorithms is their lack of IO awareness, i.e., not accounting for the reads and writes between different levels of GPU memory. In order to address the aforementioned issue, the authors of the paper referenced as [13] have introduced a new attention algorithm called FlashAttention. This algorithm is designed to be precise, taking into account input-output (IO) operations, while also leveraging tiling to minimize the number of times data needs to be transferred between on-chip SRAM and the GPU's high bandwidth memory (HBM).
[15] examines the ability of neural networks to learn a \(k\)-sparse parity of n bits, a well-known discrete search problem that is statistically simple but computationally difficult. Through empirical investigations, the authors observe that various neural networks are capable of successfully learning sparse parities, and they note discontinuous phase transitions in the training curves.
[16] propose modifications to generative model learning algorithms that provide strong bounds on the probability of sampling protected content. These modifications are made in an efficient and black box manner.
Roadmap.Our techniques are outlined briefly in Section 3, while Section 4 covers our preliminary tools and notations. In Section 5, we introduce the problem of interest and define a two-layer neural network with exponential activation functions. In Section 6, we demonstrate that when the width \(m\) is sufficiently large, the continuous and discrete versions of the input data's Gram matrix are spectrally close to each other. Section 7 establishes that an over-parameterized neural network achieves linear convergence of the training error to 0. In Section 8, we simplify the problem by defining \(D_{\mathrm{cts}},H(s)\) and providing a gradient bound through induction. Then, in Section 9, we establish a similar induction-based bound for the loss \(\|y-F(t+1)\|_{2}^{2}\) at any time.
## 3 Technique Overview
This paper presents a proof demonstrating that a two-layer neural network employing the exponential activation function which can achieve a desired small loss value after sufficient iterations, given a large enough number of neurons \(m\), an appropriate learning rate \(\eta\), and the initialization method specified in Definition 4.3.
By bounding the difference of the weights over the training and choosing a proper learning rate \(\eta\), we bound the loss by induction. We will introduce how we bound the loss under the assumption for the small perturbation on the weights. And then we will introduce how we bound the weights and gradients respectively.
Bounding the loss by inductionTo establish this result, we begin by bounding the summation of the differences between the weights at the current step and their initial values, assuming that \(w\) is in a small range such that \(\Delta w_{r}(t)\leq R\in(0,0.01)\) where \(t\) denote the step here.
To establish an upper bound on the prediction loss \(\|y-F(t+1)\|_{2}^{2}\), we first assume that the weight \(w\) is within a small range, namely \(\Delta w_{r}(t)\leq R\in(0,0.01)\). We decompose the loss into four parts:
* The loss at the previous step \(\|y-F(t)\|_{2}^{2}\)
* \(C_{1}:=-2m\eta(F(t)-y)^{\top}H(t)(F(t)-y)\)
* \(C_{2}:=2(F(t)-y)^{\top}H_{\text{asy}}(t)(F(t)-y)\)
* \(C_{3}:=\|F(t+1)-F(t)\|_{2}^{2}\)
Using terms that involve the parameters \(m\), \(\eta\), \(n\), and \(B\) multiplied by \(\|y-F(t)\|_{2}^{2}\), we can compute the upper bounds \(C_{1}\), \(C_{2}\), and \(C_{3}\) respectively. Then, by induction and choosing a proper value for \(m\), we can bound \(\|y-F(t)\|_{2}^{2}\) where the loss is \(\|y-F(0)\|_{2}^{2}=\|y\|_{2}^{2}\) at initialization under the given assumption.
Bounding the weights by inductionFinally, to complete the proof, we establish an upper bound for \(\Delta w_{r}(t)=\|w_{r}(t)-w_{r}(0)\|_{2}\). To do this, we transform \(\Delta w_{r}(t)\) into a form that is a multiple of \(\exp(B+R)\sqrt{n}\) and the current loss \(\|y-F(t)\|_{2}\).
Bounding the gradients by inductionBased on the result above, we will continue our work on bounding the changing of the weights over the training. By appropriately choosing the learning rate \(\eta\), we can ensure that \(\Delta w_{r}(t)\) is small enough, namely \(0.01\). We have the following definition
\[H(w)_{i,j}:=\frac{1}{m}\langle x_{i},x_{j}\rangle\sum_{r\in[m]}\exp(\langle w_ {r},x_{i}\rangle)\cdot\exp(\langle w_{r},x_{j}\rangle)\]
where \(r\in[m]\) as the index the neurons, and \(x_{i}\) denotes the data where \(i\in[n]\) and \(j\in[n]\).
Next, we will constrain the changes of \(H\) under the assumption that \(w\) is located within a small ball. In addition, we must bound the discrepancy between discrete and continuous functions. Drawing upon the conclusions reached regarding perturbations in weight \(w\), we can guarantee the convergence of the over-parametrized neural network with an exponential activation function.
## 4 Preliminary
In Section 4.1, we provide several basic notations and definitions. In Section 4.2, we present our assumptions for data points. In Section 4.3, we outline our assumptions regarding weight initialization. Section 4.4 provides some basic algebraic concepts used throughout the paper. Finally, Section 4.5 discusses various probability tools used in our work.
### Notations
In our notation, \([n]\) represents the set \(\{1,2,\cdots,n\}\). The \(\exp\) activation function is denoted by \(\phi(x)=\exp(x)\).
The \(\ell_{2}\) norm of a vector \(y\in\mathbb{R}^{n}\) is denoted by \(\|y\|_{2}:=(\sum_{i=1}^{n}y_{i}^{2})^{1/2}\) and represents the element-wise square root of the sum of squares of each entry in the vector.
The spectral norm of a matrix \(B\) is denoted by \(\|B\|\). We also define the Frobenius norm \(\|B\|_{F}=(\sum_{i}\sum_{j}B_{i,j}^{2})^{1/2}\) and the \(\ell_{1}\) norm \(\|B\|_{1}=\sum_{i}\sum_{j}|B_{i,j}|\) of matrix \(B\).
For any symmetric matrix \(B\in\mathbb{R}^{k\times k}\), we define its eigenvalue decomposition as \(U\Lambda U^{\top}\), where \(\Lambda\) is a diagonal matrix. Let \(\lambda_{1},\cdots,\lambda_{k}\) denote the entries on diagonal of \(\lambda\in\mathbb{R}^{k\times k}\). We say \(\lambda_{i}\) is the \(i\)-th eigenvalue. Usually, we write it as \(\lambda_{i}(B)\) where \(i\in[k]\).
We define
\[\lambda_{\min}(B):=\min_{i\in[k]}\{\lambda_{1},\cdots,\lambda_{k}\}.\]
We use \(\mathcal{N}(\mu,\Sigma)\) to denote a \(d\)-dimensional Gaussian distribution with mean \(\mu\in\mathbb{R}^{d}\) and covariance matrix \(\Sigma\in\mathbb{R}^{d\times d}\).
### Data points
**Definition 4.1**.: _We assume the data points satisfy_
* \(\|x_{i}\|_{2}\leq 1\)_, for all_ \(i\in[n]\)__
* \(|y_{i}|\leq 1\)_, for all_ \(i\in[n]\)__
### Initialization Weights
The following weights initialization are very standard in the literature, e.g., see [1, 1, 1, 2].
**Definition 4.2**.: _We choose weights as follows_
* \(\forall r\in[m]\)_,_ \(a_{r}\) _is randomly and uniformly sampled from the set_ \(\{-1,+1\}\)_._
* _We sample_ \(w_{r}\) _from_ \(\mathcal{N}(0,\sigma^{2}I)\) _for each_ \(r\in[m]\)__
To improve the initialization bound \(\|y-F(0)\|_{2}^{2}\), we will use an idea from [12]. The definition 4.2 and Definition 4.3 are the same up to constant factor. So for convenient of analysis, in most places we use Definition 4.2. We only use Definition 4.3 for bounding the initialization \(\|y-u(0)\|_{2}^{2}=0\).
**Definition 4.3**.: _For each \(r\in[m/2]\), we choose weights as follows_
* _We sample_ \(a_{2r-1}\) _from_ \(\{-1,+1\}\) _uniformly at random._
* _We sample_ \(w_{2r-1}\) _from Gaussian distribution_ \(\mathcal{N}(0,\sigma^{2}I)\) _._
* _We choose_ \(a_{2r}=-a_{2r-1}\)_._
* _We choose_ \(w_{2r-1}=w_{2r}\)_._
### Basic Algebra
**Fact 4.4** (Taylor series).: _We have_
* \(\exp(x)=\sum_{i=0}^{\infty}\frac{1}{i!}x^{i}\)__
* \(\cosh(x)=\sum_{i=0}^{\infty}\frac{1}{(2i)!}x^{2i}\)__
* \(\sinh(x)=\sum_{i=0}^{\infty}\frac{1}{(2i+1)!}x^{2i+1}\)__
**Fact 4.5** (Cauchy Schwarz).: _For any two vectors \(x,y\in\mathbb{R}^{n}\), we have_
\[\langle x,y\rangle\leq\|x\|_{2}\cdot\|y\|_{2}.\]
**Fact 4.6**.: _We have_
* \(\|B\|\leq\|B\|_{F}\)__
* \(\forall B\in\mathbb{R}^{n\times n}\)_,_ \(\|B\|_{F}\leq n\|B\|_{\infty}\)__
* \(\forall x\in R^{n}\)_,_ \(x^{\top}Bx\leq\|x\|_{2}^{2}\cdot\|B\|\)__
* \(\lambda_{\min}(A)\geq\lambda_{\min}(B)-\|A-B\|\)__
**Fact 4.7**.: _We have_
* _For any_ \(|x|\leq 0.1\)_, then we have_ \(|\exp(x)-1|\leq 2x\)_._
* _For any_ \(|x|\leq 0.1\)_, then we have_ \(|\cosh(x)-1|\leq x^{2}\)__
* _For any_ \(|x|\leq 0.1\)_, then we have_ \(\exp(x)=1+x+\Theta(1)x^{2}\)_._
* _For any_ \(|x|\leq 0.1\)_, then we have_ \((1-x)^{1/2}\leq 1-0.5x\)__
* _For any_ \(x\in(0,0.1)\)_, we have_ \(\sum_{i=0}^{\infty}x^{i}\leq\frac{1}{1-x}\)__
### Probability Tools
We state the standard Bernstein inequality,
**Lemma 4.8** (Bernstein inequality [1]).: _If the following condition holds_
* \(Z_{1},\cdots,Z_{n}\) _be independent zero-mean random variables_
* \(|Z_{i}|\leq M\) _almost surely_ \(\forall i\in[n]\)__
* _Let_ \(Z=\sum_{i=1}^{n}Z_{i}\)_._
* \(\operatorname{Var}[Z]=\sum_{j=1}^{n}\mathbb{E}[Z_{j}^{2}]\)_._
_Then, for all positive \(t\),_
\[\Pr\left[Z>t\right]\leq\exp\left(-\frac{t^{2}/2}{\operatorname{Var}[Z]+Mt/3} \right).\]
We state the standard Hoeffding inequality,
**Lemma 4.9** (Hoeffding inequality [11]).: _If the following conditions hold_
* _Let_ \(Z_{1},\cdots,Z_{n}\) _denote_ \(n\) _independent variables_
* \(Z_{i}\in[\alpha_{i},\beta_{i}]\)_, for all_ \(i\in[n]\)__
* _Let_ \(Z=\sum_{i=1}^{n}Z_{i}\)
_Then we have_
\[\Pr[|Z-\mathbb{E}[Z]|\geq t]\leq 2\exp\left(-\frac{2t^{2}}{\sum_{i\in[n]}( \beta_{i}-\alpha_{i})^{2}}\right).\]
We state a standard tool from literature (see Lemma 1 on page 1325 of [10]),
**Lemma 4.10** (Laurent and Massart [10]).: _Suppose \(X\) follows a chi-squared distribution with \(k\) degrees of freedom denoted by \(\mathcal{X}_{k}^{2}\). The mean of each variable is zero, and the variance is \(\sigma^{2}\). Then,_
\[\Pr[X-k\sigma^{2}\geq(2\sqrt{kt}+2t)\sigma^{2}]\leq \ \exp{(-t)}\] \[\Pr[k\sigma^{2}-X\geq 2\sqrt{k}t\sigma^{2}]\leq \ \exp{(-t)}\]
_Further if \(k\geq\Omega(\epsilon^{-2}t)\) and \(t\geq\Omega(\log(1/\delta))\), then we have_
\[\Pr[|X-k\sigma^{2}|\leq\epsilon k\sigma^{2}]\leq\delta.\]
## 5 Problem Formulation
In previous formulation in [1, 1], they have normalization factor \(\frac{1}{\sqrt{m}}\) and only work for ReLU activation function. In [1], they don't have normalization factor and only work for ReLU activation function.
The statement refers to a specific type of neural network that has two layers. The hidden layer of the network consists of \(m\) neurons, and the activation function is the exponential function.
\[F(W,x,a):=\sum_{r=1}^{m}a_{r}\phi(w_{r}^{\top}x),\]
To simplify optimization, we only focus on optimizing \(W\) and not both \(a\) and \(W\) simultaneously, where \(x\in\mathbb{R}^{d}\) represents the input, \(w_{1},\cdots,w_{m}\in\mathbb{R}^{d}\) are weight vectors in the first layer, and \(a_{1},\cdots,a_{m}\in\mathbb{R}\) are weights in the second layer.
\(\forall r\in[m]\), given the function \(\phi(x)=\exp(x)\), we have
\[\frac{\partial F(W,x,a)}{\partial w_{r}}=a_{r}x\exp(\langle w_{r},x\rangle). \tag{1}\]
**Definition 5.1** (\(F(t)\), dynamic prediction).: _For any timestamp \(t\), we define_
\[F_{i}(t):=\sum_{r=1}^{m}a_{r}\exp(\langle w_{r}(t),x_{i}\rangle)\]
**Definition 5.2** (Loss function over time).: _The objective function \(L\) is defined as follows:_
\[L(W(t)):=\frac{1}{2}\sum_{i\in[n]}(F_{i}(t)-y_{i})^{2}.\]
Thus, we define
**Definition 5.3** (\(\Delta w_{r}(t)\)).: _We define \(\Delta w_{r}(t)\in\mathbb{R}^{d}\), \(\forall r\in[m]\) in the following:_
\[\Delta w_{r}(t):=\sum_{i=1}^{n}(F_{i}(t)-y_{i})a_{r}x_{i}\exp( \langle w_{r}(t),x_{i}\rangle).\]
**Definition 5.4** (gradient descent update equation).: _The typical approach for optimizing the weight matrix \(W\) involves applying the gradient descent algorithm in the following:_
\[W(t+1)=W(t)-\eta\Delta W(t).\]
_where \(\Delta W(t)\in\mathbb{R}^{d\times m}\) and \(\Delta w_{r}(t)\in\mathbb{R}^{d}\) is the \(r\)-th column of \(\Delta W(t)\in\mathbb{R}^{d\times m}\)._
Initialization and Perturbation
In Section 6, we propose assumptions on initialization and analyze perturbations. In Section 6.1, some tools utilized in this paper are presented. In Section 6.2, we provide a bound on \(\|H^{\mathrm{dis}}-H^{\mathrm{cts}}\|_{F}\) (the distinction between discrete and continuous) under the choice of \(m\). In Section 6.3, we demonstrate an upper bound on the difference between continuous and discrete under the assumption that \(w\) is in a small ball. In Section 6.4, we show how to control the loss \(\|y-F(0)\|_{2}^{2}=\|y\|_{2}^{2}\) at initialization by forcing \(a_{2r}=-a_{2r-1}\).
### A list of tools
Within this section, we demonstrate that the spectral proximity exists between the continuous and discrete versions of the gram matrix of input data.
**Definition 6.1**.: _Let \(C>10\) denote a sufficiently large constant._
_We define parameter \(B\) as follows_
\[B:=C\sigma\sqrt{\log(n/\delta)}.\]
**Lemma 6.2**.: _If the following conditions hold_
* _Let_ \(B>0\) _denote a parameter be defined as Definition_ 6.1_._
* _Let_ \(w_{r}\) _denote random Gaussian vectors from_ \(\mathcal{N}(0,\sigma^{2}I_{d})\)_._
* _Let_ \(v_{r}\) _be the vector where_ \(\|v_{r}-w_{r}\|_{2}\leq R\)_,_ \(\forall r\in[m]\)__
* _Let_ \(x_{i}\) _be the vector where_ \(\|x_{i}\|_{2}\leq 1\)_,_ \(\forall i\in[n]\)__
* _Let_ \(R\in(0,0.01)\)__
_Then, we have with probability \(1-\delta\)_
* _Standard inner product_
* _Part 1._ \(|\langle w_{r},x_{i}\rangle|\leq B\)_,_ \(\forall i\in[n]\)_,_ \(\forall r\in[m]\)__
* _Part 2._ \(|\langle v_{r},x_{i}\rangle|\leq B+R\)_,_ \(\forall i\in[n]\)_,_ \(\forall r\in[m]\)__
* _Part 3._ \(|\langle w_{r}-v_{r},x_{i}+x_{j}\rangle|\leq 2R\)__
* \(\exp\) _function_
* _Part 4._ \(\exp(\langle w_{r},x_{i}\rangle)\leq\exp(B)\)_,_ \(\forall i\in[n]\)_,_ \(\forall r\in[m]\)__
* _Part 5._ \(\exp(\langle v_{r},x_{i}\rangle)\leq\exp(B+R)\)_,_ \(\forall i\in[n]\)_,_ \(\forall r\in[m]\)__
* _Part 6._ \(|\exp(\langle w_{r}-v_{r},x_{i}+x_{j}\rangle)-1|\leq 4R\)__
Proof.: **Proof of Part 1, 2, 4 and 5.**
The proof is trivially follows from Gaussian tail bound.
**Proof of Part 3 and 6.**
Because \(x_{i}\) and \(x_{j}\) are independent and \(\|\Delta w_{r}\|_{2}\leq R\), we can have
\[|\langle\Delta w_{r},(x_{i}+x_{j})\rangle|\leq 2R\leq 0.1 \tag{2}\]
Then, we have
\[|\exp(\langle\Delta w_{r},(x_{i}+x_{j})\rangle)-1)| \leq 2|\langle\Delta w_{r},(x_{i}+x_{j})\rangle|\] \[\leq 4R\]
where the first step follows from Fact 4.7, and the last step follows from Eq. (2).
### Bounding changes between discrete and continuous
In Section 6.2, we establish a bound on \(\|H^{\mathrm{dis}}-H^{\mathrm{cts}}\|_{F}\) for the chosen value of \(m=\Omega(\lambda^{-2}\cdot n^{2}\cdot\exp(2B)\cdot\sqrt{\log(n/\delta)})\). The following lemma can be viewed as a variation of Lemma 3.1 in [1], a variation of Lemma C.3 in [1], a variation of Lemma C.1 in [1], a variation of Lemma C.1 in [1].
**Lemma 6.3**.: _As per the definition, \(H^{\mathrm{cts}}\) and \(H^{\mathrm{dis}}\) are two matrices of size \(n\times n\) that we specify as follows:_
\[H^{\mathrm{cts}}_{i,j} :=\operatorname*{\mathbb{E}}_{w\sim\mathcal{N}(0,I)}\left[( \langle x_{i},x_{j}\rangle)\cdot\exp(\langle w_{r},x_{i}\rangle)\cdot\exp( \langle w_{r},x_{j}\rangle)\right],\] \[H^{\mathrm{dis}}_{i,j} :=\frac{1}{m}\sum_{r\in[m]}\left[(\langle x_{i},x_{j}\rangle) \cdot\exp(\langle w_{r},x_{i}\rangle)\cdot\exp(\langle w_{r},x_{j}\rangle) \right].\]
_We define \(\lambda:=\lambda_{\min}(H^{\mathrm{cts}})\)._
_If the following conditions hold_
* \(\lambda>0\)_._
* \(d=\Omega(\log(1/\delta))\)_._
* \(m=\Omega(\lambda^{-2}\cdot n^{2}\cdot\exp(2B)\cdot\sqrt{\log(n/\delta)})\)_._
_Then, we have_
* _Part 1_ \(\|H^{\mathrm{dis}}-H^{\mathrm{cts}}\|_{F}\leq\frac{\lambda}{4}\)_._
* _Part 2._ \(\lambda_{\min}(H^{\mathrm{dis}})\geq\frac{3}{4}\lambda\)_._
_hold with probability at least \(1-\delta\)._
Proof.: **Proof of Part 1.** For any given pair \((i,j)\), \(H^{\mathrm{dis}}_{i,j}\) is computed as the mean of a set of independent random variables, denoted as:
\[H^{\mathrm{dis}}_{i,j}=\ \frac{1}{m}\sum_{r\in[m]}(\langle x_{i},x_{j}\rangle) \cdot\exp(\langle w_{r},x_{i}\rangle)\cdot\exp(\langle w_{r},x_{j}\rangle).\]
Then the expectation of \(H^{\mathrm{dis}}_{i,j}\) is
\[\operatorname*{\mathbb{E}}[H^{\mathrm{dis}}_{i,j}]=\frac{1}{m}\sum_{r=1}^{m} \operatorname*{\mathbb{E}}_{w_{r}\sim\mathcal{N}(0,\sigma^{2}I_{d})}\left[( \langle x_{i},x_{j}\rangle)\cdot\exp(\langle w_{r},x_{i}\rangle)\cdot\exp( \langle w_{r},x_{j}\rangle)\right]\]
\[= \sum_{i\in[n]}\sum_{j\in[n]}|H^{\rm dis}_{i,j}-H^{\rm cts}_{i,j}|^{2}\] \[\leq \frac{1}{m}n^{2}\exp(2B)\sqrt{\log(n/\delta)}\] \[\leq \lambda^{2}/16 \tag{4}\]
The last step in the derivation follows directly from our choice of \(m\).
**Proof of Part 2**
Then we have
\[\lambda_{\rm min}(H^{\rm dis}) \geq \lambda_{\rm min}(H^{\rm cts})-\|H^{\rm dis}-H^{\rm cts}\|\] \[\geq \lambda_{\rm min}(H^{\rm cts})-\|H^{\rm dis}-H^{\rm cts}\|_{F}\] \[\geq \lambda-\lambda/4\] \[\geq 3\lambda/4\]
where the first step is due to Fact 4.6, and the second step is from Fact 4.6, the third step is from Eq. (4), the fourth step is because of adding terms.
### Given \(w\) within a small ball bounding changes of \(H\)
Under the assumption that \(w\) is contained in a small ball, in Section 6.3, we can restrict the discrepancy between the continuous and discrete versions
**Definition 6.4**.: _We define_
\[\Delta w_{r}:=\widetilde{w}_{r}-w_{r}\]
_and_
\[\|\Delta w_{r}\|_{2}\leq R\]
**Definition 6.5**.: _We define_
\[z_{i}:=\widetilde{w}_{r}^{\top}x_{i}\]
**Definition 6.6**.: _For \(i\in[n],j\in[n]\), \(r\in[m]\), we define_
\[s_{r,i,j}:=\exp(\widetilde{w}_{r}^{\top}x_{i})\cdot\exp( \widetilde{w}_{r}^{\top}x_{j})-\exp(w_{r}^{\top}x_{i})\cdot\exp(w_{r}^{\top}x _{j}).\]
_By fixing \(i\) and \(j\), \(s_{r,i,j}\) is simplified to \(s_{r}\) with \(s_{r}\) as a random variable that depends solely on \(\widetilde{w}_{r}\). The set of random variables \(s_{r=1}^{\ m}\) are independent of each other, because \(\{\widetilde{w}_{r}\}_{r=1}^{m}\) are independent._
The following Lemma can be viewed as a variation of Lemma 3.2 in [11], a variation of Lemma C.4 and C.5 in [1] a variation Lemma C.2 in [11] and a variation of Lemma G.2 in [11].
**Lemma 6.7** (perturbed \(w\)).: _Suppose \(\widetilde{w}_{1},\cdots,\widetilde{w}_{m}\) are independent and identically distributed with a normal distribution \(\mathcal{N}(0,\sigma^{2}I)\), and let \(B\) be defined as in Definition 6.1. For any set of weight vectors \(w_{1},\cdots,w_{m}\in\mathbb{R}^{d}\) that satisfy \(\|\widetilde{w}_{r}-w_{r}\|_{2}\leq R\) where \(R\in(0,0.001)\), for any \(r\in[m]\), we define the function \(H:\mathbb{R}^{m\times d}\rightarrow\mathbb{R}^{n\times n}\) as follows_
\[H(w)_{i,j}=\frac{1}{m}x_{i}^{\top}x_{j}\sum_{r\in[m]}\exp(w_{r}^ {\top}x_{i})\cdot\exp(w_{r}^{\top}x_{j}).\]
_Therefore we can conclude that with probability at least \(1-(n^{2}\cdot\exp(-mR/10)+\delta)\), we have_
\[\|H(w)-H(\widetilde{w})\|_{F}\leq 3nR\exp(2B),\]
Proof.: The random variable we care is
\[\sum_{i\in[n]}\sum_{j\in[n]}|H(\widetilde{w})_{i,j}-H(w)_{i,j}|^{2}\] \[\leq\frac{1}{m^{2}}\sum_{i\in[n]}\sum_{j\in[n]}\left(\sum_{r\in[ m]}\exp((\widetilde{w}_{r},x_{i}))\cdot\exp((\widetilde{w}_{r},x_{j}))-\exp( \langle w_{r},x_{i}\rangle)\cdot\exp(\langle w_{r},x_{j}\rangle)\right)^{2}\] \[=\frac{1}{m^{2}}\sum_{i\in[n]}\sum_{j\in[n]}\Big{(}\sum_{r\in[m]} s_{r,i,j}\Big{)}^{2},\]
where the last step is due to \(\forall r,i,j\).
For simplicity, we drop the index of \(i,j\) in \(s_{r,i,j}\). We only keep the \(r\), i.e., using \(s_{r}\) to denote \(s_{r,i,j}\).
We define \(s_{r}\) as follows
\[s_{r}:=\exp(\widetilde{w}_{r}^{\top}x_{i})\cdot\exp(\widetilde{w}_{r}^{\top}x_ {j})-\exp(w_{r}^{\top}x_{i})\cdot\exp(w_{r}^{\top}x_{j}).\]
Using Lemma 6.2, we have
\[\Pr[\forall r\in[m],s_{r}\leq\exp(2B)]\geq 1-\delta. \tag{5}\]
In the rest of the proof, we will condition on the above event is holding.
We can obtain the upper bound of \(\mathbb{E}_{\widetilde{w}_{r}}[s_{r}]\), we have
\[\underset{\widetilde{w}_{r}}{\mathbb{E}}[s_{r}] \leq \underset{\widetilde{w}_{r}}{\mathbb{E}}[\exp(\widetilde{w}_{r}^ {\top}x_{i})\cdot\exp(\widetilde{w}_{r}^{\top}x_{j})-\exp(w_{r}^{\top}x_{i}) \cdot\exp(w_{r}^{\top}x_{j})|] \tag{6}\] \[\leq \underset{\widetilde{w}_{r}}{\mathbb{E}}[\exp((w_{r}+\Delta w_{ r})^{\top}x_{i})\cdot\exp((w_{r}+\Delta w_{r})^{\top}x_{j})-\exp(w_{r}^{\top}x_{i}) \cdot\exp(w_{r}^{\top}x_{j})|]\] \[\leq \underset{\widetilde{w}_{r}}{\mathbb{E}}[(\exp(\Delta w_{r}^{ \top}(x_{i}+x_{j}))-1)\cdot\exp(w_{r}^{\top}x_{i})\cdot\exp(w_{r}^{\top}x_{j})]\] \[\leq \exp(w_{r}^{\top}x_{i})\cdot\exp(w_{r}^{\top}x_{j})\cdot\underset {\widetilde{w}_{r}}{\mathbb{E}}[|(\exp(\Delta w_{r}^{\top}(x_{i}+x_{j}))-1)|]\]
where the 1st step is due to Definition 6.6, the 2nd step is because of Definition 6.4, the 3rd step is by selecting the same term \(\exp(w_{r}^{\top}x_{i})\cdot\exp(w_{r}^{\top}x_{j})\), and the last step follows from the reason that \(i,j\) are fixed.
We have
\[\underset{\widetilde{w}_{r}}{\mathbb{E}}[s_{r}] \leq \exp(w_{r}^{\top}x_{i})\cdot\exp(w_{r}^{\top}x_{j})\cdot\underset {\widetilde{w}_{r}}{\mathbb{E}}[|(\exp(\Delta w_{r}^{\top}(x_{i}+x_{j}))-1)|] \tag{7}\] \[\leq \exp(2B)\cdot\underset{\widetilde{w}_{r}}{\mathbb{E}}[|\exp( \Delta w_{r}^{\top}(x_{i}+x_{j}))-1|]\] \[\leq \exp(2B)\cdot\mathbb{E}[4R]\] \[\leq 4R\cdot\exp(2B)\]
where the 1st step is due to Eq. (6), the 2nd step is due to Lemma 6.2, and the 3rd step is due to Lemma 6.2, and the last step follows from simple algebra.
We have,
\[\underset{\widetilde{w}_{r}}{\mathbb{E}}\left[\left(s_{r}- \underset{\widetilde{w}_{r}}{\mathbb{E}}[s_{r}]\right)^{2}\right] = \underset{\widetilde{w}_{r}}{\mathbb{E}}[s_{r}^{2}]-(\underset{ \widetilde{w}_{r}}{\mathbb{E}}[s_{r}])^{2} \tag{8}\] \[\leq \underset{\widetilde{w}_{r}}{\mathbb{E}}[s_{r}^{2}]\] \[= \underset{\widetilde{w}_{r}}{\mathbb{E}}\left[(\exp(\widetilde{w }_{r}^{\top}x_{i})\cdot\exp(\widetilde{w}_{r}^{\top}x_{j})-\exp(w_{r}^{\top}x_ {i})\cdot\exp(w_{r}^{\top}x_{j}))^{2}\right]\] \[\leq \underset{\widetilde{w}_{r}}{\mathbb{E}}\left[(\exp(4B)(\exp( \Delta w_{r}^{\top}(x_{i}+x_{j}))-1))^{2}\right]\] \[\leq 16R^{2}\cdot\exp(4B)\]
where the 1st step is due to the definition of variance, the 2nd step is due to simple algebra, the 3rd step is due to the Definition 6.6, the 4th step follows from Lemma 6.2, the 5th step follows from Lemma 6.2.
In the rest of the proof, let us condition on the above event is holding.
Combining Eq. (5) Eq. (7), we also have
\[|s_{r}-\mathop{\mathbb{E}}_{\widetilde{w}_{r}}[s_{r}]| \leq(1+4R)\cdot\exp(2B)\] \[\leq 2\exp(2B),\]
where the second step is from \(4R\leq 1\).
Using Bernstein inequality (Lemma 4.8), we have
\[\Pr\left[Z>t\right]\leq\exp\left(-\frac{t^{2}/2}{\operatorname{Var}[Z]+Mt/3} \right).\]
where
\[Z :=\,\sum_{r=1}^{m}s_{r}-\mathbb{E}[s_{r}],\] \[\operatorname{Var}[Z] :=\,16mR^{2}\exp(4B),\] \[M :=\,2\exp(2B).\]
Replacing \(t=mR\exp(2B)\), we know that
\[\operatorname{Var}[Z]+Mt/3 =16mR^{2}\exp(4B)+2\exp(2B)mR\exp(2B)/3\] \[\leq\,mR\exp(4B)\]
Thus,
\[\frac{t^{2}/2}{\operatorname{Var}[Z]+Mt/3}\geq\frac{(mR\exp(B))^{2}}{mR\exp(4 B)}=mR\]
and
\[\Pr\left[X\geq mR\exp(2B)\right]\leq\,\exp\left(-mR\right) \tag{9}\]
Using the bound on expectation
\[\Pr\left[\frac{\sum_{r\in[m]}s_{r}}{m}\geq 3R\cdot\exp(2B)\right]\leq\,\exp \left(-Rm/10\right)\]
### Controlling the Loss at initialization
The initialization of choice \(a\in\{-1,+1\}^{m}\) is purely random in [2, 3]. That initialization will make \(\|F(0)\|_{2}^{2}\) is a bit large. We can use the initialization idea in [13]. The idea is forcing \(a_{2r}=-a_{2r-1}\). This will gives us that \(\|F(0)\|_{2}^{2}=0\).
**Claim 6.8**.: _We have_
\[\|y-F(0)\|_{2}^{2}=\|y\|_{2}^{2}.\]
Proof.: We also duplicate the weights such that \(w_{2r}=w_{2r-1}\). Therefore, the proof is straightforward.
Convergence
In Section 7, when the neural network is excessively over-parametrized, we observe a linear decrease in the training error, leading to its ultimate convergence to \(0\). In Section 7.1, we present our paper's main result. Section 7.2 outlines the induction lemma for weights, while Section 7.3 introduces the induction lemma for loss. Finally, in Section 7.4, we provide the induction lemma for gradient.
### Main Result
**Theorem 7.1** (Main result, formal version of Theorem 1.1).: _If the following conditions hold_
* _Let_ \(\lambda=\lambda_{\min}(H^{\mathrm{cts}})>0\)__
* \(m=\Omega(\lambda^{-2}n^{2}\exp(4B)\log^{2}(n/\delta))\)__
* _Let_ \(w_{r}\) _and_ \(a_{r}\) _be defined as Definition_ 4.3_._
* _Let_ \(\eta=0.01\lambda/(mn^{2}\exp(4B))\)__
* _Let_ \(T=\Omega((m\eta\lambda)^{-1}\log(n/\epsilon))=\Omega(\lambda^{-2}n^{2}\exp(4B) \cdot\log(n/\epsilon))\)__
_Then, we have running algorithm with \(T\) iterations_
\[\|F(T)-y\|_{2}^{2}\leq\epsilon\]
Proof.: We choose \(\sigma=1\).
We have proved \(\|F(0)-y\|_{2}^{2}\leq n\).
Using the choice of \(T\), then it directly follows from alternatively applying Lemma 7.3 and Lemma 7.2.
Since \(\exp(\Theta(B))=n^{o(1)}\), then in Theorem 1.1, we can simplify the \(n^{2}\exp(\Theta(B))=n^{2+o(1)}\).
### Induction Part 1. For Weights
**Lemma 7.2** (Induction Part 1 for weights).: _If the following condition hold_
* _General Condition 1. Let_ \(\lambda=\lambda_{\min}(H^{\mathrm{cts}})>0\)__
* _General Condition 2._ \(\eta=0.01\lambda/(mn^{2}\exp{(4B)})\)__
* _General Condition 3. Let_ \(D\) _be defined as Definition_ 8.1__
* _General Condition 4. Let_ \(w_{r}\) _and_ \(a_{r}\) _be defined as Definition_ 4.3_._
* _General Condition 5._ \(D<R\)__
* **Weights Condition.**__\(\|w_{r}(i)-w_{r}(0)\|_{2}\leq R\) _for all_ \(i\in[t]\)__
* **Loss Condition.**__\(\|F(i)-y\|_{2}^{2}\leq\|F(0)-y\|_{2}^{2}\cdot(1-m\eta\lambda/2)^{i}\)_,_ \(\forall i\in[t]\)__
* **Gradient Condition.**__\(\eta\|\Delta w_{r}(i)\|_{2}\leq 0.01\) _for all_ \(r\in[m]\)_, for all_ \(i\in[t]\)__
_For \(t+1\) and \(\forall r\in[m]\), it holds that:_
\[\|w_{r}(t+1)-w_{r}(0)\|_{2}\leq D.\]
Proof.: We have
\[\eta\sum_{i=0}^{\infty}(1-n\lambda/2)^{i/2}\] \[=\eta\sum_{i=0}^{\infty}(1-\eta\lambda/4)^{i}\] \[\leq\eta\frac{1}{\eta\lambda/4}\] \[\leq 8/\lambda \tag{10}\]
where the 1st step is due to Fact 4.7, the 2nd step is due to Fact 4.7, the last step is due to simple algebra.
Our approach involves utilizing the gradient's norm as a means of constraining the distance as follows:
\[\|w_{r}(0)-w_{r}(t+1)\|_{2}\] \[\leq\eta\sum_{i=0}^{t}\|\Delta w_{r}(i)\|_{2}\] \[\leq\eta\sum_{i=0}^{t}\exp(B+R)\cdot\sqrt{n}\cdot\|F(i)-y\|_{2}\] \[\leq\eta\sum_{i=0}^{t}(1-\eta\lambda/2)^{i/2}\cdot\exp(B+R)\cdot \sqrt{n}\cdot\|F(0)-y\|_{2}\] \[\leq 8\sqrt{n}\cdot\lambda^{-1}\cdot\exp(B+R)\|F(0)-y\|_{2}\] \[=D\]
where the 1st is from \(w_{r}(s+1)-w_{r}(s)=\eta\cdot\Delta w_{r}(s)\), the 2nd step is due to Lemma 8.5 for \(mt\) times, the 3rd step is due to Condition 3 in Lemma statement, the forth step is due to simple algebra, and the forth step is due to Eq. (10), the last step is due to Condition 2 in Lemma statement.
### Induction Part 2. For Loss
Now, we present our next induction lemma.
**Lemma 7.3** (Induction Part 2. For Loss).: _Let \(t\) be a fixed integer._
_If the following conditions hold_
* _General Condition 1. Let_ \(\lambda=\lambda_{\min}(H^{\mathrm{cts}})>0\)__
* _General Condition 2._ \(\eta=0.01\lambda/(mn^{2}\exp\left(4B\right))\)__
* _General Condition 3. Let_ \(D\) _be defined as Definition_ 8.1__
* _General Condition 4. Let_ \(w_{r}\) _and_ \(a_{r}\) _be defined as Definition_ 4.3_._
* _General Condition 5._ \(D<R\)__
* **Weight Condition.**__\(\|w_{r}(t)-w_{r}(0)\|_{2}\leq D<R\)_,_ \(\forall r\in[m]\)__
* **Loss Condition.**\(\|F(i)-y\|_{2}^{2}\leq(1-m\eta\lambda/2)^{i}\cdot\|F(0)-y\|_{2}^{2}\), _for all \(i\in[t]\)_
* **Gradient Condition.**\(\eta\|\Delta w_{r}(i)\|_{2}\leq 0.01\ \forall r\in[m]\)_, \(\forall i\in[t]\)_
_Then we have_
\[\|F(t+1)-y\|_{2}^{2}\leq(1-m\eta\lambda/2)^{t+1}\cdot\|F(0)-y\|_{2}^{2}.\]
Proof.: Recall the update rule (Definition 5.4),
\[w_{r}(t+1)=w_{r}(t)-\eta\cdot\Delta w_{r}(t)\]
\(\forall i\in[n]\), it follows that
\[F_{i}(t+1)-F_{i}(t)\] \[= \sum_{r\in[m]}a_{r}\cdot(\exp(\langle w_{r}(t+1),x_{i}\rangle)- \exp(\langle w_{r}(t),x_{i}\rangle))\] \[= \sum_{r\in[m]}a_{r}\cdot\exp(\langle w_{r}(t),x_{i}\rangle)\cdot (\exp(-\eta\langle\Delta w_{r}(t),x_{i}\rangle)-1)\] \[= \sum_{r\in[m]}a_{r}\cdot\exp(w_{r}(t)^{\top}x_{i})\cdot(-\eta \langle\Delta w_{r}(t),x_{i}\rangle+\Theta(1)\eta^{2}\langle\Delta w_{r}(t),x _{i}\rangle^{2})\] \[= v_{1,i}+v_{2,i}\]
where the third step follows from \(|\eta\Delta w_{r}(t)^{\top}x_{i}|\leq 0.01\) and Fact 4.7, the last step is from
\[v_{1,i}:= \sum_{r=1}^{m}a_{r}\cdot\exp(\langle w_{r}(t),x_{i}\rangle)\cdot( -\eta\langle\Delta w_{r}(t),x_{i}\rangle)\] \[v_{2,i}:= \sum_{r=1}^{m}a_{r}\cdot\exp(\langle w_{r}(t),x_{i}\rangle)\cdot \Theta(1)\cdot\eta^{2}\cdot\langle\Delta w_{r}(t),x_{i}\rangle^{2}\]
Here \(v_{1,i}\) is linear in \(\eta\) and \(v_{2,i}\) is quadratic in \(\eta\). Thus, \(v_{1,i}\) is the first order term, and \(v_{2,i}\) is the second order term.
Recall the definition of \(H\) over timestamp \(t\) (see Definition 8.2)
\[H(t)_{i,j}=\frac{1}{m}\sum_{r\in[m]}x_{i}^{\top}x_{j}\exp(\langle w_{r}(t),x_{ i}\rangle)\cdot\exp(\langle w_{r}(t),x_{j}\rangle),\]
Further, we define \(C_{1},C_{2},C_{3}\)
\[C_{1}= -2\eta(F(t)-y)^{\top}H(t)(F(t)-y),\] \[C_{2}= -2(F(t)-y)^{\top}v_{2},\] \[C_{3}= \|F(t+1)-F(t)\|_{2}^{2}.\]
Then we can rewrite
\[\|y-F(t+1)\|_{2}^{2}=\|y-F(t)\|_{2}^{2}+C_{1}+C_{2}+C_{3}\]
We have
\[\|F(t)-y\|_{2}^{2}\leq\|F(t-1)-y\|_{2}^{2}\cdot(1-m\eta\lambda/2)\]
where the first step follows is due to Lemma 9.2.
Thus, we complete the proof.
### Induction Part 3. For Gradient
**Lemma 7.4** (Induction Part 3. For Loss).: _Let \(t\) be a fixed integer._
_If the following conditions hold_
* _General Condition 1. Let_ \(\lambda=\lambda_{\min}(H^{\mathrm{cts}})>0\)__
* _General Condition 2._ \(\eta=0.01\lambda/(mn^{2}\exp{(4B)})\)__
* _General Condition 3. Let_ \(D\) _be defined as Definition_ 8.1__
* _General Condition 4. Let_ \(w_{r}\) _and_ \(a_{r}\) _be defined as Definition_ 4.3_._
* _General Condition 5._ \(D<R\)__
* **Weight Condition.**__\(\|w_{r}(t)-w_{r}(0)\|_{2}\leq D<R\)_,_ \(\forall r\in[m]\)__
* **Loss Condition.**__\(\|F(i)-y\|_{2}^{2}\leq\|F(0)-y\|_{2}^{2}\cdot(1-m\eta\lambda/2)^{i}\)_,_ \(\forall i\in[t]\)__
* **Gradient Condition.**__\(\eta\|\Delta w_{r}(i)\|_{2}\leq 0.01\)__\(\forall r\in[m]\)_,_ \(\forall i\in[t]\)__
_Then we have_
\[\eta\|\Delta w_{r}(t+1)\|_{2} \leq 0.01,\forall r\in[m]\]
Proof.: We have
\[\eta\|\Delta w_{r}(t+1)\|_{2} = \eta\left\|\sum_{i=1}^{n}a_{r}x_{i}\cdot(y_{i}-F_{i}(t+1))\cdot \exp(\langle w_{r}(t+1),x_{i}\rangle)\right\|_{2}\] \[\leq \eta\exp(B+R)\cdot\sum_{i=1}^{n}|y_{i}-F_{i}(t+1)|\] \[\leq \eta\exp(B+R)\cdot\sqrt{n}\cdot\|y-F(s)\|_{2}\] \[\leq \eta\exp(B+R)\cdot\sqrt{n}\cdot\|y-F(0)\|_{2}\] \[\leq \eta\exp(B+R)\cdot n\] \[\leq 0.01\]
where the 1st step follows from Definition 5.3, the 2nd step is due to Lemma 6.2, the 3rd step is due to Cauchy-Schwartz inequality, the 4th step follows is due to **Loss Condition**, the 5th step follows from \(\|y-F(0)\|_{2}=\sqrt{n}\), the sixth step is due to the choice of \(\eta\).
Induction Part 1: For Weight
In Section 8, we present the weight bound, which helps us complete the proof. Section 8.1 introduces various definitions used throughout the paper, while Section 8.2 proposes the bounding gradient lemma and its corresponding proof.
### Definition of \(D\)
To simplify the notation, we present the definition as follows.
**Definition 8.1**.: _We define \(D_{\mathrm{cts}}\)_
\[D:=8\cdot\lambda^{-1}\cdot\exp(B+R)\cdot\frac{\sqrt{n}}{m}\cdot\|y-F(0)\|_{2}.\]
We define the kernel with respect to timestamp \(s\).
**Definition 8.2**.: _Let \(H(s)\in\mathbb{R}^{n\times n}\) be a matrix defined for any \(s\) in the interval \([0,t]\)._
\[H(s)_{i,j}:=\frac{1}{m}\sum_{r\in[m]}x_{i}^{\top}x_{j}\cdot\exp\left(\langle w _{r}(s),x_{i}\rangle\right)\cdot\exp\left(\langle w_{r}(s),x_{j}\rangle\right).\]
**Definition 8.3**.: _For any matrix \(P\in[-1,1]^{m\times n}\), we define asymmetric matrix \(H_{\mathrm{asy}}\)_
\[H_{\mathrm{asy}}(s)_{i,j}:=\frac{1}{m}\sum_{r\in[m]}x_{i}^{\top}x_{j}\cdot p_{ i,r}\cdot\exp\left(\langle w_{r}(s),x_{i}\rangle\right)\cdot\exp\left(\langle w _{r}(s),x_{j}\rangle\right).\]
**Claim 8.4**.: _We have_
\[\|H_{\mathrm{asy}}(s)\|_{\infty}\leq\exp(2(B+R)).\]
_holds with probability \(1-\delta\)._
Proof.: \[\|H(P)\|_{\infty}=\max_{i\in[n],j\in[n]}\{\frac{1}{m}\sum_{r\in[m]}x_{i}^{\top }x_{j}\cdot p_{i,r}\cdot\exp\left(w_{r}(s)^{\top}x_{i}\right)\cdot\exp\left(w_ {r}(s)^{\top}x_{j}\right)\}\]
where the first step is from Definition 8.3.
It is sufficient to make a bound for each \(i\in[n]\) and \(j\in[n]\).
We have
\[\frac{1}{m}\sum_{r\in[m]}x_{i}^{\top}x_{j}\cdot p_{i,r}\cdot\exp \left(\langle w_{r}(s),x_{i}\rangle\right)\cdot\exp\left(\langle w_{r}(s),x_{ j}\rangle\right)\] \[\leq \frac{1}{m}\sum_{r\in[m]}\exp\left(\langle w_{r}(s),x_{i}\rangle \right)\cdot\exp\left(\langle w_{r}(s),x_{j}\rangle\right)\] \[\leq \frac{1}{m}\sum_{r\in[m]}\exp\left(2(R+B)\right)\] \[= \exp(2(B+R))\]
the 1st step is from \(\|x_{i}\|_{2}\leq 1\) and \(|p_{i,r}|\leq 1\), the second step is due to Lemma 6.2.
### Bounding the gradient at any time
In this section, we bound the gradient at any time.
**Lemma 8.5**.: _It the following condition hold,_
* \(\|w_{r}(s)-w_{r}(0)\|_{2}\leq R\)__
_For any timestamp at time \(s\), we have_
\[\|\Delta w_{r}(s)\|_{2}\leq\exp(B+R)\sqrt{n}\|y-F(s)\|_{2}\]
Proof.: We have
\[\|\Delta w_{r}(s)\|_{2} =\,\left\|\sum_{i=1}^{n}(y_{i}-F_{i})a_{r}x_{i}\cdot\exp(w_{r}(s) ^{\top}x_{i})\right\|_{2}\] \[\leq\,\exp(B+R)\cdot\sum_{i=1}^{n}|y_{i}-F_{i}(s)|\] \[\leq\,\exp(B+R)\cdot\sqrt{n}\cdot\|y-F(s)\|_{2}\]
where the first step follows from Definition 5.3, the second step follows from Lemma 6.2, the third step follows from Cauchy-Schwartz inequality.
**Lemma 8.6**.: _It the following condition hold,_
* \(\eta=0.01\lambda/(mn^{2}\exp(4B))\)__
* \(\|w_{r}(s)-w_{r}(0)\|_{2}\leq R\)__
_For any timestamp at time \(s\), we have_
\[\eta\|\Delta w_{r}(s)\|_{2}\leq 0.01\]
Proof.: This trivially follows from choice of \(\eta\).
Induction Part 2: For Loss
In Section 9, we establish a bound for the loss at any time. To accomplish this, we decompose the loss \(\|y-F(k+1)\|_{2}^{2}\) into three parts, namely \(C_{1},C_{2}\), and \(C_{3}\), which are defined and discussed in Section 9.1. We provide our choices for \(m\) and \(\eta\) in Section 9.2, while Section 9.3, Section 9.4, and Section 9.5 respectively establish bounds for \(C_{1},C_{2}\), and \(C_{3}\).
### Decomposition for \(\|y-F(t+1)\|_{2}^{2}\)
In this section, we decompose the loss \(\|y-F(t+1)\|_{2}^{2}\) into three parts \(C_{1},C_{2}\) and \(C_{3}\).
**Lemma 9.1**.: _Assuming the following condition is met:_
* \(C_{1}=-2m\eta(F(t)-y)^{\top}H(t)(F(t)-y)\)__
* \(C_{2}=2m\eta^{2}(F(t)-y)^{\top}H_{\rm{asy}}(t)(F(t)-y)\)__
* \(C_{3}=\|F(t+1)-F(t)\|_{2}^{2}\)__
_then_
\[\|F(t+1)-y\|_{2}^{2}\leq\|F(t)-y\|_{2}^{2}+C_{1}+C_{2}+C_{3}\]
Proof.: In the following manner, we can express \(F(t+1)-F(t)\in\mathbb{R}^{n}\):
\[F(t+1)-F(t)=v_{1}+v_{2}.\]
Using the notation of \(H\), we can express \(v_{1,i}\in\mathbb{R}\) as follows:
\[v_{1,i} = \sum_{r\in[m]}a_{r}\cdot\exp(\langle x_{i},w_{r}(t)\rangle)\cdot (-\eta\langle x_{i},\Delta w_{r}(t)\rangle)\] \[= \sum_{r\in[m]}a_{r}\cdot\exp(\langle x_{i},w_{r}(t)\rangle)\cdot (-\eta\sum_{j\in[n]}(F_{j}(t)-y_{j})a_{r}x_{j}^{\top}\exp(w_{r}^{\top}x_{j}))x _{i}\] \[= -m\eta\cdot\frac{1}{m}\sum_{j\in[n]}x_{i}^{\top}x_{j}(F_{j}-y_{j} )\sum_{r\in[m]}\exp(\langle w_{r}(t),x_{i}\rangle)\cdot\exp(\langle w_{r}(t),x _{j}\rangle)\] \[= -m\eta\cdot\sum_{j\in[n]}(F_{j}-y_{j})(H_{i,j}(t)),\]
where the second step follows from Definition 5.3.
The equation above indicates that the vector \(v_{1}\in\mathbb{R}^{n}\) can be expressed as
\[v_{1}=m\eta(y-F(t))^{\top}(H(t)). \tag{11}\]
Let \(p_{i,r}\in[-1,1]\). Similarly
\[v_{2,i} = \sum_{r\in[m]}a_{r}\cdot\exp(\langle w_{r}(t),x_{i}\rangle)\cdot (\eta^{2}p_{i,r}\langle\Delta x_{i},w_{r}(t)\rangle)\] \[= \sum_{r\in[m]}a_{r}\cdot\exp(\langle w_{r}(t),x_{i}\rangle)\cdot (\eta^{2}p_{i,r}\cdot\sum_{j=1}^{n}(F_{j}(t)-y_{j})a_{r}x_{j}^{\top}\exp( \langle w_{r},x_{j}\rangle))x_{i}\]
\[= m\eta^{2}\frac{1}{m}\sum_{j\in[n]}x_{i}^{\top}x_{j}(F_{j}-y_{j})\sum_ {r=1}^{m}p_{i,r}\exp(\langle w_{r}(t),x_{i}\rangle)\cdot\exp(\langle w_{r}(t),x _{j}\rangle)\] \[= m\eta^{2}\sum_{j=1}^{n}(F_{j}-y_{j})((H_{\rm asy}(t))_{i,j}),\]
The expression \(\|y-F(t+1)\|_{2}^{2}\) can be rewritten in the following:
\[\|y-F(t+1)\|_{2}^{2}\] \[= \|y-F(t)-(F(t+1)-F(t))\|_{2}^{2}\] \[= \|y-F(t)\|_{2}^{2}-2(y-F(t))^{\top}(F(t+1)-F(t))+\|F(t+1)-F(t)\|_ {2}^{2}.\]
We can rephrase the second term in the Equation above as follows:
\[\langle y-F(t),F(t+1)-F(t)\rangle\] \[= \langle y-F(t),v_{1}+v_{2}\rangle\] \[= \langle y-F(t),v_{1}\rangle+\langle y-F(t),v_{2}\rangle\] \[= m\eta(F(t)-y)^{\top}H(t)(F(t)-y)-m\eta^{2}(F(t)-y)^{\top}H_{\rm asy }(F(t)-y),\]
where the third step is from Eq. (11).
Therefore, we can conclude that
\[\|F(t+1)-y\|_{2}^{2}\leq\|F(t)-y\|_{2}^{2}+C_{1}+C_{2}+C_{3}\]
where the last step follows from Lemma 9.2.
### Choice of Parameters
In this section, we propose our choice of parameters \(m,\eta,R,B\).
**Lemma 9.2**.: _If the following conditions hold_
* _Condition 1._ \(m=\Omega(\lambda^{-2}n^{2}\exp(4B)\log(n/\delta))\)__
* _Condition 2._ \(\eta=0.01\lambda/(mn^{2}\exp(4B))\)__
* _Condition 3._ \(R=0.01\lambda/(n\exp(B))\)__
* _Required by Claim_ 9.3__
* _Condition 4._ \(R\leq 1\leq B\)__
* _Required by Claim_ 9.4 _and Claim_ 9.5__
* _Condition 5._ \(D=8\lambda^{-1}\exp(B+R)\frac{\sqrt{n}}{m}\|y-F(0)\|_{2}\)__
* _Condition 6._ \(D<R\)__
* _Condition 7._ \(\eta\|\Delta_{r}(t)\|_{2}\leq 0.01\) _for all_ \(r\in[m]\)__
* _Required by Claim_ 9.5__
_Then it holds that_
\[\|F(t+1)-y\|_{2}^{2}\leq\|F(t)-y\|_{2}^{2}\cdot(1-m\eta\lambda/2)\]
_holds with probability \(1-\delta\)._
Proof.: We can show
\[\|F(t+1)-y\|_{2}^{2} \leq\|F(t)-y\|_{2}^{2}+C_{1}+C_{2}+C_{3}\] \[\leq(1-m\eta\lambda+2m^{2}\eta^{2}n^{2}\exp(4B))\cdot\|F(t)-y\|_{2 }^{2}.\]
where the first step follows from Lemma 9.1, the second step follows from Claim 9.3, Claim 9.4, and Claim 9.5.
Choice of \(\eta\).Next, we want to choose \(\eta\) such that
\[(1-m\eta\lambda+2m^{2}\eta^{2}n^{2}\exp(4B))\leq(1-m\eta\lambda/2). \tag{12}\]
Using the choice of \(\eta\) in Condition 2
\[2m^{2}\eta^{2}n^{2}\exp(4B)\leq m\eta\lambda/4\]
This indicates:
\[\|F(t+1)-y\|_{2}^{2}\leq(1-m\eta\lambda/2)\cdot\|F(t)-y\|_{2}^{2} \tag{13}\]
lower bound for \(m\), over-parametrization SizeWe require the following two conditions
* \(D=8\lambda^{-1}\exp(B+R)\cdot\frac{\sqrt{n}}{m}\|y-F(0)\|_{2}<R=0.01\lambda/(n \exp(B))\)
* \(\|y-F(0)\|_{2}=O(\sqrt{n})\)
* \(3n^{2}\exp(-mR/10)\leq\delta\)
Therefore, it suffices to choose:
\[m=\Omega(\lambda^{-2}n^{2}\exp(4B)\log(n/\delta)).\]
### Bounding the first order term
In this section, we bound the first order term \(C_{1}\).
**Claim 9.3**.: _If the following conditions hold_
* _Let_ \(B\) _be defined as Definition_ 6.1__
* \(C_{1}=-2m\eta(F(t)-y)^{\top}H(t)(F(t)-y)\)__
* \(R\leq 0.01\lambda/(n\exp(B))\)__
* \(m=\Omega(\lambda^{-2}\cdot n^{2}\cdot\exp(2B)\cdot\sqrt{\log(n/\delta)})\)__
_Then, we have_
\[C_{1}\leq-m\eta\lambda\cdot\|y-F(t)\|_{2}^{2}\]
_holds with probability at least \(1-(n^{2}\cdot\exp(-mR/10)+\delta)\)._
Proof.: By Lemma 6.7, with probability \(1-(n^{2}\cdot\exp(-mR/10)+\delta)\), we have
\[\|H(0)-H(t)\|_{F}\] \[\leq 3nRe^{B}\] \[\leq \lambda/4 \tag{14}\]
where the last step follows from choice of \(R\) (see Claim Statement).
Given that \(\lambda=\lambda_{\min}(H(0))\), by Lemma 6.3, we have
\[\lambda_{\min}(H(t))\] \[\geq \lambda_{\min}(H(0))-\|H(0)-H(t)\|\] \[\geq \lambda/2.\]
where the second step follows from \(\lambda_{\min}(H(0))\geq\lambda/2\) and Eq.(14).
And now we can conclude that
\[(F(t)-y)^{\top}H(t)(F(t)-y)\geq\lambda/2\cdot\|F(t)-y\|_{2}^{2}.\]
### Bounding the second order term
In this section, we bound the second order term \(C_{2}\).
**Claim 9.4**.: _If the following conditions hold_
* \(C_{2}=2\langle y-F(t),v_{2}\rangle\)_._
* \(R<B\)__
_Then we can conclude that_
\[C_{2}\leq 2m\eta^{2}n\exp(4B)\cdot\|F(t)-y\|_{2}^{2}.\]
_with probability at least \(1-n\cdot\exp(-mR)\)._
Proof.: It holds that
\[C_{2} \leq 2m\eta^{2}(F(t)-y)^{\top}H_{\mathrm{asy}}(F(t)-y)\] \[\leq 2m\eta^{2}\|F(t)-y\|_{2}^{2}\cdot\|H_{\mathrm{asy}}\|\] \[\leq 2m\eta^{2}\|F(t)-y\|_{2}^{2}\cdot\|H_{\mathrm{asy}}\|_{F}\] \[\leq 2m\eta^{2}\|F(t)-y\|_{2}^{2}\cdot n\|H_{\mathrm{asy}}\|_{\infty}\] \[\leq 2m\eta^{2}\|F(t)-y\|_{2}^{2}\cdot n\cdot\exp(4B)\]
where the first step is from \(P\in[-1,1]^{m\times n}\), the second step is from Fact 4.6, the third step is from Fact 4.6, the forth step follows from Fact 4.6, the fifth step follows from Claim 8.4.
### Bounding \(\|F(t+1)-F(t)\|_{2}^{2}\)
In this section, we bound the third order term \(C_{3}\).
**Claim 9.5**.: _If the following conditions hold_
* \(C_{3}=\|F(t+1)-F(t)\|_{2}^{2}\)_._
* \(\eta\|\Delta w_{r}(t)\|_{2}\leq 0.01\)__
* \(R\leq B\)__
_Then with probability at least \(1-\delta\), we have_
\[C_{3}\leq m^{2}\eta^{2}\cdot n^{2}\cdot\exp(8B)\cdot\|F(t)-y\|_{2}^{2}.\]
Proof.: According to definition of \(F_{i}(t)\), we have
\[F_{i}(t+1)-F_{i}(t)\] \[= \sum_{r\in[m]}a_{r}\cdot(\exp(\langle x_{i},w_{r}(t+1)\rangle)- \exp(\langle w_{r}(t),x_{i}\rangle))\] \[= \sum_{r\in[m]}a_{r}\cdot\exp(\langle x_{i},w_{r}(t)\rangle)\cdot (\exp(-\eta\langle\Delta w_{r}(t),x_{i}\rangle)-1)\]
Then we have
\[|F_{i}(t+1)-F_{i}(t)| \leq \sum_{r=1}^{m}\exp(w_{r}(t)^{\top}x_{i})\cdot|\exp(-\eta\Delta w _{r}(t)^{\top}x_{i})-1|\] \[\leq \sum_{r=1}^{m}\exp(B+R)\cdot|\exp(-\eta\Delta w_{r}(t)^{\top}x_{i} )-1|\] \[\leq \sum_{r=1}^{m}\exp(B+R)\cdot 2\eta\|\Delta w_{r}(t)\|_{2}\] \[\leq 2\eta\exp(B+R)\sum_{r=1}^{m}\|\Delta w_{r}(t)\|_{2}\] \[\leq 2\eta\exp(B+R)\sum_{r=1}^{m}\exp(B+R)\sqrt{n}\|y-F(t)\|_{2}\] \[= 2m\eta\exp(2(B+R))\sqrt{n}\|y-F(t)\|_{2} \tag{15}\]
where the second step is from Lemma 6.2, the third step is from \(\eta\|\Delta w_{r}(t)\|_{2}\leq 0.1\) and Fact 4.7, the fifth step is due to Lemma 8.5.
We can conclude
\[\|F(t+1)-F(t)\|_{2}^{2} \leq n\cdot(2m\eta\cdot\exp(2(B+R))\sqrt{n}\|F(t)-y\|_{2})^{2}\] \[\leq 4m^{2}\eta^{2}\cdot n^{2}\cdot\exp(4(B+R))\cdot\|F(t)-y\|_{2}^{2}\] \[\leq 4m^{2}\eta^{2}\cdot n^{2}\cdot\exp(8B)\cdot\|F(t)-y\|_{2}^{2}\]
where the first step is due to Eq. (15). |
2310.01140 | Neural Processing of Tri-Plane Hybrid Neural Fields | Driven by the appealing properties of neural fields for storing and
communicating 3D data, the problem of directly processing them to address tasks
such as classification and part segmentation has emerged and has been
investigated in recent works. Early approaches employ neural fields
parameterized by shared networks trained on the whole dataset, achieving good
task performance but sacrificing reconstruction quality. To improve the latter,
later methods focus on individual neural fields parameterized as large
Multi-Layer Perceptrons (MLPs), which are, however, challenging to process due
to the high dimensionality of the weight space, intrinsic weight space
symmetries, and sensitivity to random initialization. Hence, results turn out
significantly inferior to those achieved by processing explicit
representations, e.g., point clouds or meshes. In the meantime, hybrid
representations, in particular based on tri-planes, have emerged as a more
effective and efficient alternative to realize neural fields, but their direct
processing has not been investigated yet. In this paper, we show that the
tri-plane discrete data structure encodes rich information, which can be
effectively processed by standard deep-learning machinery. We define an
extensive benchmark covering a diverse set of fields such as occupancy,
signed/unsigned distance, and, for the first time, radiance fields. While
processing a field with the same reconstruction quality, we achieve task
performance far superior to frameworks that process large MLPs and, for the
first time, almost on par with architectures handling explicit representations. | Adriano Cardace, Pierluigi Zama Ramirez, Francesco Ballerini, Allan Zhou, Samuele Salti, Luigi Di Stefano | 2023-10-02T12:27:22Z | http://arxiv.org/abs/2310.01140v3 | # Neural Processing of
###### Abstract
Driven by the appealing properties of neural fields for storing and communicating 3D data, the problem of directly processing them to address tasks such as classification and part segmentation has emerged and has been investigated in recent works. Early approaches employ neural fields parameterized by shared networks trained on the whole dataset, achieving good task performance but sacrificing reconstruction quality. To improve the latter, later methods focus on individual neural fields parameterized as large Multi-Layer Perceptrons (MLPs), which are, however, challenging to process due to the high dimensionality of the weight space, intrinsic weight space symmetries, and sensitivity to random initialization. Hence, results turn out significantly inferior to those achieved by processing explicit representations, e.g., point clouds or meshes. In the meantime, hybrid representations, in particular based on tri-planes, have emerged as a more effective and efficient alternative to realize neural fields, but their direct processing has not been investigated yet. In this paper, we show that the tri-plane discrete data structure encodes rich information, which can be effectively processed by standard deep-learning machinery. We define an extensive benchmark covering a diverse set of fields such as occupancy, signed/unsigned distance, and, for the first time, radiance fields. While processing a field with the same reconstruction quality, we achieve task performance far superior to frameworks that process large MLPs and, for the first time, almost on par with architectures handling explicit representations.
## 1 Introduction
**A world of neural fields.** Neural fields (Xie et al., 2021) are functions defined at all spatial coordinates, parameterized by a neural network such as a Multi-Layer Perceptron (MLP). They have been used to represent different kinds of data, like image intensities, scene radiances, 3D shapes, etc. In the context of 3D world representation, various types of neural fields have been explored, such as the signed/unsigned distance field (_SDF/UDF_) (Park et al., 2019; Chibane et al., 2020; Gropp et al., 2020; Takikawa et al., 2021), the occupancy field (_OF_) (Mescheder et al., 2019; Peng et al., 2020), and the radiance field (_RF_) (Mildenhall et al., 2020). Their main advantage is the ability to obtain a continuous representation of the world, thereby providing information at every point in space, unlike discrete counterparts like voxels, meshes, or point clouds. Moreover, neural fields allow for encoding a 3D geometry at arbitrary resolution while using a finite number of parameters, i.e., the weights of the MLP. Thus, the memory cost of the representation and its spatial resolution are decoupled.
Recently, hybrid neural fields (Xie et al., 2021), which combine continuous neural elements (i.e., MLPs) with discrete spatial structures (e.g., voxel grids (Peng et al., 2020), point clouds (Tretschk et al., 2020), etc.) that encode local information, are gaining popularity due to faster inference (Reiser et al., 2021), better use of network capacity (Rebain et al., 2021) and suitability to editing tasks (Liu et al., 2020). In particular, the community has recently investigated tri-planes (Chan et al., 2022), a type of hybrid representation whose discrete components are three feature planes \((xy,yz,xz)\), due to its regular grid structure and compactness. Tri-planes have been deployed for _RF_(Hu et al., 2023) and _SDF_(Wang et al., 2023).
**Neural processing of neural fields.** As conjectured in De Luigi et al. (2023), due to their advantages and increasing adoption in recent years, neural fields may become one of the standard methods for storing and communicating 3D information, i.e., repositories of digital twins of real objects stored as neural networks will become available. In such a scenario, developing strategies to solve tasks such as classification or segmentation by directly processing neural fields becomes relevant to utilize these representations in practical applications. For instance, given a NeRF of a chair, classifying the weights of the MLP without rendering and processing images would be faster, less computationally demanding, and more straightforward, e.g., there is no need to understand where to sample the 3D space as _there is no sampling at all_.
Earlier methods on the topic, such as Functa (Dupont et al., 2022), approached this scenario with shared networks trained on the whole dataset conditioned on a different global embedding for each object. In this case, a neural field is realized by the shared network plus the embedding, which is then processed for downstream tasks. However, representing a whole dataset with a shared network is difficult, and the reconstruction quality of neural fields inevitably drops (see the plot in Fig. 1). For this reason, later approaches such as im7vec (De Luigi et al., 2023), NFN (Zhou et al., 2023b), NFT (Zhou et al., 2023b), and DWSNet (Navon et al., 2023) propose to process neural fields consisting of a single large MLP, such as SIREN (Sitzmann et al., 2020), for each object. Although this strategy effectively maintains the reconstruction capabilities of neural fields, task performance suffers due to the challenges introduced by the need to handle MLPs, such as the large number of weights and the difficulty of embedding inductive biases into neural networks aimed at processing MLPs. Moreover, randomly initialized MLPs trained on the same input data can converge to drastically different regions of the weight space due to the non-convex optimization problem and the symmetries of neural weight spaces (Entezari et al., 2021; Ainsworth et al., 2023). Thus, identifying a model capable of processing MLPs and generalizing among all possible initializations is not straightforward. Previous works partially address these problems: im2vec proposes an efficient and scalable architecture, and bypasses the initialization problem by fixing it across MLPs; NFN, NFT, and DWSNet design networks that are equivariant to weight symmetries. Nonetheless, all previous methods processing neural fields realized as single MLPs achieve unsatisfying performance, far from established architectures that operate on explicit representations, e.g., point clouds or meshes, as shown in Fig. 1 right.
**Neural processing of tri-plane neural fields.** To overcome the limitations of previous approaches and given the appealing properties of hybrid representations, in this paper, we explore the new research problem of tackling common 3D tasks by directly processing tri-plane neural fields. To this end, we analyze the information stored in the two components of this representation, which comprises a discrete feature space alongside a small MLP, and find out that the former contains rich semantic and geometric information. Based on this finding, we propose to process tri-plane neural fields by seamlessly applying, directly on the discrete feature space, standard neural architectures that have been developed and engineered over many years of research, such as CNNs (He et al., 2016) or, thanks to tri-plane compactness, even Transformers (Vaswani et al., 2017) (Fig. 1 left). Moreover, we note empirically that the same geometric structures are encoded in tri-planes fitted on the same shape from different initializations up to a permutation of the channels. Thus, we exploit this property to achieve robustness to the random initialization problem by processing tri-planes with standard architectures that are made invariant to permutation of the channels. We achieve much better
Figure 1: **Left: Neural processing of hybrid neural fields allows us to employ well-established architectures to tackle deep learning tasks while avoiding problems related to processing MLPs, such as the high-dimensional weight space and the random initialization. Right: We achieve performance better than other works on this topic, close to methods that operate directly on explicit representations. without sacrificing the reconstruction quality of neural fields.**
performance than all previous methods in classifying and segmenting objects represented as neural fields, almost on par with established architectures that operate on explicit representations, without sacrificing the representation quality (Fig. 1).
**Summary of our contributions.**_Code and benchmark data will be released upon publication._
\(\bullet\) We set forth the new research problem of solving tasks by directly processing tri-plane neural fields. We show that the discrete features encode rich semantic and geometric information, which can be elaborated by applying well-established architectures. Moreover, we note how similar information is stored in tri-planes with different initializations of the same shape. Yet, the information is organized with different channel orders.
\(\bullet\) We show that applying well-established architectures on tri-planes achieves much better results than processing neural fields realized as a large MLP. Moreover, we reveal that employing architectures made invariant to the channel order improves performance in the challenging but more realistic scenario of randomly initialized neural fields. In this way, we almost close the gap between methods that operate on explicit representations and those working directly on neural representations.
\(\bullet\) To validate our results, we build a comprehensive benchmark for tri-plane neural field classification. We test our method by classifying neural fields that model various fields (_UDF_, _SDF_, _OF_, _RF_). In particular, to the best of our knowledge, we are the first to classify NeRFs without explicitly reconstructing the represented signal.
\(\bullet\) Finally, as the tri-plane structure is independent of the represented field, we train a single network to classify diverse neural fields. Specifically, we show promising preliminary results of a unique model capable of classifying _UDF_, _SDF_, and _OF_.
## 2 Related work
**Neural fields.** Recent approaches have shown the ability of MLPs to parameterize fields representing any physical quantity of interest (Xie et al., 2021). The works focusing on representing 3D data with MLPs rely on fitting functions such as the unsigned distance (Chibane et al., 2020), the signed distance (Park et al., 2019; Gropp et al., 2020; Sitzmann et al., 2019; Jiang et al., 2020; Peng et al., 2020), the occupancy (Mescheder et al., 2019; Chen & Zhang, 2019), or the scene radiance (Mildenhall et al., 2020a). Among these approaches, SIREN (Sitzmann et al., 2020) uses periodic activation functions to capture high-frequency details. Recently, hybrid representations, in which the MLP is paired with a discrete data structure, have been introduced within the vision and graphic communities motivated by faster inference (Reiser et al., 2021), better use of network capacity (Rebain et al., 2021) and suitability to editing tasks (Liu et al., 2020). These data structures decompose the input coordinate space, either regularly, such as for voxel grids (Reiser et al., 2021; Fridovich-Keil et al., 2022; Liu et al., 2020), tri-planes (Wang et al., 2023; Chan et al., 2022; Wu & Zheng, 2022; Hu et al., 2023), and 4D tensors (Chen et al., 2022), or irregularly, such as for point clouds (Tretschk et al., 2020), and meshes (Peng et al., 2021). Unlike these works, we do not focus on designing a neural field representation, but we investigate how to directly process hybrid fields to solve tasks such as shape classification and 3D part segmentation. We focus on tri-planes due to their regular grid structure and compactness, which enable standard neural networks to process them seamlessly and effectively.
**Neural functionals.** Several very recent approaches aim at processing functions parameterized as MLPs by employing other neural networks. MLPs are known to exhibit weight space symmetries (Hecht-Nielsen, 1990), i.e., hidden neurons can be permuted across layers without changing the function represented by the network. Works such as DWSNet (Navon et al., 2023), NFN (Zhou et al., 2023a), and NFT (Zhou et al., 2023b) leverage weight space symmetries as an inductive bias to develop novel architectures designed to process MLPs. Both DWSNet and NFN devise neural layers equivariant to the permutations arising in MLPs. In contrast, NFT builds upon the intuition of achieving permutation equivariance by removing positional encoding from a Transformer architecture. Among the works processing MLPs, inr2vec (De Luigi et al., 2023) is the first that focuses specifically on MLPs representing 3D neural fields. It proposes a representation learning framework that compresses neural fields of 3D shapes into embeddings, which can then be used as input for downstream tasks. In the scenario addressed by inr2vec, DWSNet, NFN, and NFT, each neural field is parameterized by its own MLP. Differently, the framework proposed in Functa (Dupont et al., 2022) relies on learning priors on the whole dataset with a shared network and then encoding
each sample in a compact embedding. In this case, each neural field is parameterized by the shared network plus the embedding. In particular, Functa (Dupont et al., 2022) leverages meta-learning techniques to learn the shared network, which is modulated with latent vectors to represent each data point. These vectors are then used to address both discriminative and generative tasks. It is worth pointing out that, though not originally proposed as a framework to process neural fields, DeepSDF (Park et al., 2019) learns dataset priors by optimizing a reconstruction objective through a shared auto-decoder network conditioned on a shape-specific embedding. Thus, as investigated in De Luigi et al. (2023), the embeddings learnt by DeepSDF may be used for neural processing tasks similar to Functa's. However, as noted in De Luigi et al. (2023), shared network frameworks are problematic, as they cannot reconstruct the underlying signal with high fidelity and need a whole dataset to learn the neural field of an object. Thus, akin to inr2vec, DWSNet, NFN, and NFT, we adopt the setting in which an individual network represents each sample in a dataset, as it is easier to deploy in the wild and thus more likely to become the standard practice in neural field processing. Unlike all previous works, however, we process hybrid neural fields that combine an MLP with a discrete spatial data structure. By only processing the discrete component, we circumvent the issues arising from directly processing MLP weights and obtain remarkable performance.
## 3 Tri-plane hybrid neural fields
### Preliminaries
**Neural fields.** A field is a physical quantity defined for all domain coordinates. We focus on fields describing the 3D world, and thus on \(\mathbb{R}^{3}\) coordinates \(\mathbf{p}=(x,y,z)\). We consider the 3D fields commonly used in computer vision and graphics, i.e., the _SDF_(Park et al., 2019) and _UDF_(Chibane et al., 2020), which map coordinates to the signed an unsigned distance from the closest surface, respectively, the _OF_(Mescheder et al., 2019), which computes the occupancy probability, and the _RF_(Mildenhall et al., 2020), that outputs \((R,G,B)\) colors and density \(\sigma\). A field can be modelled by a function, \(\Phi\), parameterized by \(\theta\). Thus, for any point \(\mathbf{p}\), the field is given by \(\hat{\mathbf{q}}=\Phi(\mathbf{p};\theta)\). If parameters \(\theta\) are the weights of a neural network, \(\Phi\) is said to be a neural field. On the other hand, if some of the parameters are the weights of a neural network, whereas the rest encode local information within a discrete spatial structure, \(\Phi\) is a hybrid neural field (Xie et al., 2021).
**Tri-plane representation.** A special case of hybrid neural fields, originally proposed in Chan et al. (2022), is parameterized by a discrete tri-plane feature map, \(T\), and a small MLP network, \(M\) (Fig. 2, left). \(T\) consists of three orthogonal 2D feature maps, \(T=(\mathbf{F}_{xy},\mathbf{F}_{xz},\mathbf{F}_{yz})\), with \(\mathbf{F}_{xy},\mathbf{F}_{xz},\mathbf{F}_{yz}\in\mathbb{R}^{C\times H\times W}\), where \(C\) is the number of channels and \(W,H\) are the spatial dimensions of the feature maps. The feature vector associated with a 3D point, \(\mathbf{p}\), is computed by projecting the point onto the three orthogonal planes so to get the 2D coordinates, \(\mathbf{p}_{xy}\), \(\mathbf{p}_{xz}\), and \(\mathbf{p}_{yz}\), relative to each plane. Then, the four feature vectors corresponding to the nearest neighbours in each plane are bi-linearly interpolated to calculate three feature vectors, \(\mathbf{f}_{xy}\), \(\mathbf{f}_{xz}\), and \(\mathbf{f}_{yz}\), which are summed up element-wise to obtain \(\mathbf{f}=\mathbf{f}_{xy}+\mathbf{f}_{xz}+\mathbf{f}_{yz}\), \(\mathbf{f}\in\mathbb{R}^{C}\). Finally, we concatenate \(\mathbf{f}\) with a positional encoding (Mildenhall et al., 2020), \(\mathbf{PE}\), of the 3D point \(\mathbf{p}\) and feed it to the MLP, which in turn outputs the field value at \(\mathbf{p}\): \(\hat{\mathbf{q}}=\Phi(\mathbf{p};\theta)=M([\mathbf{f},\mathbf{PE}])\). We implement \(M\) with _sin_ activation functions (Sitzmann et al., 2020) to better capture high-frequency details.
Figure 2: **Left:** Tri-plane representation and learning of each neural field. **Right:** Datasets are composed of many independent tri-plane hybrid neural fields, each representing a 3D object.
**Learning tri-planes.** To learn a field, we optimize a \((T,M)\) pair _for each 3D object_, starting from randomly initialized parameters, \(\theta\), for both \(M\) and \(T\). We sample \(N\) points \(\mathbf{p}_{i}\) and feed them to \(T\) and \(M\) to compute the corresponding field quantities \(\hat{\mathbf{q}}_{i}=\Phi(\mathbf{p}_{i};\theta)\). Then, we optimize \(\theta\) with a loss, \(\mathcal{L}\), capturing the discrepancy between the predicted fields \(\hat{\mathbf{q}}_{i}\) and the ground truth \(\mathbf{y}_{i}\), applying an optional mapping between the output and the available supervision if needed (e.g., volumetric rendering in case of \(RF\)). An overview of this procedure is shown on the left of Fig. 2 and described in detail in Appendix A. We repeat this process for each 3D shape of a dataset, thereby creating a dataset of tri-plane hybrid neural fields (Fig. 2, right). We set \(C\) to 16 and both \(H\) and \(W\) to 32. We use MLPs with three hidden layers, each having 64 neurons. We note that our proposal is independent of the learning procedure, and, in a scenario in which neural fields are a standard 3D data representation, we would already have datasets available.
### Tri-plane analysis
We investigate here the benefits of tri-planes for 3D data representation and neural processing. Firstly, we assess their reconstruction capability, which is crucial in a world where neural fields may be used as a standard way to represent 3D assets. Secondly, we analyze the information learned in the 2D planes and how to be robust to the random initialization problem when handling tri-planes.
**Reconstruction quality.** We assess the tri-plane reconstruction performance by following the benchmark introduced in De Luigi et al. (2023). In Table 1, we present the quantitative outcomes obtained by fitting _SDF_s and _UDF_s from meshes and point clouds of the Manifold40 dataset (Hu et al., 2022). We compare with neural fields employed in inr2vec (De Luigi et al., 2023) and alternatives based on a shared architecture, such as DeepSDF (Park et al., 2019) and Functa (Dupont et al., 2022). Given the _SDF_ and _UDF_ fields learned by each framework, we reconstruct the explicit meshes and point clouds as described in Appendix B.1 and evaluate them against the ground-truths. To conduct this evaluation, we sample dense point clouds of 16,384 points from both the reconstructed and ground-truth shapes. We employ the Chamfer Distance (Fan et al., 2017) and the F-Score (Tatarenko et al., 2019) to evaluate fidelity to ground-truths. As for meshes, the tri-planes representation stands out with the lowest Chamfer Distance (CD) (0.18 mm), indicating its excellent reconstruction quality despite the relatively small number of parameters (only 64K). For point clouds, tri-planes produce reconstructions slightly worse than inr2vec but still comparable, i.e., 0.21m vs 0.24mm CD. In Appendix B.2 (Fig. 10), we show reconstructions attained from tri-plane representations for various types of fields. Moreover, in agreement with the findings of De Luigi et al. (2023), Table 1 shows that shared network frameworks such as DeepSDF and Functa yield significantly worse performance in terms of reconstruction quality. We finally point out how sharing the MLP for all tri-planes is not as effective as learning individual neural fields (third vs second row). These results support our intuition that reconstruction quality mandates hybrid neural fields optimized individually on each data sample and highlight the importance of investigating the direct neural processing of these representations. In Appendix B.3 (Fig. 5, Fig. 6), we show the reconstructions obtained by tri-planes and the other approaches considered in our evaluation.
**Tri-plane content.** To investigate how to directly process tri-plane neural fields, we inspected the content of their discrete spatial structure by visualizing the features stored in a plane alongside the view of the object rendered from the vantage point corresponding to the plane. Examples of these visualizations are depicted in Fig. 3 (left) for various objects such as a car, an airplane, and a bottle. To visualize features as a single image, displayed by a _viridis_ colormap, we take a sum across the feature channels at each spatial location. These visualizations show clearly that the tri-plane spatial structure learns the object shape, i.e., it contains information about its geometry. For this reason
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & & & \multicolumn{3}{c}{**Mesh from _SDF_**} & \multicolumn{2}{c}{**Point Cloud from _UDF_**} \\ \cline{3-6} Method & Type & \# Params (K) & CD (mm) & F-score (\%) & CD (mm) & F-score (\%) \\ \hline inr2vec (De Luigi et al., 2023) & Single & 800 & 0.26 & 69.7 & 0.21 & 65.5 \\ Tri-plane & Single & 64 & 0.18 & 68.6 & 0.24 & 60.7 \\ \hline Tri-plane & Shared & 64 & 1.57 & 42.9 & 3.45 & 33.3 \\ DeepSDF (Park et al., 2019) & Shared & 2400 & 6.6 & 25.1 & 5.6 & 5.7 \\ Functa (Dupont et al., 2022) & Shared & 7091 & 2.85 & 21.3 & 12.8 & 5.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results of mesh and point cloud reconstruction on the Manifold40 test set. “Single” and “Shared” indicate neural fields trained on each shape independently or on the whole dataset.**
and further investigations reported in Appendix D, we conjecture, and demonstrate empirically in subsequent sections, that to tackle tasks such as classification and segmentation we can discard the MLPs and process only the tri-plane structure of the neural fields. Remarkably, the regular grid structure of tri-planes allows us to deploy popular and effective neural architectures, such as CNNs and Transformers. On the contrary, direct ingestion of MLPs for neural processing is problematic and leads to sub-optimal task performance.
**Random initialization.** Furthermore, we investigate the effect of random initialization on tri-plane neural fields. We find out empirically that the main difference between tri-plane structures learnt from different optimizations of the same shape lies in the channel order within a feature plane. Indeed, we conducted experiments where we fit the same 3D shape twice (see Fig. 3 (right)), starting from two random initializations of both the tri-plane structure and the MLP weights. Although the geometric content of the two tri-planes is similar, due to the different initialization, the tri-plane learnt in the first run cannot be used with the MLP obtained in the second (third column of Fig. 3, right side), and vice-versa. However, it is always possible to find a suitable permutation of the channels of the first tri-plane such that the second MLP can correctly decode its features (fourth column of Fig. 3, right side), and vice-versa. We found the right permutation by a brute-force search based on maximizing reconstruction quality. To make the search feasible, we used a smaller number of channels, i.e., \(C=8\) rather than \(C=16\). Still, the experimental results in Section 4.1 support our belief that the main source of variance across randomly initialized tri-plane optimizations of the same shape consists of a permutation of the channel order. Thus, unlike neural fields realized as MLPs, with tri-planes, it is straightforward to counteract the nuisances due to random initialization by adopting standard architectures made invariant to the channel order.
### Architectures for neural processing of tri-plane neural fields
Based on the above analysis, we propose to process tri-planes with Transformers (Vaswani et al., 2017). In particular, we propose to rely on a Transformer encoder without positional encoding, which is equivariant to token positions. By tokenizing tri-planes so that each token represents a channel of a plane, such architecture seamlessly computes representations equivariant to the order of the channels. Specifically, we enroll each channel of size \(H\times W\), to obtain a token of dimension \(HW\) within a sequence of length \(3C\) tokens. These tokens are then linearly projected and fed into the Transformer. The output of the encoder is once again a sequence of \(3C\) tokens.
For global tasks like classification, the output sequence is subsequently subjected to a max pool operator to obtain a global embedding that characterizes the input shape. In our experiments, this embedding is then processed through a stack of fully connected layers to compute the logits. The way the tokens are defined, the absence of positional encoding, and the final max pool operator allow for achieving invariance to the channel order. For dense tasks like part segmentation, instead, we
Figure 3: **Left:** for three different hybrid neural fields (from top to bottom: _SDF_, _UDF_, _RF_) we render a view of the reconstructed 3D object alongside the corresponding tri-plane feature map. **Right:** from left to right, reconstructions of two (tri-plane, MLP) pairs with different initialization, namely \((T_{A},M_{A})\) and \((T_{B},M_{B})\); the mixed pair \((T_{A},M_{B})\); a channel permutation of \(T_{A}\) and \(M_{B}\).
also utilize the decoder part of Transformers. More specifically, we treat the coordinates queries to segment as a sequence of input tokens to the decoder. Each point \(\mathbf{p}\) with coordinates \((x,y,z)\) undergoes positional encoding (Mildenhall et al., 2020) and is then projected to a higher-dimensional space using a linear layer. By leveraging the cross-attention mechanisms within the decoder, each input token representing a query point can globally attend to the most relevant parts of the tri-planes processed by the encoder to produce its logits. Additional details about the architectures, including block diagrams, are reported in Appendix E.3.
## 4 Tasks on Neural fields
### Neural field classification
**Benchmark.** We perform extensive tests to validate our approach. In so doing, we build the first neural field classification benchmark, where we compare all the existing proposals for neural field processing on the task of predicting the category of the objects represented within the field without recreating the explicit signal. Specifically, we test all methods on _UDF_ fields obtained from point clouds of ModelNet40 (Wu et al., 2015), ShapeNet10 (Chang et al., 2015), and ScanNet (Dai et al., 2017); _SDF_ fields learned from meshes of Manifold40 (Hu et al., 2022); _OF_ fields obtained from voxels grids of ShapeNet10. In addition, we provide for the first time classification results on neural radiance fields (_RF_), learned from ShapenetRender (Xu et al., 2019). See Appendix E.1 for more details on the benchmark. Besides a simple MLP baseline, we compare with frameworks designed to process neural fields realized as MLPs, i.e., im2vec (De Luigi et al., 2023), NFT (Zhou et al., 2023), NFT (Zhou et al., 2023), and DWSNet (Navon et al., 2023). These methods process single MLP neural fields, which we implement as SIREN networks (Sitzmann et al., 2020). Differently from De Luigi et al. (2023), the MLPs in our benchmark are _randomly initialized_ to simulate real-world scenarios. Unlike all previous methods, ours processes individual tri-plane neural fields, which are also randomly initialized. Moreover, we compare with frameworks where neural fields are realized by a shared network and a small latent vector or modulation, i.e., DeepSDF (Park et al., 2019) and Functa (Dupont et al., 2022). Whenever possible, we use the official code released by the authors to run the experiments. Note that not all frameworks can be easily extended to all fields. Therefore, we only test each framework in the settings that are compatible with our resources and that do not require fundamental changes to the original implementations (see Appendix E.2 for more details).
**Results.** As we can observe in Table 2, overall, shared architecture frameworks (DeepSDF and Functa) outperform previous methods that directly operate on neural fields represented as a single neural network. However, we point out again that the reconstruction capability of such frameworks is poor, as shown in Section 3.2. Conversely, previous methods that utilize individual neural fields demonstrate superior reconstruction quality but struggle to perform effectively in real-world scenarios where shapes need to be fitted starting from arbitrary initialization points. im2vec makes the assumption of learning all MLPs starting from the same initialization, and it does not work when this initialization schema is not applied. Among the family of methods that adopt layers equivariant and invariant to permutations of the neurons, only DWSNet works on the large MLPs constituting our benchmark, though performance tends to be worse than shared network approaches. Our method delivers the best of both worlds: it ingests tri-planes neural fields, which exhibit excellent reconstruction quality while achieving the best performance overall, often surpassing by a large margin all other methods, including those relying on a shared neural field, e.g., the accuracy on ScanNet10 is 56.4 for Functa vs 69.1 for our method. Hence, we can state confidently that our approach achieves the best trade-off
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & & _UDF_ & _SDF_ & _OF_ & _RF_ \\ \hline Method & Type & Input & ModelNet0 & ShapeNet10 & ScanNet10 & Manifold40 & ShapeNet10 & ShapeNetNet10 \\ \hline DropSDF (Park et al., 2019) & Shared & Latent vector & 41.2 & 76.9 & 51.2 & 64.9 & – & – \\ Functa (Dupont et al., 2022) & Shared & Modulation & **87.3** & 83.4 & 56.4 & 85.9 & 36.3 & – \\ \hline im2vec (De Luigi et al., 2023) & Single & MLP & 10.6 & 42.0 & 40.9 & 13.1 & 38.6 & – \\ MLP & Single & MLP & 3.7 & 28.8 & 36.7 & 4.2 & 29.6 & 22.0 \\ NFN (Zhou et al., 2023) & Single & MLP & 9.0 & 9.0 & 45.3 & 4.1 & 33.8 & 87.0 \\ \hline & & & 6.9 & 6.9 & 45.3 & 4.1 & 33.8 & 85.3 \\ DWSNet (Navon et al., 2023) & Single & MLP & 56.3 & 78.4 & 62.2 & 47.9 & 79.1 & 83.1 \\ Ours & Single & Tri-plane & 87.0 & **94.1** & **69.1** & **86.8** & **91.8** & **92.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Test set accuracy for shape classification across neural fields.** We compare several frameworks capable of processing neural fields.
between classification accuracy and reconstruction quality. Finally, we highlight that our proposal is effective with all the datasets and kinds of fields addressed in the experiments.
**Comparison with explicit representations.** In Table 3, we compare our method against established architectures specifically designed to process explicit representations. For a fair comparison, we reconstruct the explicit data from each field so that exactly the same shapes are used in each experiment. Practically, we reconstruct point clouds, mesh, and voxel grids from _UDF_, _SDF_, and _OF_, respectively. Then, we process them with specialized architectures, i.e., PointNet (Qi et al., 2017) for point clouds, MeshWalker (Lahav and Tal, 2020) for meshes, and Conv3DNet (Maturana and Scherer, 2015) for voxel grids. As for _RF_, we render a multi-view dataset with 36 views for each object. Then, we train 36 per-view ResNet50 (He et al., 2016) so as to ensemble the predictions at test time. We highlight how our proposal, which can classify every neural field with the same standard architecture, almost closes the performance gap with respect to _specialized_ architectures designed to process explicit representations. Noticeably, we show that NeRFs can be classified accurately from the features stored in a tri-plane structure without rendering any images.
**Towards universal neural field classification.** Finally, to the best of our knowledge, we implement for the first time a _universal 3D classifier_, i.e., a model which can be trained and tested with any kind of 3D neural field. Indeed, since the tri-plane structure, as well as the neural processing architecture, are just the same, regardless of the kind of field, we can seamlessly learn a unified model able to classify a variety of fields. For example, we start from the meshes of the Manifold40 dataset and obtain the corresponding point clouds and voxel grids so as to fit three different fields (_SDF_, _UDF_, and _OF_). Accordingly, we build training, validation, and test sets with samples drawn from all three fields. More precisely, if a shape appears in a set represented as an _SDF_, it also appears in that set as a _UDF_ and _OF_. Then, as reported in Table 4, we run classification experiments by training models on each of the individual fields as well as on all three of them jointly. The results show that when a classifier is trained on only one field, it may not generalize well to others. On the other hand, a single model trained jointly on all fields not only works well with test samples coming from each one, but it also outperforms the models trained individually on a single kind of field.
### Neural field 3D part segmentation
We explore here the potential of our method in tackling dense prediction tasks like part segmentation, where the goal is to predict the correct part label for any given 3D point. In Table 5, we compare our method to inr2vec (De Luigi et al., 2023), which was trained on fields generated from random initialization and is the only competitor capable of addressing the part segmentation task. Our experiments were conducted by fitting _UDF_ fields from point clouds of 2048 points from the ShapeNetPart dataset (Yi et al., 2016). As a reference, we present the results obtained using specialized architectures commonly used for point cloud segmentation, like PointNet, PointNet++, and DGCNN. Akin to De Luigi et al. (2023), all models are trained on the point clouds reconstructed from the fitted fields. We observe that our proposal outperforms inr2vec by a large margin, with improvements of 20% and 16.7% for instance and class mIoU, respectively. Moreover, Table 5 demonstrates once again that tri-planes are effective in substantially reducing the performance gap between processing neural fields and explicit representations.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{3}{c}{Train} & \multicolumn{3}{c}{Test} \\ \hline _UDF_ & _SDF_ & _OF_ & _UDF_ & _SDF_ & _OF_ \\ \hline ✓ & & & 78.4 & 84.7 & 15.6 \\ & ✓ & & 86.8 & 67.3 & 11.9 \\ & ✓ & ✓ & 46.9 & 49.3 & 77.7 \\ ✓ & ✓ & ✓ & **87.8** & **87.4** & **80.3** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Universal neural field classifier.** Test set accuracy on Manifold40.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline Method & Input & ModelNet40 & ShapeNet10 & ScanNet10 & Manifold40 & ShapeNet10 & ShapeNetRemder \\ \hline Ours & Ti-plane & 87.0 & 94.1 & 69.1 & 86.8 & 91.8 & 92.6 \\ \hline PointNet (Qi et al., 2017) & Point Cloud & 88.8 & 94.3 & 72.7 & – & – & – \\ MeshWalker (Lahav and Tal, 2020) & Mesh & – & – & – & 90.0 & – & – \\ Conv3DNet (Maturana and Scherer, 2015) & Voxel & – & – & – & – & 92.1 & – \\ ResNet50 (He et al., 2016) & Images & – & – & – & – & – & 94.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison with explicit representations.****Top:** Test set accuracy of our neural field processing method. **Bottom:** standard networks trained and tested on explicit representations.
### Different architectures for tri-plane processing
In Table 6, we compare several plausible alternatives to Transformers for processing tri-planes, which have roughly the same number of parameters and have been trained with the same hyperparameters. As discussed previously, since tri-planes contain an informative and regular discrete data structure and are compact, they can be processed with standard architectures. Hence, we test an MLP, a ResNet50 (He et al., 2016), and two variants of PointNet all with roughly the same parameters. A simple MLP that processes the flattened tri-planes (row 1) severely under-performs with respect to the alternatives, likely due to its inability to capture the spatial structures present in the input as well as its sensitivity to the channel permutation caused by random initializations. A standard CNN like ResNet50, processing tri-planes stacked together and treated as a multi-channel image of resolution \(W\times H\), is instead equipped with the inductive biases needed to effectively process the spatial information contained in the tri-planes (Fig. 3) and already delivers promising performance, although it cannot cope with channel permutations. The two variants of PointNet show the importance of invariance to channel order. In the first variant (row 3), each channel is flattened to create a set of vectors in \(\mathbb{R}^{W\times H}\) with \(3C\) elements, and then the max pool operator is applied to extract a global embedding invariant to the channel order that is fed to a classifier. We observe here better performance across all fields than those attained by CNN. If we instead unroll tri-planes along the channel dimension to create a set of vectors in \(\mathbb{R}^{3C}\) with \(W\times H\) elements (row 4), the results are poor, as this arrangement of the input does not make the network invariant to the channel order but to the spatial position of the features. Finally, we report (row 5) results for the Transformer architecture adopted in this paper, which, similarly to the previous PointNet, is invariant to the channel order thanks to the max-pool operator and yields slightly better performance, probably due to the attention mechanism that better captures inter-channel correlations.
## 5 Concluding remarks and limitations
We have shown that tri-plane hybrid neural fields are particularly amenable to direct neural processing without sacrificing representation quality. Indeed, by feeding only the tri-plane structure into standard architectures, such as Transformers, we achieve better classification and segmentation performance compared to previous frameworks aimed at processing neural fields and dramatically shrink the gap with respect to specialized architectures designed to process 3D data represented explicitly. To validate our intuitions, we propose the first benchmark for neural processing of neural fields, which includes the main kinds of fields used to model the 3D world as well as all the published methods that tackle this very novel research problem. Within our experimental evaluation, we show for the first time that NeRFs can be effectively classified without rendering any images. A major limitation of our work is that tri-plane neural fields are specific to 3D world modeling. Thus, we plan to address other kinds of hybrid neural fields, like those relying on sparse feature sets (Li et al., 2022) as well as other kinds of signals, such as time-varying radiance fields (Sara Fridovich-Keil and Giacomo Meanti et al., 2023). Other research directions deal with processing hybrid neural fields capturing large
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline \multicolumn{5}{c}{} & \multicolumn{3}{c}{**LDDE**} & \multicolumn{3}{c}{**SDE**} & \multicolumn{1}{c}{**DFF**} \\ \hline Method & Input & ModelNet40 & ShapeNet101 & ScanNet101 & Manifold4040 & ShapeNet101 \\ \hline MLP & Tri-plane & 41.6 & 84.2 & 55.8 & 40.2 & 79.1 \\ CNN & Tri-plane & 82.2 & 92.1 & 63.4 & 82.5 & 88.4 \\ PointNet & Tri-plane & 85.8 & 93.4 & **69.3** & 85.6 & 91.5 \\ Spatial PointNet & Tri-plane & 32.3 & 65.4 & 51.3 & 37.0 & 54.7 \\ Transformer & Tri-plane & **87.0** & **94.1** & 69.1 & **86.8** & **91.8** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Ablation study of architectures for tri-plane neural field classification**
\begin{table}
\begin{tabular}{l l c c c c c c c c c c c c c c c c} \hline \hline Method & Input & & & & & & & & & & & & & & & & & & & & & & & & \\ \hline \hline \multicolumn{12}{l}{**\#2C** (Leing et al., 2023)} & MLP & 62.4 & 64.5 & 57.9 & 72.9 & 78.6 & 76.4 & 67.8 & **64.4** & 81.6 & 78.5 & 85.3 & 51.5 & 52.7 & 62.7 & 47.4 & 51.4 & 54.2 \\
scenes featuring multiple objects so as to address tasks like 3D object detection or semantic/instance segmentation.
|
2310.02449 | Impact of geography on the importance of parameters in infectious
disease models | Agent-based models are widely used to predict infectious disease spread. For
these predictions, one needs to understand how each input parameter affects the
result. Here, some parameters may affect the sensitivities of others, requiring
the analysis of higher order coefficients through e.g. Sobol sensitivity
analysis. The geographical structures of real-world regions are distinct in
that they are difficult to reduce to single parameter values, making a unified
sensitivity analysis intractable. Yet analyzing the importance of geographical
structure on the sensitivity of other input parameters is important because a
strong effect would justify the use of models with real-world geographical
representations, as opposed to stylized ones.
Here we perform a grouped Sobol's sensitivity analysis on COVID-19 spread
simulations across a set of three diverse real-world geographical
representations. We study the differences in both results and the sensitivity
of non-geographical parameters across these geographies. By comparing Sobol
indices of parameters across geographies, we find evidence that infection rate
could have more sensitivity in regions where the population is segregated,
while parameters like recovery period of mild cases are more sensitive in
regions with mixed populations. We also show how geographical structure affects
parameter sensitivity changes over time. | Arindam Saha, Maziar Ghorbani, Diana Suleimenova, Anastasia Anagnostou, Derek Groen | 2023-10-03T21:39:12Z | http://arxiv.org/abs/2310.02449v2 | # Impact of geography on the importance of parameters in infectious disease models
###### Abstract
Agent-based models are widely used to predict infectious disease spread. For these predictions, one needs to understand how each input parameter affects the result. Here, some parameters may affect the sensitivities of others, requiring the analysis of higher order coefficients through e.g. Sobol sensitivity analysis. The geographical structures of real-world regions are distinct in that they are difficult to reduce to single parameter values, making a unified sensitivity analysis intractable. Yet analyzing the importance of geographical structure on the sensitivity of other input parameters is important because a strong effect would justify the use of models with real-world geographical representations, as opposed to stylized ones.
Here we perform a grouped Sobol's sensitivity analysis on COVID-19 spread simulations across a set of three diverse real-world geographical representations. We study the differences in both results and the sensitivity of non-geographical parameters across these geographies. By comparing Sobol indices of parameters across geographies, we find evidence that infection rate could have more sensitivity in regions where the population is segregated, while parameters like recovery period of mild cases are more sensitive in regions with mixed populations. We also show how geographical structure affects parameter sensitivity changes over time.
## Introduction
Modeling and prediction of infectious diseases play an important role in mitigating their spread and impact. Many models have been developed in the past to study the spread of infectious diseases such as the 2014 Ebola outbreak in Africa [1, 2], the 2009 H1N1 pandemic [7, 3, 4] and the 2015 Zika outbreak in South America [5]. Needless to say, the COVID-19 pandemic which has severely impacted the entire world has also been the subject of a significantly high number of modeling studies [6, 7, 8] given the severity of the pandemic and increased computational capabilities available now.
Infectious disease models can be broadly classified into two categories: differential equation models and agent-based models [9, 10]. Differential equation models are based on the assumption that the population is well mixed and the disease spreads homogeneously across the population. These models are computationally efficient and can be easily used to study the spread of the disease in a large population in real-time. However, these models are not able to capture the effect of the geographical distribution of the population on the spread of the disease, which is necessary in realistic scenarios. Agent-based models, on the other hand, are based on the assumption that the population is not well-mixed and the disease spreads heterogeneously across the population. Therefore, they simulate the movement and mutual interactions of individual agents in the population. These models are computationally expensive and, without the use of high-performance computers (HPC's), can only be used to study the spread of the disease in a small population in real-time. During the COVID-19 crisis, many differential equation models [11, 12, 13] and agent-based models [7, 14, 15, 16, 17] have been developed and used to study the evolution of the pandemic.
The accuracy of any model depends on the assumptions made and the parameters used in it. Since agent-based models provide a more detailed representation of the population, they usually depend on a larger set of parameters compared to differential equation models. Therefore, it is important to understand how input assumptions and parameters affect the predictions of the model. This can be done by performing sensitivity analysis.
In recent years, sensitivity analysis has become a popular tool for analysis of models due to its importance in a variety of fields. In any model, it is essential to quantify the uncertainties in the predictions it makes. In general, the model inputs are subject to sources of uncertainty, including errors of measurement, absence of information and poor or partial understanding of the driving forces and mechanisms. While such uncertainties in the inputs of a model can mostly be quantified, mathematically computing the uncertainties in the output is mostly impossible due to the complexities of the model. In such circumstances, sensitivity analysis becomes a useful tool to estimate the uncertainties and confidence intervals of the model outputs. Such applications of sensitivity analysis have been used in multiple fields of study [7, 7, 8, 7]. Sensitivity analysis can also be used to assist model development itself as demonstrated by sensitivity-driven simulation development approach [7]. Sensitivity analysis
can be used in combination with validation techniques to identify the most important parameters in the model and iteratively steer the parameters to improve the model. This technique has been recently applied to improve the predictions of a human migration model [7].
Sensitivity analysis is a well-established field of study, and many methods have been developed to perform sensitivity analysis of models [7, 18, 19, 20]. These methods study how the uncertainty in the output of a model can be apportioned to different sources of uncertainty in their inputs. Essentially, sensitivity analysis provides a quantitative tool that can be used to sequentially order the parameters according to their importance in the model output. This can help reduce the computational cost of the model by reducing the number of parameters that need to be estimated. These methods can be broadly classified into two categories: local sensitivity analysis and global sensitivity analysis. Local sensitivity analysis methods are mostly based on partial derivatives of the output variables with respect to the input parameters of the model. They assume that the model is linear at least locally around the point of interest. These methods are computationally efficient and can be used to perform sensitivity analysis of models with a large number of parameters. However, these methods are not able to capture the non-linear effects of the parameters on the model output. Global sensitivity analysis methods, on the other hand, are based on the assumption that the model is non-linear. They generally involve sampling the parameter space and evaluating the model output at each sample point. Then the effect of the parameters on the model output is estimated using statistical methods such as variance decomposition or regression analysis. Given the inherent non-linearities and the number of parameters in agent-based models, global sensitivity analysis methods are more suitable for performing sensitivity analysis of these models. In this study, we use the Sobol method [21] to perform a global sensitivity analysis of the model.
In this paper, we discuss the importance and necessity of sensitivity analysis of agent-based disease models and specifically study to what extent the geographical structure of the regions affects the parameter sensitivities to COVID spread. In other words, does it indeed matter that agent-based models explicitly resolves geographical aspects from maps, or should we use uniform geographies instead, saving ourselves development time and reducing simulation complexity? To this end, we use the Flu and Coronavirus Simulator (FACS) [22] to study the effect of the geographical distribution of the population on the spread of the disease. FACS is an open-source [7], stochastic, spatially explicit and individual-based model that simulates the spread of an infectious disease in a population. The model simulates the spread of the disease by simulating the movement of individuals in the population and the transmission of the disease between individuals. The model has been used to study e.g. the effect of school closures on the spread of the disease [7] and can be combined with a hospital model to support the allocation of intensive care capacity in anticipation of pandemic waves [7].
We use FACS to simulate the spread of the disease in three different real-world regions: Calarasi (Romania), Klaipeda (Lithuania), and Harrow (England). Each of these regions was selected by health authority stakeholders as part of the STAMINA research project [7], and has a different geographical structure in terms of the distribution of buildings and populations. Because sensitivity analysis procedures rely on a very large number of simulation runs, we use the SEAVEA toolkit [7] to automatically deploy and run simulations on high-performance computers and analyze the results obtained. In our case, the simulations were run on the ARCHER2 supercomputer at EPCC in Edinburgh, UK, which has been used to support numerous large research projects around the world [7, 7, 7]. Using these tools, we can automatically obtain the evolution of the disease as predicted by FACS and then compute the Sobol indices for a selected subset of input parameters.
In the Results section, we present the differences in the geographical structure of the regions and describe how they lead to differences in the movement patterns of individuals in the population and in the spread of disease. We also discuss the resulting variation in the spread of the disease in the three regions, present the results of the sensitivity analysis of the model and discuss how the observed differences in Sobol indices are related to differences in the geographical structure, movement of individuals and transmission of the disease in the three regions. In the Discussion section, we present the conclusions of the study and the implications of the results. We also discuss possible future research directions based on the results of this study. Details of the model and sensitivity analysis are provided in the Methods section. This section also details the computational resources used for the study.
## Results
In this section, we present the various aspects of the results obtained by simulating the spread of an infectious disease in the regions of Calarasi, Klaipeda, and Harrow using FACS. Although the disease used for this study is the COVID-19 virus, the results are applicable to any infectious disease. The regions of Calarasi, Klaipeda and Harrow have been chosen because, in spite of having similar populations, they have significantly different geographical structures in terms of the distribution of houses and amenities.
### Geographical structure of the regions
To appreciate further results, we first present the distribution of buildings in Calarasi, Klaipeda, and Harrow (Figure 1) as identified from OpenStreetMap using the method described in the Methods section. Note that the offices in the regions have
not been shown because they are uniformly distributed across the entire region (see the Methods section). The maps make it evident that the houses and other amenities, namely hospitals, parks, leisure centers, schools, supermarkets, and shopping centers, are distributed very differently across the regions.
In Calarasi, there are distinct clusters of houses in the suburbs and a single urban area in the southeast of the region where most of the amenities are located. This urban area has relatively few houses, which implies that most of the population travels to a relatively small urban center to use the amenities.
In Klaipeda, the distribution of houses and amenities is around multiple population hubs. Each of these hubs has houses as well as amenities. The population of each hub would mostly use one of the amenities present within the hub and would rarely interact with populations from other hubs during their visits to the amenities.
In Harrow, the population is very densely distributed with a relatively homogeneous distribution of amenities across the region. This allows for a relatively even mixing of the population as people visit the amenities in the region.
We are aware that the methods employed for extracting the buildings from the regions might not be completely accurate due to inaccuracies in the OpenStreetMap data, its varying accuracy in different parts of the world, as well as the inaccurate choice of classification criteria in our algorithm. However, it is important to note that in this study, we are not interested in the actual prediction of the evolution of a disease in these cities. Instead, we focus on the qualitative difference in the structure of the city and the effect it has on the evolution of an infectious disease. Therefore, we are not concerned about the accurate representation of the regions in particular but the relative distribution of houses and amenities in the region.
### Location graphs
Based on the geographical locations of the buildings, a bipartite location graph of buildings is created for each region. Each house in the region is connected to a single amenity of each type. This leads to each house being connected to seven amenities. The choice of amenities to be connected to each house depends on the physical size of the amenity and its distance to the house (refer to the Methods section for further details). Therefore, while the degree of each house is identical, the mean degree of each amenity depends on its geographical location. Note further that the degree distribution of offices is similar for each region because of the assumed uniform geographical distribution of offices in the regions.
The above-mentioned difference in the physical distribution of houses and amenities is reflected in the structure of the location graphs of Calarasi, Klaipeda and Harrow. Figure 2 shows the mean degree for nodes representing each type of amenity for each region. Since the amenities are fewer and more centrally located, the mean degree for each type of amenity is the highest in Calarasi. This is followed by Klaipeda and Harrow, in order, for all amenities except hospitals and supermarkets. This is a further indication of the difference in the geographical structures of regions. The degree of each amenity represents the number of houses served by the amenity. Since the household size is taken to be the same across regions, the differences in mean degrees also represent the number of people visiting these amenities on average. In the next subsections, we would see the impact of this difference on the spread of the disease in these cities.
### Disease progression
We now present the results of the FACS simulation. To analyze the differences in sensitivity analysis caused by differences in the spatial features of the region, we simulate the progression of the disease from March 1, 2020, for a duration of 400 days in the regions of Calarasi, Klaipeda, and Harrow, keeping all other parameters identical. In particular, the demographics of the regions, properties of the disease, needs of the individual agents, government measures and lockdowns, and vaccination strategies are kept identical across the regions. The details of the parameters of simulations can be seen in the Methods section. Given the stochastic nature of the simulations, an ensemble of 50 simulations was run for each region, and the mean of the results along with their 95% confidence intervals is plotted in Figure 3.
In Figure, we show the number of infectious people and the daily number of new hospitalizations as a function of time. It is evident from the plots that all regions witness two waves of infections during the period of simulation. However, the shape and height of the waves differ across the regions. It is also to be noted that the plot for the number of hospitalizations is similar to the number of infectious people in shape but is scaled down in magnitude. This is expected, as only a fraction of infected people need to be hospitalized.
In the region of Calarasi, a higher level of intermixing among individuals is observed, mainly due to the increased number of visitors that frequent the local amenities. As a result, the peaks in the number of infections and hospitalization are higher and exhibit a sharper intensity compared to other regions. On the contrary, in the areas of Klaipeda and Harrow, the heights of these peaks are due to a lower number of individuals visiting the same amenities.
Another important peculiarity in the progression of the disease in Klaipeda is the distinct shape of the second wave. Around day 250 after the start of the simulation, the number of infectious people, as well as the number of hospitalizations, start increasing gradually. The simulation then reaches an inflection point around day 275, where the rate of increase of infectious people and hospital admissions suddenly increases. Such an inflection point is not found in the other two regions. We analyze the reasons behind this peculiarity in the next subsection.
Figure 1: Maps of (a) Calárasi, (b) Klaipéda, and (c) Harrow. Maps in the top and bottom rows of each column show the location of houses and amenities in each region respectively, as identified from OpenStreetMap. While amenities are individually identified, houses are randomly generated in the housing areas identified from OpenStreetMap. Offices are not shown here as they are assumed to be uniformly generated throughout the region.
Figure 3: Number of infectious people and the number of hospitalizations on each day of simulation for Calárasi, Klaipeda and Harrow. Since the simulations are stochastic in nature, the solid lines in the plots show the average of the 50 simulation results. The 95% confidence intervals are shown using shaded regions. The lockdown measures are kept constant across all runs.
Figure 2: Mean degree of each type of amenities in the three regions. Since amenities are connected to the houses which use the amenity, the degree of an amenity indicates the number of people who visit the amenity. Note that the degree of an amenity is independent of the actual time spent by a person in an amenity, which depends on the type of the amenity and the age of the person.
One of the crucial factors affecting the overall shape of the plots shown in Figure 3 are the lockdown measures taken during the simulation period. These measures are summarized in Table 1. Prior to the start of simulation, the population is randomly seeded with a fixed number of initial infections. Subsequently, based on the movement and interaction of people, the number of infections changes over time. The number of infections that occur each day is plotted in Figure 4. For each day, the number of infections occurring at each location type is shown in different colors.
It is important to note that the timeline presented in Table 1 only shows the dates on which the main policy decisions were made. The FACS simulation used for the results presented here takes into consideration many more minor measures. However, only some of the major measures are shown in the table due to their relevance in the results presented here. A full list of all measures used for the simulations can be found on the GitHub page of the software[2, 7, 2].
Looking at Figures 3 and 4 together with Table 1 gives us a clear idea about how the lockdown measures have affected the evolution of the disease in the regions being studied. Since the first lockdown measures come into effect only 20 days after the start of the simulations, the the number of infections increase rapidly from the start of the simulation to around day 20. This causes the first wave of infections and hospitalizations in the regions. From day 20 to day 53, various measures are introduced so that the interaction of people and the transmission of diseases is restricted. This includes allowing work from home when possible, partial and then complete closure of schools and leisure centers, mask mandates, travel restrictions and emphasis on social distancing.
These measures lead to the end of the first wave in all the three regions by day 75. Thereafter, the lockdown measures are gradually lifted from around day 184. It is on this day that schools start re-opening gradually with a varying but limited fraction of students being allowed to attend. Other restrictions are also gradually lifted which results in a lesser fraction of population working from home and a greater fraction of population intermingling in shopping centers, leisure centers and parks. The effects of lesser restrictions are not directly visible until after day 200 when the number of infectious people and
\begin{table}
\begin{tabular}{|c|c|c|} \hline Date & Days since start of simulation & Lockdown Measures \\ \hline \hline March 01, 2020 & 0 & Start of simulation \\ March 21, 2020 & 20 & First lockdown \\ April 22, 2020 & 52 & Peak of lockdown measures \\ September 01, 2020 & 184 & Schools re-opening \\ November 05, 2020 & 249 & Second lockdown \\ December 02, 2020 & 276 & Restrictions lifted \\ December 23, 2020 & 297 & Christmas bubble \\ January 06, 2021 & 311 & Third lockdown \\ March 08, 2021 & 372 & Restrictions lifting \\ April 04, 2021 & 399 & End of simulation \\ \hline \end{tabular}
\end{table}
Table 1: Summary of lockdown measures implemented during the simulation.
Figure 4: Stacked plot showing the number of infections occurred in each type of amenity on each day in Calárasi, Klaipéda and Harrow. On each day, the number of infections occurred at each type of location is represented by a different color.
hospitalizations start increasing again.
This rise in the number of infections after day 200 gives rise to the second wave of infections and hospitalizations in the three regions. While the heights of the peaks of infection and hospitalization follow the same trend as the first wave, the shape of the second wave are significantly different for the three regions being studied. The second wave is sharp and symmetric around the peak for Calarasi, but for Harrow, the wave is asymmetric with a long tail. As also noted earlier, the second wave in Klaipeda is characterized by an inflection point around day 275 after which the slope of the two curves in Figure 3 increase sharply. We now discuss these differences and the possible reasons behind them.
### Comparing results of the regions
Note that there is a clear distinction among the regions when it comes to the locations at which the infections take place (see Figure 4). While in Calarasi most of the infections occur in the shopping centers, the majority of infections in Klaipeda and Harrow occur in houses and offices. Additionally, in Klaipeda, a significant share of the infections also occur in schools. These differences in the location of infections are a direct result of differences in the structure of the location graphs and the chronology of the lockdown measures.
At the start of the simulation, the number of infections increases with time as people interact with each other. In Calarasi, due to a high degree of connectivity for all the amenities (except offices), more people visit the amenities at the same time. This leads to a quick spread of infections resulting in a sharp and high peak for the first wave. The wave comes down in Calarasi by day 50 mostly because of the lockdown restrictions as well as due to the high levels of immunity developed across the population. The high immunity in the population is perhaps also one of the reason behind the late start of the second wave as compared to the other regions. Although schools and other amenities start opening up gradually, it is not until the restrictions are significantly reduced after day 276 that the second wave starts picking up. As evident from Figure 4, most of the infections in Calarasi happen in shopping centers. This is because of the high connectivity of shopping centers (as seen in Figure 2). It is important to note that although the average degree of hospitals and supermarkets are higher than shopping centers in Calarasi, they do not have a large contribution towards infections. This is because hospitals and supermarkets typically have a larger area than shopping centers. This allows people to spread out more, resulting in a lesser probability of infections. Infections in hospitals are also less due to the less amount of time that people spend in them on average and the additional isolation measures found in them.
In Klaipeda and Harrow, the average degree of an amenity location is lower than that of Calarasi. This leads to less number of people visiting any particular amenity on average and therefore, a lower height of waves. In particular, this leads to a late peak of the first wave. It is also to be noted that due to a lower peak of the first wave, a significantly lower number of people got immunized. This leads to a second wave which is higher than the first wave. It is also interesting that most of the infections in Klaipeda and Harrow occur in houses and offices (see Figure 4). This is due to the significantly larger portions of the day spent in them. At least a small but significant portion of the population continues working in offices throughout parts of the simulation period. This causes infections to spread first in offices and then across houses where they spent the majority of their time.
Schools also play a significant role in spreading infections in Klaipeda due to their high degree in the location graph. This leads to the interesting infection point in the second wave as noted earlier. After the first wave, schools reopen on day 184. This leads to an increase in infections in schools. The infected students from schools then spread the infection to other people in the household. They spread the infection further mainly through the offices. The spread of infections is accelerated when movement restrictions are lifted between days 276 and 297. In particular, shopping centers and supermarkets follow normal opening hours, movement between regions is permitted again and work from home is significantly reduced. This leads to the inflection point noted earlier.
Due to a lower average degree of schools in Harrow, a significantly lower number of infections take place in schools. Therefore, a significant upsurge in the number of infections is not seen for Harrow after the re-opening of schools on day 184. Hence, the onset of the second wave is delayed by approximately 75 days when compared to Klaipeda. Other than that, the overall lower heights of the waves in Harrow can be attributed to the lower average degree of amenities.
### Sobol indices
The FACS simulation depends on a large number of parameters. Location and size of buildings, the age distribution of the population, time spent by each agent in each amenity, restrictions imposed on the population during the lockdown, specific properties of the disease vector, and the efficiency and rates of vaccines administered are some of the broad categories of properties on which the results of the simulation depend. Each of these parameters affects different aspects of the simulation in different ways. To study the impact of some of these parameters, we present the sensitivity analysis of the results presented earlier with respect to some of the simulation parameters.
With the devastation caused around the world by new coronavirus mutants in the COVID-19 pandemic, we chose to analyze in detail the importance of the properties of the disease vector. This would allow us to address the following class of questions: If a new mutant of the disease vector arises, which of its properties would impact the number of hospitalizations the most?
In FACS, the disease vector is characterized by seven scalar properties: (a) infection rate, (b) mortality period, (c) recovery period, (d) mild recovery period, (e) incubation period, (f) period to hospitalization, and (g) immunity duration. Among these parameters, our preliminary analysis showed that the Sobol indices corresponding to the mortality period, recovery period, and period to hospitalization were significantly lower than those corresponding to the other four parameters. Therefore, given the large size of the parameter set, we chose a smaller subset for our detailed analysis. It may be noted that the chosen subset of parameters also overlaps with one used in a small-scale sensitivity analysis in an earlier publication [7].
In Figure 5, we present the results of the sensitivity analysis with respect to four scalar parameters which determine the properties of the disease vector: (a) infection rate, (b) incubation period, (c) mild recovery period and (d) duration of immunity. The plot, for each region, gives the first-order Sobol indices corresponding to each of these parameters as a function of time. The Sobol index is a number from 0 to 1 which quantifies the sensitivity of the simulation result with respect to these parameters. Since the output of FACS is multi-dimensional, we select the number of daily hospitalization to be the output with respect to which the Sobol indices are to be computed.
Although the computation of Sobol indices is a computationally intensive process, its interpretation is relatively simple to explain. If the Sobol index corresponding to a parameter is high, the output of the simulation is more sensitive to that parameter. In other words, relatively small variations in a parameter with high Sobol index will lead to relatively high variation in the output of the simulation and vice versa.
In order to explain the plots observed in Figure 5, let us take a closer look at the parameters with respect to which the Sobol indices are computed.
#### Infection rate
Infection rate is the probability that a susceptible individual gets infected when he/she remains in the proximity of an infected individual for 24 hours. Since it is necessary for an individual to be in the infected state before he/she can get hospitalized, the infection rate of the disease directly impacts the daily number of hospitalizations. However, note that only the susceptible individuals can be infected and not those who are immune. Moreover, infections can only spread if individuals interact with each other. Therefore, the Sobol index of infection rate increases when the number of susceptible individuals is high and the lockdown restrictions allow a sufficient number of interactions between susceptible and infected individuals.
This trend is clearly seen in Figure 5. In the initial periods of simulation, the Sobol index corresponding to infection rate starts high for all three regions. It drops rapidly as the first wave ends and the number of infections comes close to zero. This is due to the increased number of immune individuals as well as the imposition of lockdown restrictions. As minor modifications are regularly made to the lockdown measures between the two waves, the Sobol index corresponding to the infection rate fluctuates around a low value. As the restrictions are gradually lifted and the number of infections gradually increases, the Sobol index also increases. The curve corresponding to the infection rate shows consecutive peaks until around day 300 when the second wave is seen in all regions.
While the overall trend of the Sobol index is the same across regions, there are slight differences between the three regions.
Figure 5: Sobol indices corresponding to four disease parameters as a function of time for Calárasi, Klaipéda and Harrow with respect to the number of hospital admissions on each day.
In Calarasi, the peak Sobol index for infection rate is similar for the first and second waves. However, the peak Sobol index for infection rate for the second wave is smaller than that of the first wave for Klaipeda and Harrow. A lower peak for the second wave in Klaipeda and Harrow implies that the daily number of hospital admissions is also determined by several competing factors other than infection rate. However, in Calarasi, the infection rate is the primary and overwhelming factor determining the number of daily hospitalizations during the second wave. We will now look into this aspect of the mechanism as we analyze the Sobol indices corresponding to other parameters.
#### Incubation period
When a susceptible individual gets exposed to the disease vector, there is a small period of time before the symptoms of the disease manifest themselves. This period of time is called the incubation period. Since the incubation period essentially delays the time when symptoms are perceived and hence the individual is potentially hospitalized, the impact of changes in the incubation period is high if the number of hospitalizations changes in a short interval of time. Hence, in such a period, the Sobol index corresponding to it is high.
In Figure 5, we can clearly see that Sobol indices corresponding to incubation period starts high for all regions because, during the initial days of simulation, the number of infections and hospitalizations rise rapidly. The Sobol indices then start falling until after the end of the first wave in all regions. They stay low during most of the first lockdown period rising only just before the begining of the second wave. The variation in time, duration and shape of the second wave across the regions is captured by the variations in Sobol indices of incubation period. In Calarasi, where the second wave is sharp, the Sobol index shows slight bumps just before and after the start of the wave when the number of infections and hospitalizations changes the fastest. In Harrow, the rise and fall of the Sobol index is much more pronounced and prolonged corresponding to the long-lasting second wave in the region. Interestingly, the Sobol index for Klaipeda captures the peculiar shape of the second wave. The Sobol index rises not only prior to the advent and after the end of the second wave but also during the infection point in the wave around day 300.
It is notable that, while changes in the rates of hospitalization are captured well by the Sobol indices corresponding to them, incubation period by itself does not become a parameter to which the results of the simulation are most sensitive. This might be due to the fact that neither the peak height nor the frequency of the waves of infection and hospitalization can be significantly altered by independently changing the incubation period of the disease. Changing the incubation period can only alter the time when the waves occur.
#### Mild recovery period
Mild recovery period is the average number of days required for recovery if an infected person is showing mild symptoms. An infection is considered mild if the infected person does not need to be hospitalized. In FACS, there are two ways in which an infected person can get hospitalized. After infection, the person might develop severe infection immediately. In this case, he/she is directly admitted to the hospital and is never isolated at home. However, If the symptoms are mild, he/she would stay isolated at home. Until such a person recovers, there is a probability that the disease may worsen and he/she might be hospitalized. It is in this scenario that mild recovery period becomes crucial in affecting the overall number of hospitalizations per day.
The stark differences between the sensitivity of the daily number of hospitalizations on mild recovery period for the three regions can be seen clearly in Figure 5. In Calarasi, where the waves of infection are sharp and high, changes in mild recovery period have a very limited impact on the number of hospitalizations. In Klaipeda and Harrow, where the waves are less sharp and high, the impact of mild recovery period is more pronounced.
This is probably because, when the waves of infection are sharp, people remain infected for a short period of time. Since mild recovery period can affect the daily number of hospitalizations only when the number of infected people is significant, therefore, there is only a short interval of time when it can have a high Sobol index. Additionally, it should be noted that a high peak implies a large outbreak in a specific set of infections in a small interval of time. This results in a large number of infected people becoming hospitalized in a small period of time. This results in a sudden reduction in the number of people who can spread the infection further. Therefore, the impact of mild recovery period would be limited. This explains why mild recovery period has a very limited impact in Calarasi.
In Klapieda and Harrow, the waves of infection are extended over a longer period of time. This allows for more opportunities for interaction between infected and susceptible people resulting in more infected people over time. The mild recovery period parameter impacts the hospitalization probability of these newly infected people. The higher the mild recovery period, the greater the uncertainty about the time at which the person might be hospitalized. This effect is seen most prominently during the second wave in Harrow which takes a long time to subside. This results in a significantly high Sobol index for mild recovery period during the second wave.
#### Immunity Duration
Immunity duration is the average number of days for which a person remains immune from the disease after gaining immunity. This immunity can be gained due as a result of recovery from a recent infection or vaccination. In the simulation results shown
here, vaccination was gradually introduced from day 275. Prior to that, all immunity in the population is induced as a result of prior infections.
Since the duration of immunity directly impacts the number of people who can become infected, it plays a significant role in determining the daily number of hospitalizations in all regions studied. In Calarasi, where a significant proportion of the population was infected during the second wave, the Sobol index corresponding to immunity duration is observed for the longest duration. In Klaipeda and Harrow, the impact of immunity duration is significantly observed only for a shorter and later period of time. This difference can be attributed to the lower number of people infected in these two regions during the first wave.
In Calarasi, a sharp and high first wave results in a large number of people recovering and gaining immunity in a short period of time. Therefore, a change in the duration of immunity between the first and second waves would change the time when a large number of people would become susceptible again. This would change their probability of getting infected and hence getting hospitalized. In Klaipeda and Harrow, the first wave is much wider and shorter. Therefore, the number of people who became immune in the first place is smaller. Moreover, the period of time, when they gain immunity, is spread over a longer period of time. Therefore, the Sobol index in these regions gradually gains importance over time until the advent of the second wave.
After day 275, when gradual vaccination was introduced in the population, the Sobol index corresponding to immunity duration increases for all regions due to vaccine-induced immunity.
## Discussion
Through a detailed study of simulation results of the agent-based model FACS in three geographically distinct regions of Europe, we have demonstrated that geographical structure of regions has a significant impact on the role of each input parameter in the model. The three regions studied in this paper: Calarasi, Klaipeda and Harrow have similar population sizes but very different geographical distributions of houses and amenities. We have shown that this difference leads to differences in population movement patterns, evolution of the disease and sensitivities to disease parameters.
While all three regions have experienced two waves of infection, the timing and intensity of the waves have been different. Calarasi, with a more segregated population, witnessed the sharpest and most intense waves, with a large number of infections and hospitalizations occurring in a short period of time. Klaipeda and Harrow, which have relatively more mixed populations, saw waves which were flatter but lasted for a longer period of time. Some of the intricate details of the shapes and timing of the waves were discussed in detail in the previous sections. We also highlight the relationship between differences in the geographical structure of the regions and differences in the evolution of the disease.
We also analyzed the Sobol indices of a subset of parameters in the FACS model to highlight that the differences in the geographical structure of the regions also impact the sensitivity of the model to different parameters. One of the interesting observations was that, for instance, the recovery period from a mild infection of the disease has a very limited impact on the number of hospitalizations in a region like Calarasi, where infections spread through a limited number of highly connected hubs. However, in regions like Klaipeda and Harrow, where amenities are more evenly distributed, the recovery period from mild infection has a more pronounced impact on the number of hospitalizations.
The results of the sensitivity analysis presented here give us an insight into the impact of the various properties of the disease virus on different types of geographical regions. Given the various mutations that may arise in the genetic structure of the virus, it is important to understand the impact of these mutations on infections and hospitalizations in a region. The results presented here highlight that changes in the same property of the virus may have different impacts on the evolution of the disease in different regions. Therefore, it is important to understand the geographical structure of a region to make accurate predictions about the evolution of the disease in that region.
Due to the large number of parameters in the FACS model, it is not possible to analyze the Sobol indices of all parameters in this paper. However, the results presented in this paper highlight the importance of analyzing the sensitivity of the model to different parameters. This can help identify parameters that have a significant impact on the evolution of the disease in a region. This can also help identify the parameters that need to be estimated more accurately to make better predictions in a region. For example, the impact of various lockdown measures on the evolution of infections and hospitalizations is also an important factor that needs to be analyzed. The effectiveness of these measures will also depend on the geographical structure of the region. In particular, the results presented in this paper show that infections in schools play a significant role in Klaipeda. Therefore, school closure periods are expected to have a greater impact on the evolution of the disease in Klaipeda as compared to Calarasi and Harrow. Similar statements can be made about shopping centres in Calarasi and offices in Harrow. Since lockdowns also have an economic impact, it becomes interesting and important to conduct such sensitivity analysis in each region to mitigate any future pandemics in a more effective manner.
## Methods
At the core of the results discussed in the previous section, are the Flu and Coronavirus Simulator, computation of Sobol indices, and the software and hardware architecture which allows such computationally intensive tasks to be performed. In this section, we present each of these aspects in detail.
### Overview of Flu and Coronavirus Simulator (FACS)
All the results presented in this paper were generated by the Flu and Coronavirus Simulator (FACS) which simulates the propagation of an infectious disease in a given geographical region. FACS views the geographical region as a set of spatially distributed houses and amenities. Amenities are classified into seven categories: (i) shops, (ii) supermarkets (iii) schools, (iv) offices, (v) parks, (vi) leisure, and (vii) hospitals. Each house contains one or more agents (persons). Some of these agents are initially infected by the disease being simulated. FACS simulates each day for all the agents who visit these amenities according to their age and spend some time of the day there. During these visits, the agents might either get infected or infect other people with the virus.
In terms of the disease model, the agents are divided into the following categories: susceptible, exposed, infected, recovered, dead, and immune. Depending on the visits made by the agent during the day, and other factors such as the lockdown measures and vaccination status, the transition probabilities among the categories are computed. For this computation, the age of the agent, the size and type of the locations visited, and the compliance rate to the lockdown measures is also considered. The transition rates can also be modified with the progress of time. This is crucial to model diseases where new variants of the disease vector emerge over time.
The results of a particular run of FACS are defined by six configuration files:
1. **Buildings file:** lists the geographical coordinates of houses and other amenities in the geographical region. For each amenity, its size in terms of the area occupied on the map is also listed.
2. **Demographics file:** describes the age-dependent population of the region
3. **Needs file:** defines the amount of time spent by each agent at each type of amenity location. This depends on the age of the agent.
4. **Disease file:** defines the properties of the disease vector such as its infection rate, the probability to get hospitalized, incubation period, etc.
5. **Measures file:** gives a timeline of the social and governmental measures being taken to mitigate the spread of the disease such as movement restrictions, mask mandates, restricted opening hours of certain amenities, etc.
6. **Vaccinations file:** defines the time-dependent rate of vaccination and vaccine efficiency.
Once all configuration files are provided, FACS can be run using a one-line command on any Linux-based system. Some of the initial conditions for the simulation, some parameters, and the input/output directories are specified as options in the command. The runtime of FACS depends on the population size of the geographical region, parameters and initial conditions.
As a first step, FACS constructs a location graph of the region based on the locations provided in the buildings file. This involves connecting each house of the region with one amenity of each type. This is done by computing a cost-function \(C_{ij}\) for each house \(i\) and amenity \(j\),
\[C_{ij}=\frac{D_{ij}}{\sqrt{S_{j}}} \tag{1}\]
where \(D_{ij}\) is the distance between the house and the amenity, and \(S_{j}\) is the size of the amenity. Then, for each ammenity category, a link is made between house \(i\) and amenity \(k\) such that \(C_{ik}=\min C_{ij}\forall j\).
Thereafter, each house is randomly populated by one or more agents. Each agent is characterized by an age which is sampled in accordance with the demographics file.
Thereafter, FACS simulates the daily routine of each agent from a specified date. The agents, based on their age, try to spend a certain amount of time in a day at each amenity to which their house is connected. Whether they can visit the amenity successfully on a particular day depends on the restrictions applied on that day, as well as their infection status. For example, if the agent is hospitalized, they neglect their needs and spend their entire day at the hospital. During these visits to the amenities, the agents may get exposed to the disease vector via other infected people. They may then pass through the various stages of infection as described earlier.
As its main output, FACS computes timeseries of the various quantities of interests such as the number of susceptible, exposed, infectious people, as well as the number of hospitalizations and deaths. In addition, the simulator produces output files detailing the locations with infections and recoveries that occurred on each day.
_A note on the generation of offices and in FACS_
Offices in FACS represent all types of workplaces in the region. Although the locations of houses and other types of amenities used in FACS are in accordance with their locations in OpenStreetMaps, the office buildings used in FACS simulation are assumed to be uniformly distributed across the regions in consideration. This is primarily because a significant part of the population travels to other neighboring regions for work. Additionally, given the diversity of types of workplaces, it was particularly difficult to create an exhaustive list of tags that might be used to identify buildings as workplaces. With these shortcomings in the available data, runs conducted for preliminary validation studies demonstrated that the model is better validated with offices being uniformly distributed across the region.
### Sobol indices
The sensitivity analysis presented in this paper primarily involves the computation of Sobol indices. We now discuss in brief the computation of these Sobol indices.
Let there be a time-varying function \(y=f(x;p):\mathcal{R}^{n}\rightarrow\mathcal{R}^{n}\) which depends on an \(n-\)dimensional state-variable \(x=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(m\) parameters which can be represented by a parameter vector \(p=\{p_{1},p_{2},\ldots,p_{m}\}\).
Consider \(y_{i}\) a component of the vector \(y\). The value of each of \(y_{i}\), in general, depends on the value of each parameter \(p_{j}\) in the parameter vector \(p\); and varying any parameter would result in a variation in \(y_{i}\).The amount of variation caused in \(y_{i}\) due to a unit variation in \(p_{j}\) determines the sensitivity of \(y_{i}\) to the parameter \(p_{j}\). There are various well-known methods to quantify this sensitivity. In this paper, we have presented the results of variance-based sensitivity analysis introduced first by I. M. Sobol [21].
For a given component \(y_{i}\), the variance-based sensitivity analysis gives indices \(S_{i,j}\) corresponding to each parameter \(p_{j}\). The index \(S_{i,j}\) is known as the first order Sobol index corresponding to parameter \(p_{j}\) and output component \(y_{i}\). It correlates with the sensitivity of \(y_{i}\) with respect to variations in \(p_{j}\) and is defined as
\[S_{i,j}=\frac{V_{i,j}}{V_{i}}=\frac{V\left(E\left(y_{i}|p_{j}\right)\right)}{V \left(y_{i}\right)}, \tag{2}\]
where \(V_{i,j}=V\left(E\left(y_{i}|p_{j}\right)\right)\) is the variance in the expected value of \(y_{i}\) when the parameter \(p_{j}\) is fixed and \(V_{i}=V(y_{i})\) is the variance in the value of \(y_{i}\) when no restrictions are imposed.
Note that the variation in \(y\) may not only be caused by variation in single parameters, but also by variations in a combination of parameters. For example, in the results presented in the paper, if infection rate is varied and all other parameters are kept constant, noting the variation in daily number of hospitalizations gives us the first order Sobol index corresponding to infection rate. Similarly, if immunity duration varied and all other parameters are kept constant, we obtain the first order Sobol index corresponding to immunity duration. However, if infection rate and immunity duration are simultaneously varied, the variation in daily number of hospitalizations can be more than the sum of variations in the previous two cases. The measure of this additional variation in the output variable (daily number of hospitalizations) with respect to variation in more than one parameter gives rise to higher order Sobol indices. Investigations into these higher order Sobol indices can be a topic of future research.
It is clear from equation 2 that the computation of Sobol indices requires variances in the expected values of the output variables. Since the output variables can only be defined implicitly when dealing with complex systems such as the one being studied in this paper, it becomes important to estimate this variance. To this end, various statistical techniques such as polynomial chaos expansion [7], stochastic collocation [7], and quasi-Monte-Carlo method [7]. In this paper, we have used the stochastic collocation method for computing the variance of the output variables and hence the Sobol indices.
Stochastic collocation method essentially samples the parameter space using a number of discrete univariate points. The model is computed at these points in the parameter space. A multivariate interpolant is then constructed using the output of the model at these points. This interpolant is then used as an approximation of the output variable \(y_{i}\) in equation 2. Further mathematical details of the stochastic collocation method can be seen in the literature [7].
### Software and computational facilities used
Given their computational requirements, the FACS simulations used in this paper were performed on ARCHER2 high-performance computers, which have a total of 5,860 nodes with an estimated peak performance of 28 Pflops per second. Since ARCHER2 is used by a large number of researchers around the world, it is necessary to efficiently distribute the simulation between the HPC nodes. This was done using QCG-PilotJob [7]. The sensitivity analysis and computation of the Sobol indices presented in the paper was performed using EasyVVUQ software [7]. Simultaneous integration and use of the above-mentioned tools can become challenging. Therefore, we used FabSim [37] as an interface, which gives access to these tools and HPC using
one-line commands. Figure 6 shows the interaction between the various software components. We will now describe each of these components in some detail.
#### Simulation run time
FACS is an open-source software written in Python, which can be run on any Linux machine. Since the code for FACS can run on multiple cores, the time required for each simulation run mostly depends on the number of agents in the simulation, the number of days to be simulated, and the number of cores being used by FACS. For the results presented in this paper, we have simulated regions with an approximate population of 200,000 people for a simulation period of 400 days. While running on a single core, a single run of such a simulation takes about an hour to run. In this paper, we presented sensitivity analysis results against a set of four parameters which define the disease vectors. As discussed earlier in the Results section, these parameters were selected out of seven possible scalar parameters based on preliminary sensitivity analysis. Taking into consideration the fact that each sensitivity analysis itself comprises an ensemble of 256 runs, a total of 2034 simulations were required to obtain these results. Out of these, results of 768 runs are presented in the paper. The total amount of time required to perform the required number of simulations would have been too large to be practically run on a personal computer. Therefore, we used ARCHER2, which is an HPC hosted at the University of Edinburgh. On ARCHER2, the simulations were run using 128 cores. Therefore, the amount of time required for each simulation run was reduced to under 60 seconds.
#### Computing the Sobol indices
The computation of Sobol indices presented in this paper was handled by EasyVVUQ, a Python package to conduct verification, validation and uncertainty quantification for HPC simulations. One of the primary tools provided by this package is the computation of Sobol indices. Given a simulation which takes one or more input files, EasyVVUQ uses appropriately constructed templates to modify the numerical values present in the input files. The parameters to be varied, region of the parameter space to be scanned, the ensemble size and the sampling method to be used for the sensitivity analysis is described in a separate settings file. The ensemble of jobs is then created and submitted to the HPC. After the simulations have completed, the results are collected and Sobol indices are appropriately computed and stored in a database.
#### Interface with HPC
Since HPCs are generally used only for running the simulations and not for analyzing or plotting the results, running a large ensemble of jobs on the HPC would normally be a cumbersome process which would involve organizing the input and output data, creating the job submission scripts, job submission, transferring the output data back to the local machine and further
Figure 6: Overview of the FACS and FabCovid19 plugin in congestion with the SEAVEA toolkit. Lines represent the interaction between the components, where solid lines represent the default or most common interactions. Dashed lines represent the available alternatives.
post-processing. Performing all these steps manually would not only be time-consuming but also prone to errors. Hence, we use FabSim3, a software written in Python based on the Fabric2 framework which provides an interactive user interface to the HPCs.
FabSim3 essentially automates the above-mentioned workflow by preparing shell scripts and executing a set of commands automatically. For instance, as mentioned earlier, the computation of Sobol indices involves running a large ensemble of jobs which are prepared using the EasyVVUQ package. FabSim3 allows us to prepare the ensembles and submit the jobs using a single command issued on the local machine. A second command can then transfer the output of the simulations back to the local machine, compute the Sobol indices and prepare the plots shown in Figure 5. Hence, the Sobol indices can be computed using two commands in total. FabSim3 has also been used to prepare all other plots in the paper too.
The computation and visualization tools specific to FACS are handled through a FabSim3 plugin called FabCovid19. These plugins use the general-purpose API's offered by FabSim3 for a specific software. Using such plugins, FabSim3 has also been applied to various other computationally intensive simulations in other fields of research [7, 8].
|
2304.09483 | Search for the production of dark fermion candidates in association with
heavy neutral gauge boson decaying to dimuon in proton-proton collisions at
$\sqrt{s} = 8$ TeV using the CMS open data | This analysis shows a search for dark fermion particles produced in
association with a heavy neutral gauge boson (Z$^{\prime}$). The studied events
topology are dimuon and a large missing transverse momentum. %We considered the
muonic decay of Z$^{\prime}$. The analyzed data were the Open Data collected by
the CMS detector in proton-proton collisions at the LHC in 2012 and correspond
to an integrated luminosity of 11.6 fb$^{-1}$ at $\sqrt{s} = $ 8 TeV. One
benchmark scenario the light vector was used for interpreting the data, based
on a simplified model so called the mono-Z$^{\prime}$ model. No evidence of
dark fermion candidates was found, 95$\%$ confidence level limits have been set
on both Z$^{\prime}$ and dark fermion masses. | Y. Mahmoud, H. Abdallah, M. T. Hussein, S. Elgammal | 2023-04-19T08:11:15Z | http://arxiv.org/abs/2304.09483v4 | Search for the production of dark fermion candidates in association with heavy neutral gauge boson decaying to dimuon in proton-proton collisions at \(\sqrt{s}=8\) TeV using the CMS open data
###### Abstract
This analysis shows a search for dark fermion particles produced in association with a heavy neutral gauge boson (Z\({}^{\prime}\)). The studied events topology are dimuon and a large missing transverse momentum. The analyzed data were the Open Data collected by the CMS detector in proton-proton collisions at the LHC in 2012 and correspond to an integrated luminosity of 11.6 fb\({}^{-1}\) at \(\sqrt{s}=8\) TeV. One benchmark scenario the light vector was used for interpreting the data, based on a simplified model so called the mono-Z\({}^{\prime}\) model. No evidence of dark fermion candidates was found, 95% confidence level limits have been set on both Z\({}^{\prime}\) and dark fermion masses.
## I Introduction
Searches for dark matter (DM) at the Large Hadron Collider (LHC) have been one of the main goals of the LHC since it has been started working. Dark matter has been proposed to have the form of non-luminous matter, which can contribute in explaining many astrophysical and cosmological phenomenon [1; 2; 3; 4; 5; 6; 7; 8; 9]. Recent observations, by Planck telescope [10], suggested that it contributes to about 27% of the mass of the universe. Data collected by the LHC at CERN are scrutinized for large missing transverse momentum (\(p\!\!\!/_{T}\)) as a signature of new weakly interacting particles that may be related to dark matter. These searches rely on the production of a visible object "X", which recoils against the large missing transverse momentum from the dark matter particles leaving a signature of \(\mathrm{X}+p\!\!\!/_{T}\) in the detector. The visible particle could be a SM particle like W, Z bosons or jets [11], photon [12] or SM Higgs boson [13]. In our study we present a search for dark fermions (DF) in events with dimuon, with high invariant mass, plus large missing transverse momentum. Similar searches for dark matter in this channel have been performed at the ATLAS and CMS experiments at the LHC with the visible particle being a Z boson decaying to dimuon at \(\sqrt{s}\)= 8 TeV [14] and \(\sqrt{s}=13\) TeV [15]. It is also possible that, the visible particle could be a heavy neutral gauge boson (Z\({}^{\prime}\)) predicted by BSM models [16; 17]. The scenario which we present in this paper is for the possible production of dark matter at the LHC. At which the visible particle is a new neutral gauge boson called Z\({}^{\prime}\) and recoils against dark sector particles, that leave a trace of a large missing transverse momentum \(P\!\!\!/_{T}\) at the Compact Muon Solenoid (CMS) detector [18; 19]. This type of models is known as Mono-Z\({}^{\prime}\) model [17]. The Z\({}^{\prime}\) is neutral and can decay leptonically into a pair of oppositely charged leptons (l\({}^{+}\)l\({}^{-}\)) or hadronically into a pair of quarks leading to dijet, so that it can be detected as a resonance in the dilepton or dijet invariant mass distribution [20; 21; 22; 23]. The hadronic decay of Z\({}^{\prime}\) in the Mono-Z\({}^{\prime}\) model was studied previously by ATLAS collaboration in [24]. In the current analysis, we consider the leptonic decay of Z\({}^{\prime}\) (i.e. Z\({}^{\prime}~{}\rightarrow~{}\mu^{+}\mu^{-}\)), which has not been studied before in the context of the Mono-Z\({}^{\prime}\) model. The data sets used in this study are obtained from the CMS open data project [25], which released data sets from recorded and simulated proton-proton collisions at centre of mass energy (\(\sqrt{s}=8\) TeV). These data sets are available publicly for for all researchers even if they are not members in the CMS collaboration. The open data samples provide a great potential for researchers in high energy particle physics to test many theoretical models available in literature [26].
In the rest of this paper, we will discuss the theoretical model for the production of dark matter at the LHC in section II. A brief description of the CMS detector will be introduced in section III. In section IV we will mention the CMS open data and Monte Carlo (MC) samples, used in the current analysis, from the proton-proton collisions, followed by a discussion of the important SM background processes and how to calculate their contributions in section V. The analysis strategy and the criteria for the event selection are discussed in section VI, while the systematic uncertainties and their effect on the prediction of the backgrounds in section VII. The results and the summary of the search are presented in sections VIII and IX,
respectively.
## II The simplified model
Our target model known as Mono-Z\({}^{\prime}\), discussed in [17], assumes the production of dark matters from proton-proton collisions at the LHC through a new heavy gauge boson Z\({}^{\prime}\). The dark matter production proceeds through one of three different possible scenarios for the production of dark matter in the Mono-Z\({}^{\prime}\) model, two of which are simplified models, dark Higgs (DH) scenario and Light vector (LV) also called dark fermion (DF) scenario. The third model is called light vector with inelastic effective field theory coupling (EFT). The dark fermion scenario is presented in figure 1. The proposed dark fermion can be produced through the process of pair annihilation of two quarks \(q\bar{q}\) mediated by the heavy vector boson Z\({}^{\prime}\), which then undergoes two dark fermions, a light dark fermion (\(\chi_{1}\)) and a heavy one (\(\chi_{2}\)). \(\chi_{2}\) is heavy enough to decay to a Z\({}^{\prime}\) and another light dark fermion \(\chi_{1}\) (i.e. \(\chi_{2}\rightarrow\) Z\({}^{\prime}\)\(\chi_{1}\)) as shown in figure 1.
The interaction term, in the Lagrangian, between the dark fermions and Z\({}^{\prime}\) is given by [17]
\[\frac{\mathsf{g}_{DM}}{2}Z^{\prime}_{\mu}(\bar{\chi}_{2}\gamma^{\mu}\gamma^{5 }\chi_{1}+\bar{\chi}_{1}\gamma^{\mu}\gamma^{5}\chi_{2}),\]
where \(\mathsf{g}_{DM}\) is the coupling of Z\({}^{\prime}\) to the dark fermions \(\chi_{1}\) and \(\chi_{2}\).
There are two assumptions for setting masses in the dark fermion model, which are illustrated for the light dark sector and the heavy dark sector in table 1. For the mass assumptions in case of the heavy dark sector scenario; the heavy dark fermion (\(\chi_{2}\)) should has a mass twice the mass of Z\({}^{\prime}\), while the mass of light dark fermion (\(\chi_{1}\)) is half of the mass of Z\({}^{\prime}\). In the case of the light dark sector case, since the cross section increases with lower \(\chi_{1}\) mass, we include optimistic case with very light \(\chi_{1}=1,5,...,50\) GeV, while \(\chi_{2}\) is a quite heavier than \(\chi_{1}\).
In the rest of this paper, the coupling of Z\({}^{\prime}\) to the SM fermions (quarks and leptons) will be referred as \(\mathsf{g}_{SM}\), and the coupling of it to the DF particles will be denoted by \(\mathsf{g}_{DM}\). The total decay widths of Z\({}^{\prime}\) in the DF case, are calculated regarding the mass values of Z\({}^{\prime}\) and the coupling constants, assuming that Z\({}^{\prime}\) boson can only decay into a pair of muons and assuming that the decays \(Z^{\prime}\rightarrow\chi_{1}\chi_{2}\), \(\chi_{2}\to Z^{\prime}\chi_{1}\) and \(Z^{\prime}\rightarrow\mu\bar{\mu}\) are the only allowed for the DF scenario. In this scenario, there are many free parameters including the mediator mass \(M_{Z^{\prime}}\), the mass of the light dark fermion \(M_{\chi_{1}}\) and the coupling constants (\(\mathsf{g}_{SM}\) and \(\mathsf{g}_{DM}\)). In this analysis, the values of the couplings (\(\mathsf{g}_{SM}=0.1\) and \(\mathsf{g}_{DM}=1.0\)) have been chosen based on the results presented in [17] and [24].
The typical signature of these processes consists of a pair of opposite sign leptons or hadronic jets from the decay of Z\({}^{\prime}\) plus a large missing transverse momentum due to the stable dark fermions \(\chi_{1}\) and \(\chi_{2}\). This scenario was previously studied by the ATLAS collaboration in [24] with the hadronic decay of Z\({}^{\prime}\). In our study, we have considered the muonic decay of the on-shell Z\({}^{\prime}\) since the CMS detector has been optimized to this decay channel (which is a clean channel with respect to SM backgrounds). So that our studied events are with the following topology (\(\mu^{+}\mu^{-}+\not{E}_{T}\)). For the dark fermion scenario, with the use of light dark sector case, table 2 indicates the cross section measurements times branching ratios calculated for different sets of the Z\({}^{\prime}\) and \(\chi_{1}\) masses. The cross section is sensitive to the change in the dark fermion mass. The simulated dark fermion signals, used in this analysis, are private production samples, at which we used the matrix element event generator MadGraph5 aMC@NLO v2.6.7 [27]. We are grateful to Tongyan Lin, one of the authors of [17], for sharing with us the so-called Universal FeynRules Output (UFO) for the Mono-Z\({}^{\prime}\) model. In the rest of this paper, we will consider the light dark sector scenario and neglect the heavy case, since the cross section times branching ratio measurements, given in table 3, for heavy dark sector are much lower than the light case by more than factor 10. Hence this analysis does not have any sensitivity to heavy dark sector scenario.
Figure 1: Feynman diagrams for the mono-Z\({}^{\prime}\) simplified scenario; dark fermion.
Table 3 The dark fermion cross section measurements times branching ratios (in pb) calculated for different sites of the masses \(M_{Z^{\prime}}\), for the heavy dark sector mass assumption, with the following couplings constants \(\mathsf{g}_{SM}=0.1,\ \mathsf{g}_{DM}=1.0\) and at \(\sqrt{s}=8\) TeV.
## III The CMS detector and reconstruction techniques
The Compact Muon Solenoid (CMS) is a 21-m long, 15-m wide and 15-m high general purpose particle detector, located at one of the four crossing points at the LHC. The aim of the CMS detector is to study a broad range of physics, from SM physics like the Higgs boson to BSM physics like dark matter and extra dimensions. The CMS detector is made of five layers containing four sub-detectors and the super conducting solenoid. The inner most layer of the detector is the inner tracker, which is used to measure the momenta of charged particles. The second layer is the Electromagnetic calorimeter (ECAL) which detects and measures the energy of photons and electrons. The third layer is the Hadron Calorimeter (HCAL) which detects and measures the energy of hadrons. The super conducting magnet is fourth layer and it provides a magnetic field of 3.8 T which bends the paths of high energy charged particles allowing to measure their momenta. The outermost layer of the detector is the muon system. The muons system uses three types of detectors: Drift Tubes (DT) in the barrel part of the detector, Cathode Strip Chambers (CSC) in the endcaps and Resistive Plate Champers (RPC) completing both the barrel part and endcaps. The origin of the coordinate system at the CMS is considered to be the interaction point with the z-axis pointing along the beam axis, the y-axis pointing upwards and the x-axis pointing towards the center of the LHC. The Azimuthal angle \(\phi\) is the angle in the transverse (x-y) plane measured from the positive direction of the x axis. The polar angle \(\theta\) is measured from the positive z-axis and is expressed in terms of the pseudo-rapidity (\(\eta\)) where \(\eta=-\mathrm{ln}[\mathrm{tan}(\theta/2)]\). Since our study includes muons and missing transverse energy in the final state, we will mention how they are reconstructed. The muon objects are identified and reconstructed from fitting muon tracks from both the inner tracker and the muon system, hence they are called global muons [28; 29]. The missing transverse momentum is reconstructed according to the particle flow (PF) algorithm described in [19; 30]. The PF algorithm calculates the missing momentum from the imbalance in the vector sum of the momenta in the transverse plane. It can also be defined as the vector sum of the negative PF reconstructed transverse momenta of all the particles \(\vec{\not{p}}_{T}=-\sum\vec{p}_{T}^{\,pf}\)[31]. Many factors can affect the magnitude of the \(\vec{\not{p}}_{T}\) leading to overestimation or underestimation of its true value. These factors include the calorimeter response, as minimum energy thresholds in the calorimeter and \(p_{T}\) thresholds, inefficiencies in the tracker and non-linearity of the response of the calorimeter for hadronic particles. This bias can be effectively reduced by correcting for the \(p_{T}\) of the jets using jet energy corrections as defined in the following formula, which is given in [31]
\[\vec{\not{p}}_{T}^{\,\mathrm{corr}}=\vec{\not{p}}_{T}-\sum_{jets}(\vec{p}_{Tjet }^{\,\mathrm{corr}}-\vec{p}_{Tjet}),\]
where "corr" refers to the corrected values. These variables of particular relevance to the present analysis are
the corrected missing transverse momentum vector \(\vec{\not{p}}_{T}^{\rm\,corr}\) and the magnitude of this quantity, \(\not{p}_{T}^{\rm\,corr}\), which is one of the variables included in the Particle Flow (PF) MET object [32; 33] in the CMS software [34].
## IV Data and Simulated Samples
### Monte Carlo simulation of the model signals
The model signal of the dark fermion scenario events are privately generated using MadGraph5_aMC @NLO v2.6.7 [27], which is a general purposed matrix element event generator. The cross section calculated at next to-leading-order (NLO), the hadronizaton process has been done with Pythia [41]. The NNPDF2.3QED NLO set, which is available via the LHAPDF6 library [36], is used for the parton distribution functions (PDF) [37]. The detector simulation of the read out system response (digitization) and reconstruction processes have been done using the standard CMS open data software framework [34] (the release CMSSW_5\(3\)32) at \(\sqrt{s}=8\) TeV requirements, with the suitable triggers list used for CMS-2012 analysis. The effect of pile-up has been simulated by overlaying MC generated minimum bias events [38]. We scanned the production cross section at different sets of the masses of the particles Z\({}^{\prime}\) and \(\chi_{1}\) as free parameters covering a wide range for the mass of Z\({}^{\prime}\) boson from 150 GeV to 700 GeV and from 1 GeV to 50 GeV for the mass of \(\chi_{1}\), assuming g\({}_{SM}=0.1\) and g\({}_{DM}=1.0\).
### Monte Carlo simulation of the SM backgrounds
In order to simulate the SM processes that have muons and/or missing transverse momentum (due to the undetected neutrinos) at the final state which could interface with our signal events, we used the CMS open Monte Carlo samples at \(\sqrt{s}=8\) TeV as background processes [34]. The Drell-Yan (DY) background (the production of a virtual \(Z/\gamma^{*}\) that decays into a muon pair), has been generated using POWHEGBox v1.0 MC program [39; 40] interfaced to Pythia v.6.4.26 for parton shower model [41]. Another important sources of SM backgrounds with dimuon and missing \(p_{T}\) in the final state are the fully leptonic decay of tt which is generated using MadGraph5_aMC@NLO [42], the production of electroweak diboson channels as WW, WZ have been generated with MadGraph interfaced to Pythia v.6.4.26, and ZZ to four muons process which is also generated with POWHEGBox v1.0. The Monte Carlo samples used in this analysis and their corresponding cross sections, calculated at next-to-leading or next-to-next-to-leading order, are indicated in table 4.
### CMS open data samples
The CMS open data files used in this analysis are based on pp collision at \(\sqrt{s}=8\) TeV during the LHC run-i and recorded by the CMS detector in 2012. We used the two open data runs (run-b and run-c) corresponding to a total integrated luminosity of 11.6 fb\({}^{-1}\)[49]. The data were triggered by the high level trigger HLT_Mu40_eta2p1 which is a single muon trigger. This trigger was unprescaled for the full 2012 data-set and aim to collect events with at least one muon candidate within \(|\eta|<2.1\) and p\({}_{T}>40\) GeV. The efficiency of this trigger varies as a function of \(\eta\), resulting in an efficiency for triggering on a dimuon system that varies between 97% and 100% [50]. The events have been taken from the list of the validated runs (known as the good runs list) for the primary sets of 2012 data provided by the open data project [51], at which all the CMS sub-detectors were working stably. The samples; run numbers, their data sets names and corresponding integrated luminosity (\(\mathcal{L}\)) are listed in table 5.
## V Backgrounds estimation
There are many background processes that include dimuon in the final state plus missing transverse momentum, and can mimic with our events topology in our search for new physics. The first type is the SM processes produced during proton-proton collisions, the second is the jets contamination and the third is the cosmic muons background.
The contribution of the SM background processes, that are considered in the present study, have been estimated from the Monte Carlo simulations, following the same method applied in the previous search for new resonance within the dimoun events at \(\sqrt{s}=8\) TeV [50]. The Monte Carlo sample of the SM backgrounds, which are listed in table 4, are normalized to their corresponding cross sections. The jets background arises from the misidentification of jets as muons, where a jet or multijet pass the muons selection criteria. This kind of backgrounds comes from two processes; W+jet and QCD multijet. The contamination of single and multijet background in data is usually estimated from data using a so called data driven method which is explained in [50].
It has been founded that the QCD and W+jets contributions are very small above 400 GeV at the dimuon invariant mass spectrum, as estimated in [50], with only 3 events could be misidentified as muons for an integrated luminosity of 20.6 fb\({}^{-1}\), hence in our case (luminosity = 11.6 fb\({}^{-1}\)) this contribution is expected to be much lower than 3 events. Apart from the Z peak, mass bin [120 - 400] GeV, the jets misidentification was found to be 147 events which represents about 0.15% of the total SM backgrounds (96800 events) estimated in this mass bin [50], which can have a very tiny effect on our results, thus for these reasons QCD and W+jets backgrounds es
timated from data are negligible in the current study.
The last background source comes from the Cosmic muons that cross the detector layers and pass near the interaction point while the operation process, this background can be suppressed by constraining the vertex position and the impact parameter associated with the reconstructed muon. A cut is applied such that the muon's transverse impact parameter, with respect to the primary vertex, must be less than 0.2 cm. For cosmic muons that pass in-time with a collision event, and pass the vertex position and the impact parameter cuts, the 3D angel between each of the reconstructed dimuons, is restricted to be below \(\pi-0.02\) rad. The mentioned cuts are applied in the identification of muons in the 2012-analysis [54; 55]. After all, it has been founded that the cosmic muons contribution to our background is less than 0.1 events, and can be also neglected [50].
## VI Selection of Event
The aim of this selection is to pick out events containing dimuon in addition to missing transverse momentum. This selection is divided into two steps; the first one is the preselection which is presented in table 6 (top panel), and the second step is the tight selection introduced in table 6 (bottom panel). The detailed definition of these cuts will be explained in this section.
### Preselection of event
The preselection is a manifestation of the high transverse momentum (\(p_{T}\)) muon identification introduced in [54; 55]. It includes cuts related to the trigger requirements (HLT_Mu40_eta2p1), the \(p_{T}\) threshold of this trigger is 40 GeV within the tracked acceptance (\(|\eta|<2.1\)) and the high \(p_{T}\) muon ID, that was applied in 2012 data analysis, used for the search for new physics with events containing dimuon resonance [50]. In addition we apply some kinematics cuts, on each muon, as the reconstructed transverse momentum of the muon (\(p_{T}^{\mu}\)) must be greater than 45 GeV, \(|\eta^{\mu}|<2.1\). Thus the events are selected with two opposite charge high \(p_{T}\) muons, with one of them passed the single muon trigger HLT_Mu40_eta2p1, finally, the invariant mass of the dimuon must be above 80 GeV, since we are looking for a resonance in the high mass regime.
Figure 2 shows the distribution of the dimuon invariant mass; the CMS open data are represented by black dots with error bars (statistical error only), the cyan histogram represents the Drell-Yan background, the grey histogram stands for the vector boson pair backgrounds (WW, WZ and ZZ) and the \(t\bar{t}\) + jets background is rep
resented by the red histogram. These histograms are stacked, while the signal of dark fermion scenario generated with different masses of the neutral gauge boson \(\mathrm{Z^{\prime}}\) and \(M_{\chi_{1}}=1\) GeV are represented by different colored lines, and are overlaid. The total systematic uncertainty (explained in section VII) is illustrated in the ratio plot. The corresponding distribution of the missing transverse momentum is presented in figure 3. These figures show good agreement between the data points and the simulated SM backgrounds within the statistical error (demonstrated by the error bars on the data points) and within the systematic uncertainty (demonstrated by hatched region in the ratio plots). As the signal samples are overwhelmed by the backgrounds, it is necessary to apply a more tighter set of cuts to discriminate signals from SM backgrounds as will be explained in the next section.
### Event selection
The final event selection is a combination of the preselction cuts introduced in table 6 (top panel) and extra tighter cuts, presented in table 6 (bottom panel). These tight cuts are based on four variables: the first variable is related to the invariant mass of the dimuon, at which we restricted the invariant mass of the dimuon to a small range around the mass of the \(\mathrm{Z^{\prime}}\), such that \((0.9\times M_{Z^{\prime}})<M_{\mu^{+}\mu^{-}}<(M_{Z^{\prime}}+25)\). The second is \(\Delta\phi_{\mu^{+}\mu^{-},\not{\overline{p}}_{T}^{\mathrm{corr}}}\), which is defined as difference in the azimuth angle between the dimuon direction and the missing transverse momentum direction (i.e. \(\Delta\phi_{\mu^{+}\mu^{-},\not{\overline{p}}_{T}^{\mathrm{corr}}}=|\phi^{ \mu^{+}\mu^{-}}-~{}\phi^{miss}|\) ), it has been selected to be greater than 2.6 rad. The third one is the relative difference between the \(p_{T}\) of dimuon and the missing transverse momentum (\(|p_{T}^{\mu^{+}\mu^{-}}-\not{\overline{p}}_{T}^{\mathrm{corr}}|/|p_{T}^{\mu^ {+}\mu^{-}}\)), it has been selected to be less than 0.6 which is an optimized cut to the signals region. Finally, we apply a requirement on the distance between the two muons in the (\(\eta\), \(\phi\)) plane, \(\Delta R<3\), where \(\Delta R=\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}\). These tight cuts have been applied in order to strongly decrease the SM backgrounds in addition to W+jets and the QCD multijet contributions.
## VII Systematic uncertainties
A variety of sources of systematic uncertainties have been considered while interpreting the results. Some sources originate from experimental issues, other sources are theoretical and related to the uncertainty in the Parton Distribution Functions PDF, that were used during the production process of the SM samples. The different sources of the systematic uncertainties, considered in the presented results, are listed in table 7.
There is 3% uncertainty related to the detector acceptance and the reconstruction efficiency [50]. The uncertainty in the evaluation of the integrated luminosity of the 2012-data, that are recorded by the CMS, was estimated to be 2.6% [58]. The uncertainty in the transverse momentum resolution was 5%. Another 5% uncer
Figure 3: The distribution of the missing transverse momentum, after the preselection (listed in table 6); for the CMS data, the expected SM backgrounds, and for the dark fermion scenario with \(M_{Z^{\prime}}=300\) GeV and \(M_{\chi_{1}}=1\) GeV. The lower band shows the data-to-simulation ratio with an illustration of the total uncertainty in the estimation of the background (shaded region).
Figure 2: The measured dimuon invariant mass spectrum, after applying preselection cuts listed in table 6, together with the estimated SM backgrounds and \(\mathrm{Z^{\prime}}\) masses produced by dark fermion scenario, with \(M_{\chi_{1}}=1\) GeV. The total systematic uncertainty in the overall background is shown as a shaded region. The data-to-simulation ratio is shown in the lower panel.
tainty on the transverse momentum scale per TeV, due to misalignment in the geometry of the CMS detector [50]. The uncertainty in the energy scale for particles with low energies (unclustered energy) is 10%, in addition to 2-10% and 6-15% uncertainties related to the jet energy scale and jet energy resolution respectively, these uncertainties have a direct impact on the measurement of the missing transverse momentum. The PDF choice in the production process of the DY sample, introduces an uncertainty which can be expressed in terms of the invariant mass as given in [50]. A 4.5% uncertainty related to the PDF choice of the DY process, has been estimated in the present analysis. 5% uncertainty related to the PDF for the WW process and 6% for the WZ process, are also included.
## VIII Results
For the dimuon channel, a shape-based analysis is employed. The missing transverse momentum distributions (\(\not{p}_{T}^{\text{corr}}\)) are used as the discriminating variable since the signal process characterized with relatively large \(\not{p}_{T}^{\text{corr}}\) values compared to the SM backgrounds. The distribution of the missing transverse momentum, after the application of the final event selection, is illustrated in figure 4. The observed data agrees well with the simulated backgrounds within the statistical and systematic uncertainties. The event yields passing the analysis final selection, which are summarized in VIII, for each of the SM backgrounds, the DF model (with \(M_{Z^{\prime}}=300\) GeV, \(M_{\chi_{1}}=1\) GeV) and the CMS open data; corresponding to an integrated luminosity of 11.6 fb\({}^{-1}\) are presented in table 8. Uncertainties include both statistical and systematic components, summed in quadrature.
In order to make a statistical interpretation for our results, we preformed asymptotic frequentist (CL) [59] with the profile likelihood-ratio test statistic [60] to de
\begin{table}
\begin{tabular}{|c|c|c|} \hline Step & Variable & Requirements \\ \hline \hline \multirow{4}{*}{Preselection} & Trigger & HLT\_Mu40.eta42p1 \\ & High \(p_{T}\) muon ID & [54, 55] \\ & \(p_{T}^{\mu}\) (GeV) & \(>45\) \\ & \(\eta^{\mu}\) (rad) & \(<2.1\) \\ & \(M_{\mu^{+}\mu^{-}}\) (GeV) & \(>80\) \\ \hline \multirow{4}{*}{Tight selection} & Mass window (GeV) & \((0.9\times M_{Z^{\prime}})<M_{\mu^{+}\mu^{-}}<(M_{Z^{\prime}}+25)\) \\ & \(|p_{T}^{\mu^{+}\mu^{-}}-\not{p}_{T}^{\text{corr}}|/p_{T}^{\mu^{+}\mu^{-}}\) & \(<0.6\) \\ \cline{1-1} & \(\Delta\phi_{\mu^{+}\mu^{-}}-\not{p}_{T}^{\text{corr}}\) (rad) & \(>2.6\) \\ \cline{1-1} & \(\Delta R_{\mu^{+}\mu^{-}}\) (rad) & \(<3\) \\ \hline \end{tabular}
\end{table}
Table 6: Summary of cut-based final event selection used in the analysis.
\begin{table}
\begin{tabular}{l c} \hline \hline feature & Uncertainty (\%) \\ \hline Luminosity (\(\mathcal{L}\)) & 2.6 [58] \\ \(A\times\epsilon\) & 3 [50] \\ \(p_{T}\) resolution & 5 [50] \\ \(p_{T}\) scale & 5 [50] \\ Unclustered \(\not{p}_{T}^{\text{corr}}\) scale & 10 [31] \\ Jet energy scale & 2-10 [31] \\ Jet energy resolution & 6-15 [31] \\ PDF (Drell-Yan) & 4.5 [50] \\ PDF (ZZ) & 5 [14] \\ PDF (WZ) & 6 [14] \\ \hline \end{tabular}
\end{table}
Table 7: Sources of systematic uncertainties considered in the presented analysis, and their values in percentage.
Figure 4: The distribution of the missing transverse momentum, after final analysis selection cuts listed in table 6, for the expected background and observed events in data in the \(\text{Z}^{\prime}\rightarrow\mu^{+}\mu^{-}\) channel. One signal benchmark, corresponding to the dark fermion scenario with \(M_{Z^{\prime}}=300\) GeV is superimposed. The signal is normalized to the product of cross section and \(\beta\), where \(\beta\) represents the \(\text{Z}^{\prime}\rightarrow\mu^{+}\mu^{-}\) branching fraction. The systematic uncertainties, summarized in table 7, are shown by the hatched band. The ratios of the data and the sum of all the SM backgrounds are shown in the bottom panel.
rive exclusion limits on the product of signal cross sections and branching fraction \(\mathrm{Br}(Z^{\prime}\rightarrow\mu\mu)\) at 95% confidence level. These limits are performed separately for the dark fermion signal hypotheses. The \(\pm 1\) and \(\pm 2\) standard deviations, which are shown in the expected limit, are obtained from pseudo-experiments with the background-only hypothesis, at which the nuisance parameters have been randomly varied within the post-fit constraints of the maximum likelihood fit to data. The cross section times the branching ratio \(\mathrm{Br}(Z^{\prime}\rightarrow\mu\mu)\) limits for the simplified model (DF) is shown in figure 5, with the light dark sector set of masses, the muonic decay of the \(\mathrm{Z^{\prime}}\) and coupling constant values of \(\mathrm{g}_{SM}=0.1\) and \(\mathrm{g}_{DM}=1.0\). The blue solid line represents the dark fermion scenario at a fixed dark fermion mass (\(M_{\chi_{1}}=1\) GeV). It is clearly shown that the MC simulations of the SM backgrounds are in a good agreement with the CMS open data within \(\pm 2\sigma\), such that no significant deviation from the SM has been observed in any of the studied mass points. Based on figure 5, we exclude \(Z^{\prime}\) production in the mass range between 238 - 524 GeV from the observed data and between 247 - 510 GeV from expected median. For the dark fermion scenario, the cross section times the branching ratio limit is presented in figure 6 as a function of the mediator's masses \(M_{Z^{\prime}}\) and the masses of the light dark fermion \(M_{\chi_{1}}\). The observed exclusion is limited to a narrow region where \(M_{\chi_{1}}\) is less than 25 GeV.
## IX Summary
A search for dark fermion particles, based on Mono-\(\mathrm{Z^{\prime}}\) model, produced in association with a heavy neutral gauge boson \(\mathrm{Z^{\prime}}\), using set of samples from proton-proton collisions released by CMS open data project corresponding to an integrated luminosity of 11.6 fb\({}^{-1}\) during run 1, has been performed. Results from muonic decay mode of \(\mathrm{Z^{\prime}}\) are discussed, along with its statistical and systematic combination, which is presented for the first time. No sig
Figure 5: 95% CL upper limits on the cross section times the branching ratio (expected and observed), as a function of the mediator’s mass (\(M_{Z^{\prime}}\)), regarding the DF scenario, with the light dark sector mass assumption and the muonic decay of the \(\mathrm{Z^{\prime}}\). The blue line represents the dark fermion scenario with \(M_{\chi_{1}}=1\) GeV.
Figure 6: The 95% CL upper limits on the product of the cross section and branching fraction from the inclusive search, for variations of pairs of the signal model parameters (\(M_{Z^{\prime}}\) and \(M_{\chi_{1}}\)). The filled region indicates the observed upper limit. The solid black curve indicates the observed exclusions for the nominal \(\mathrm{Z^{\prime}}\) cross section, while the dotted black curve indicates the expected exclusions.
\begin{table}
\begin{tabular}{|l|c|} \hline Process & No. of events \\ \hline \hline \(\mathrm{DY}\rightarrow\mu^{+}\mu^{-}\) & \(30.9\pm 8.3\) \\ \hline \(\mathrm{tt}+\mathrm{jets}\) & \(28.3\pm 6.8\) \\ \hline \(\mathrm{WW}+\mathrm{jets}\) & \(7.3\pm 1.8\) \\ \hline \(\mathrm{WZ}+\mathrm{jets}\) & \(0.7\pm 0.2\) \\ \hline \(\mathrm{ZZ}+4\mu\) & \(0.04\pm 0.01\) \\ \hline Sum Bkgs & \(67.2\pm 16.2\) \\ \hline \hline Data & 61 \\ \hline \hline DF signal & 36.3 \(\pm 8.8\) \\ (at \(M_{Z^{\prime}}\) = 300 GeV) & \\ \hline \end{tabular}
\end{table}
Table 8: The number of events satisfying the criteria of the events selection are illustrated for each SM background, the CMS open data corresponding to a 11.6 fb\({}^{-1}\) integrated luminosity and the DF model signal with coupling constants \(g_{DM}\,=\,1.0\), \(g_{SM}\,=\,0.1\) and \(M_{\chi_{1}}=1\) GeV. The total uncertainty, including the statistical and systematic components, is indicated.
nificant deviation from the standard model prediction has been seen. The 95% CL upper limits on the cross section times the branching ratio (expected and observed), based on the mono-\(Z^{\prime}\) model for the dark fermion scenario, were set. These limits constitute the most stringent limits on the parameters (\(M_{Z^{\prime}}\) and \(M_{\chi_{1}}\)) of this model to date. For the dark fermion scenario with a light dark sector mass assumption a small region, where \(M_{\chi_{1}}\) is less than 25 GeV, is excluded. For \(M_{\chi_{1}}\) = 1 GeV, the corresponding excluded range of \(M_{Z^{\prime}}\) is 238 - 524 GeV from the observed data and between 247 - 510 GeV from expected median.
###### Acknowledgements.
Y. Mahmoud wish to acknowledge the support of the Center for Theoretical Physics (CTP) at the British University in Egypt (BUE) for the financial support to this work. The authors of this paper would like to thank Tongyan Lin, one of the author of [17], for her useful discussions about the theoretical models, crosschecking of the results and for sharing with us the different scenarios Madgraph cards that were used for the events generation.
|
2307.10482 | Design of CLARI: A miniature modular origami passive shape-morphing
robot | Miniature robots provide unprecedented access to confined environments and
show promising potential for novel applications such as search-and-rescue and
high-value asset inspection. The capability of body deformation further
enhances the reachability of these small robots in complex cluttered terrains
similar to those of insects and soft arthropods. Motivated by this concept, we
present CLARI, an insect-scale 2.59g quadrupedal robot capable of body
deformation with tethered electrical connections for power and control and
manufactured using laminate fabrication and assembled using origami pop-up
techniques. In order to enable locomotion in multiple shape configurations, we
designed a novel body architecture comprising of modular, actuated leg
mechanisms. Overall, CLARI has eight independently actuated degrees of freedom
(two per modular leg unit) driven by custom piezoelectric actuators, making it
mechanically dextrous. We characterize open-loop robot locomotion at multiple
stride frequencies (1-10Hz) using multiple gaits (trot, walk, etc.) in three
different fixed body shapes (long, symmetric, wide) and illustrate the robot's
capabilities. Finally, we demonstrate preliminary results of CLARI locomoting
with a compliant body in open terrain and through a laterally constrained gap,
a novel capability for legged robots. Our results represent the first step
towards achieving effective cluttered terrain navigation with adaptable
compliant robots in real-world environments. | Heiko Kabutz, Kaushik Jayaram | 2023-07-19T22:26:31Z | http://arxiv.org/abs/2307.10482v1 | # Design of CLARI: A miniature modular origami passive shape-morphing robot
###### Abstract
Miniature robots provide unprecedented access to confined environments and show promising potential for novel applications such as search-and-rescue and high-value asset inspection. The capability of body deformation further enhances the reachability of these small robots in complex cluttered terrains similar to those of insects and soft arthropods. Motivated by this concept, we present CLARI, an insect-scale 2.59g quadrupedal robot capable of body deformation with tethered electrical connections for power and control and manufactured using laminate fabrication and assembled using origami pop-up techniques. In order to enable locomotion in multiple shape configurations, we designed a novel body architecture comprising of modular, actuated leg mechanisms. Overall, CLARI has eight independently actuated degrees of freedom (two per modular leg unit) driven by custom piezoelectric actuators, making it mechanically dextrous. We characterize open-loop robot locomotion at multiple stride frequencies (1-10 Hz) using multiple gaits (trot, walk, etc.) in three different fixed body shapes (long, symmetric, wide) and illustrate the robot's capabilities. Finally, we demonstrate preliminary results of CLARI locomoting with a compliant body in open terrain and through a laterally constrained gap, a novel capability for legged robots. Our results represent the first step towards achieving effective cluttered terrain navigation with adaptable compliant robots in real-world environments.
adaptable, modular systems, embodied intelligence, shape changing, insect-scale, confined terrain, legged soft robots
## 1 Motivation
Today's robots are rapidly becoming highly capable due to tremendous recent innovations in design, fabrication, and control [1]. These mobile systems are increasingly successful when deployed in complex natural environments characterized by cluttered obstacles and confined spaces [2, 3, 4, 5, 6, 7, 8, 9]. In particular, miniature robots [10, 11, 12, 13, 14] provide unprecedented access to confined environments due to their small size and show promising potential for novel applications such as search-and-rescue [15] and high-value asset inspection [16].
An analysis of the successful robots mentioned above highlights the importance of body geometry and mechanics as embodied intelligent solutions for achieving robust locomotion in complex natural environments. For example, robots that successfully navigate cluttered terrains without the need for sensing or active control leverage the tuned mechanics of their robust bodies to dissipate collisions with obstacles in their environment [17, 18, 19]. These behaviors are often accompanied by changes to body orientation which are governed by the geometry [20, 21, 22, 23] and mechanics [18, 24, 25, 26] of the robot's body. An alternate class of robots that successfully navigate confined terrains leverage the bioinspired strategy of exploiting the natural compliance of the appendages and bodies enabled by articulated geometries [11, 27] or soft materials [28, 29, 30] to passively conform to environmental constraints. These studies have led to the resurgence of soft robotics as a leading paradigm for robot design in the last decade [31, 32, 33, 34, 35].
Despite this growing evidence, the majority of robots, across size scales, still maintain fixed body shapes (typically cuboidal, see Figure 1c, [10, 13, 14, 16, 36, 37, 38, 39]) and are therefore unable to exploit the benefits of shape adaptation for complex terrain locomotion. One reason for this, especially on a smaller scale, is the increasing difficulty of design and fabrication associated with miniaturization [40]. Another reason is the enhanced actuation and control efforts associated with a higher number of articulated degrees of freedom in the body [41].
## 1 Introduction
Figure 1: (a) CLARI — Compliant Legged Articulated Robotic Insect, a miniature robot, featured next to an Oklahoma brown tarantula (\(\approx\)30 mm body length) commonly found in Colorado. (b) CLARI’s modular and compliant body allows it to vary body shapes and operate in multiple configurations. (c) Some of the most successful legged robots, ranging from mm to m sizes, all share a cuboidal body shape typically except for (IV) CLARI; (I) Micro Robot [42], (II) HAMR-Jr [14], (III) HAMR-VI [43], (IV) CLARI (this work), (V) DASH [10], (VI) RHex [2], (VII) MIT Cheetah [44], (VIII) ANYmal [45].
As an initial step towards addressing these challenges, we present CLARI -- Compliant Legged Articulated Robotic Insect (Figure 1a) -- an insect scale quadrupedal soft tethered robot, the first in a series of articulated exoskeletal robots [46] potentially capable of body shape adaptation (Figure 1b). To motivate the design of CLARI, we specifically chose laterally confined environments such as gaps in between rocks, tunnels, or blades of stiff grass commonly found in natural terrains as our choice of complex terrain. Unlike typical soft robots, which take advantage of material properties as the preferred solution to achieve body deformation, CLARI relies on an articulated morphology for lateral body compliance and shape change. This design of CLARI builds on our previous work with the robot CRAM (Compliant Robot with Articulated Mechanisms) which demonstrated cockroach-inspired dorsoventral body compliance during vertically confined space legged crawling [11]. Our choice of articulated morphology for the robot body enables us to combine the rapid, autonomous, multi-gait locomotion capabilities of articulated laminate robots with the passive compliance-based embodied physical intelligence of soft material systems [47, 48, 49, 50]. Furthermore, we choose origami-based design [51], laminate fabrication [52] and the pop-up assembly process [53] for CLARI as it offers an easy methodology of tuning geometry-based compliance (by varying flexure geometry) in addition to varying material properties (by changing individual layers) as needed within the laminate composite stacks of these robots at a variety of scales [14].
In this paper, we first describe the mechanical design principles of CLARI towards enabling cluttered terrain locomotion in Section 2. We follow this up by detailing the laminate fabrication technology used to manufacture the actuators, transmission linkages, and body of CLARI in Section 3. We also describe the experimental procedures to characterize various robot subsystems and present our findings. Finally, we demonstrate unconfined robot locomotion in three fixed and one compliant body shape configurations across various frequencies and gaits and compare their performance in Section 4. Towards highlighting CLARI's potential as a soft robot capable of cluttered terrain navigation, we present initial evidence demonstrating it moving through a laterally varying gap utilizing body compliance. We conclude by summarizing our results and contrasting them with other state-of-the-art examples in the literature and outline the vision for future work with CLARI related to shape adaptation-enabled autonomous cluttered locomotion in Section 5.
## 2 Design of CLARI: Robot with Embodied Physical Intelligence
In this paper, we introduce CLARI-1.0 (Figure 2), a 2.59g quadrupedal robot with eight independently actuated degrees of freedom (two per leg) driven by custom fabricated piezoelectric actuators and electrically tethered for power and control. As a starting point for the design of CLARI, we leverage the Harvard Ambulatory MicroRobot (HAMR) series of insect-scale robots [13, 14, 43, 54]. To facilitate body adaptation, we introduce articulated body mechanisms in CLARI allowing the robot to be laterally compliant. While the actuators and transmission mechanisms in CLARI might seem similar to those in HAMR, we present a number of design and fabrication innovations in this work that are necessary to realize our robot. First, we introduce a modular single-leg design in CLARI to enable locomotion in multiple body shape configurations. This leg module integrates the necessary actuation, transmission, and power delivery mechanisms into a single unit, resulting in a decentralized (i.e., local) control architecture. Second, we align the actuators vertically within a CLARI leg module in order to achieve a compact leg design, unlike in HAMR, where they are horizontally oriented. This required a redesign of the spherical five-bar transmission mechanism, which further differentiates it from HAMR. Finally, we implement the stream-lined piezoelectric actuator fabrication process pioneered in Jafferis _et al._[55], a significantly improved and simplified workflow, which enables CLARI to be the first platform featuring such actuators. Each of these innovations is discussed in detail in the following sections.
### Compliant Body Design with Multiple Shape Configurations
Multiple body shape configurations of CLARI are potentially achievable due to body compliance enabled by the modular leg assembly design. Each leg module is designed and fabricated as an individ
Figure 2: (a) A perspective view of CLARI highlighting the main features. (b) The three primary robot body shape configurations based on aspect ratios. The arrows show the ground reaction force vectors (red) with projections in the lateral (green) and propulsion (cyan) directions.(c) Artistic rendering of CLARI demonstrating laterally confined terrain navigation through a cluttered terrain leveraging the ability to adapt its body shape.
ual unit and interconnected through single degree-of-freedom flexure joints to form a closed kinematic chain, i.e., a compliant body. For the CLARI system, four individual leg modules are used, resulting in a rhomboid-shaped robot with variable aspect ratios (i.e., ratio of body length, \(L\), to body width, _W_). Based on external constraints, the robot can passively deform and adapt to its environment (Figures 2b and 2c). The stiffness between the individual leg modules can be tuned depending on the environment and ground surface roughness, independently or altogether. In this paper, all intermodule joints are treated equivalently, resulting in a symmetric design for the first version of CLARI.
To characterize the effect of body shape on locomotion, we mechanically fixed the robot into three broad classes of body shape configurations -- _long_ (\(\frac{L}{W}\,>\,1\)), _square_ (\(\frac{L}{W}\,=\,1\)) and _wide_ (\(\frac{L}{W}\,<\,1\)) -- based on their aspect ratios (Figure 2b). In the _long_ configuration, the robot's legs are oriented favorably with respect to the body i.e., they swing primarily backward in order to propel the robot forward (see arrows in Figure 2b; the ground reaction force vector in red projects primarily onto the propulsion direction in blue and minimally onto the lateral direction in green). Therefore, we expect the highest locomotion performance (i.e., speed) in this configuration. The largest possible aspect ratio combination for the current version of CLARI is \(\frac{L}{W}\,=\,2.1\) with a \(44\,\mathrm{mm}\) body length and \(21\,\mathrm{mm}\) body width. In contrast, in the _wide_ configuration, CLARI's legs swing primarily laterally with respect to the body and orthogonal to the direction of forward locomotion (see the arrows in Figure 2b). As a result of this unfavorable leg positioning, we expect the robot to demonstrate low locomotion performance in this configuration. The smallest possible aspect ratio combination for the current version of CLARI is \(\frac{L}{W}\,=\,0.48\). Finally, in the _square_ configuration _L=W_=\(34\,\mathrm{mm}\) (\(\frac{L}{W}\,=\,1\)), CLARI is symmetric and we expect it to move with equal ease in both forward and lateral directions and thus potentially be capable of orthogonal locomotion. However, since the legs are oriented at \(45^{\circ}\) to either direction of locomotion, we expect forward performance to be inferior to the long configuration but superior to the wide configuration.
### Modular Leg Design
A single-leg module in CLARI includes two piezoelectric actuators, a spherical five-bar transmission mechanism to couple the degrees of freedom of elevation-depression, adduction-abduction, and protraction-retraction of the leg, and a printed circuit board (power-PCB) capable of housing power and sensing electronics (Figure 3). However, for all the experiments in this work, the power-PCB is used simply as an assembly component, which couples the actuators to the transmission on the robot body and relays power and control signals from an external computer via thin tether wires. The power-pcb boards together add a payload of \(0.78\,\mathrm{g}\) to the robot (Table S1).
Figure 3: The different assembly steps involved in the CLARI fabrication. We start with planar laminate stacks of the SFB transmission which are folded into the correct orientation and integrated with actuators and the power-PCB to form a single-leg module. These individual leg modules are then assembled into the full robot.
Modular leg design is key to achieving body compliance and shape adaptation in CLARI and thus separating it from other robots at this scale [13, 14, 43, 10, 54]. In addition to enabling a compliant body in CLARI, we note several further advantages of the modular leg design which are discussed in Section 5.
### Transmission Design
Starting with the spherical 5-bar (SFB) linkage design inspired by the HAMR robots [43], the leg linkage components in CLARI were redesigned to allow for vertical actuator placement. This improved the overall compactness of the mechanism, facilitating assembly into an insect-scale robot while still permitting significant changes in aspect ratio for multiple body shape configurations. To enable this modification, a number of small design changes were made to the SFB in CLARI relative to HAMR by adhering to fabrication and assembly considerations. The most significant of these was the reduction in the total number of layers in the laminate from 11 to 5 by merging the transmission sublaminate and the chassis sublaminate into the same layers. This modification used half the raw materials as before and reduced the linkage mass and the number of machining cycles.
The detailed design of our single-leg transmission is shown in Figure 4 with key sections color-coded by
Figure 4: Overview of the CLARI leg mechanics featuring a spherical five bar (SFB) transmission. (a) A perspective view of the SFB with lift and swing actuators integrated. The relevant design parameters controlling the resulting leg trajectory are indicated and also summarized in Table 1. (b) Close-up view of swing and lift linkages that amplify actuator motion. (c) Side projection view of leg module, indicating the change in y and z directions. (d) Top projection view of leg module, indicating the change in x and y direction.
functionality. The mechanical ground (in blue) section of the transmission is part of the body frame and interlinks with the actuator mounting frame (i.e., power-PCB). Compared to earlier HAMR designs, these sections are doubly reinforced to minimize transmission losses. The SFB origami structure (in green) when folded up cross-links the dynamics from the two actuators (lift, in cyan, and swing, in pink, respectively) through individual crank-slider mechanisms to the leg output (in red) glued to the lowest point of the SFB. Both crank-slider mechanisms are actuated as close as possible (at a distance of \(s_{i}\) and \(l_{i}\) respectively for swing and lift, Figure 4) to the center of rotation of the SFB to minimize off-axis bending, an issue with early generations of HAMR robots [13, 56].
The swing actuator primarily controls the protraction and retraction (hereafter referred to as swing motion) of the leg, which the lift actuator directly influences both elevation-depression (hereafter referred to as lift motion) and adduction-abduction (hereafter referred to as expansion motion) degrees of freedom (DoF) of the leg. In previous designs of HAMR and other small legged robots, the adduction-abduction motion has largely been ignored. During locomotion in open terrains, this DoF minimally influences overall locomotion performance, but could potentially become critical when navigating through laterally confined terrains. Although we acknowledge the adduction-abduction motion and characterize the same in Section 3.3, we did not specifically design for this DoF in the current version of CLARI and plan to address this in future iterations. Thus, we adopt previously validated assumptions [57, 43] about SFB behavior in the quasi-static regime and model it as two separate single-input, single-output systems up until transmission resonant frequencies [56]. Using the procedure and convention in Doshi _et al._[43], we designed the leg transmission to amplify the displacement output of the lift and swing actuators (\(\delta_{al}\) and \(\delta_{as}\) respectively), while reducing the force output in each of the directions. Our primary guiding principle was to have sufficient force output to support the robot weight on two legs in the vertical direction while still achieving stride lengths comparable to a similarly sized robot, HAMR6 [58]. To simplify our design, we chose the identical amplification joint distance \(s_{i}\) and \(l_{i}\) to result in the highest transmission ratios to meet the above criteria in swing (\(T_{s}\,=\,l_{OY}/s_{i}\)) and lift (\(T_{l}\,=\,l_{OY}/l_{i}\)). The following table 1 summarizes the specifications of our chosen design parameters.
\begin{table}
\begin{tabular}{c|c} \hline
**Parameter** & **dimension** \\ \hline \hline Actuator & \\ \hline Deflection amplitude, \(\delta_{a}=\delta_{al}=\delta_{as}\) & \(\pm 360\,\mathrm{\SIUnitSymbolMicro m}\) \\ Blocked force, \(F_{a}=F_{al}=F_{as}\) & \(\pm 235\,\mathrm{mN}\) \\ \hline \hline Transmission & \\ \hline Lift input, \(l_{i}\) & \(375\,\mathrm{\SIUnitSymbolMicro m}\) \\ Swing input, \(s_{i}\) & \(375\,\mathrm{\SIUnitSymbolMicro m}\) \\ Lift transmission ratio, \(T_{l}\) & \(16\) \\ Swing transmission ratio, \(T_{s}\) & \(16\) \\ \hline \hline Leg & \\ \hline Overall Length, \(l_{O}\) & \(10.4\,\mathrm{mm}\) \\ Lift output, \(l_{OZ}\) & \(8.5\,\mathrm{mm}\) \\ Swing output, \(l_{OY}\) & \(6\,\mathrm{mm}\) \\ Lift arc length, \(\delta_{ll}\) & \(3.25\,\mathrm{mm}\) \\ Swing arc length, \(\delta_{ls}\) & \(2.95\,\mathrm{mm}\) \\ Protection - Retraction, \(\Delta X\) & \(2.85\,\mathrm{mm}\) \\ Abduction - Adduction, \(\Delta Y\) & \(2.05\,\mathrm{mm}\) \\ Elevation - Depression, \(\Delta Z\) & \(2.3\,\mathrm{mm}\) \\ Vertical blocked force, \(F_{leg}\) & \(\pm 9.6\,\mathrm{mN}\) \\ \hline \hline Body & \\ \hline Length (square)/Width (square) & \(34\,\mathrm{mm}\) \\ Length (long)/ Width (wide) & \(44\,\mathrm{mm}\) \\ Length (wide)/ Width (long) & \(21\,\mathrm{mm}\) \\ Maximum Aspect Ratio (long) & \(2.1\) \\ Minimum Aspect Ratio (wide) & \(0.48\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of CLARI design parameters
## 3 Fabrication of CLARI and Characterization
In this section, we summarize the fabrication and characterization of the different components of CLARI. We start by presenting with a general overview of each of these steps and follow it up with component-specific details.
### Fabrication Overview
To fabricate the components of CLARI including the chassis, transmission, actuators, flexure body links, etc., we adopt the planar laminate [52] kirigami manufacturing process (PC-MEMS, [51]) followed by pop-up folding assembly [53]. In this paper, we used a custom-built (6D-Laser) femtosecond laser micromachine that allowed us to process a wider range of materials with higher fidelity compared to traditional nanosecond laser systems [43, 59]. The details of the individual fabrication steps for each of the CLARI components and any modifications from previous procedures are described in the Supplementary Information.
### Actuator Fabrication and Characterization
CLARI actuators were fabricated following the streamlined manufacturing process described by Jafferis _et al._[55], which overcomes the need for in-plane alignment of heterogeneous materials [59] and thus simplified fabrication and improved process yield and overall device performance. Additional details are described in the Supplementary Information. An example actuator fabricated using this process and its dimensions are shown in Figure 5a while the material stack-up is denoted in Figure 5b. The operation of an example actuator is shown in the Supplementary Video.
The piezoelectric actuators were individually characterized for free tip displacement and tip block force performance at 1 Hz frequency and voltage steps of 125 V, 150 V, 175 V, 200 V and 225 V using the setup (see the inset in Figure 5) described in the Supplementary information. At each bias voltage, five peak-to-peak sinusoidal cycles were powered, allowing for statistical analysis within specific cycles as well as between actuators at different voltages. To predict the expected actuator performance, we used the actuator model from previous studies [60, 61] and adapted it to our custom dimensions.
The free deflection measurements of the actuators indicate consistent performance (Figure 5c). The measured free tip deflection of actuators is more than the model predictions and is consistent with the results published [60, 55]. Furthermore, at 225 V the free tip peak-to-peak deflection is 720 \(\mathrm{\SIUnitSymbolMicro m}\) which is higher than previous measurements [43] in similarly sized actuators. We suspect this is due to the minimal change in piezoelectric coefficients [60] post-laser processing, an advantage of the cold ablation feature of femtosecond lasers [62] over nanosecond lasers (which were used for fabrication previously).
Similarly, the blocked force measurements are consistent between the actuators (Figure 5d). The observed results closely match the predictions of the model at 125 V, and diverge to a performance drop of 17% with respect to the model at 225 V. This is significantly closer to the model estimates than previous measurements, which indicated a 32 % difference at peak voltages [60]. However, the peak blocked force of the actuator at 225 V is 235 mN, which is lower than that measured previously for a similar sized actuator [43]. We suspect this is possibly due to a slight movement of the actuators within the test jig providing a nonideal mechanical ground that will be improved in the future. We note that the addition of the carbon fiber reinforcement outer layers to the actuator was critical to maintaining rigidity at the base and improving force transmission. Without this structural addition, earlier versions of our actuators achieved only 65 % the best-recorded performance.
### Single Leg Module Fabrication and Characterization
The leg modules are composed of the frame, transmission, actuator side walls, power-PCB, and leg in addition to the actuators described above and fabricated following the general procedures described for
Figure 5: Design, fabrication, and performance of piezoelectric actuators. (a) Dimensions of the piezoelectric actuators in mm. (b) The stack of material layers used to create the bimorph actuator using the streamlined fabrication process [55]. (c) The plot of the peak-to-peak free tip deflection of the actuator with varying operating voltages. (d) The plot of the blocked force at the actuator tip with a fixed base at varying operating voltages.
HAMR robots [43]. Additional fabrication details are described in the Supplementary Information.
The single leg modules were individually characterized for the displacement and the blocked force of the free leg tip at 1 Hz frequency and voltage steps of 125 V, 150 V, 175 V, 200 V and 225 V in the lift and swing direction independently. At each bias voltage, five peak-to-peak sinusoidal cycles were powered, allowing for statistical analysis within specific cycles as well as between across leg modules at different voltages. The overall test setup (Figure S1) and the procedure used for these experiments is similar to the one described for actuator characterization in Section 3.2. The operation of an example transmission is shown in the Supplementary Video.
Under our test conditions, we observed that the leg modules (averaged data across modules) showed near linear relationships with voltage for both leg tip deflection shown in Figure 6a and blocked force Figure 6b in both the lift (cyan) and swing (pink) directions. The maximum measured blocked force in both swing and lift was about \(9.6mN\) at 225 V, resulting in an effective force transmission ratio of \(T_{F}=F_{ actuator}/F_{leg}\) 235/9.6 = 24.5 (which is less efficient than the modeled transmission ratio, \(T_{l}\,=\,16\)) indicating transmission efficiency of 65 %. With a length of \(l_{O}\,=\,10.4mm\), the leg generated maximum displacements of about 2.85 mm in swing and 2.3 mm in vertical lift directions at 225V. These results are lower than those reported in similarly sized HAMR [43, 63] and can be attributed to the higher transmission ratio used in CLARI. However, our experimental results are well below the prediction of the model (Figures 6 a and b), indicating the need for significant improvements in fabrication and assembly to increase transmission efficiencies in future CLARI iterations. Possible sources of transmission losses include (but are not limited to) inelastic deformation of the flexures in addition to deformation in the linkages, imperfect mechanical ground for the actuator's connections, and degradation due to fatigue [43, 63].
Next, we demonstrate the ability to generate a variety of leg trajectory shapes in the quasistatic regime by varying the relative phase between the lift and swing actuation. We depict their projections onto the robot's sagittal plane in Figure 6c. The experimentally measured trajectories were shaped as distorted ellipsoids varying from flattened near horizontal to near vertical and everything in between. We suspect that these distortions were a result of non-ideal transmission assembly and are one of the potential improvements for future generations of CLARI. For all the locomotion studies that follow in the remainder of the manuscript, we used a leg trajectory where the phase offset between lift and swing was 90\({}^{\circ}\) as the nominal trajectory shape. We plotted the 3D leg reachability space in Figure 6d and observed significant movement outside the sagittal plane in the abduction and adduction directions that could potentially enhance omnidirectional locomotion capabilities. This observation suggests that a majority of coupled transmissions like SFB result in complex motions that potentially need further attention for their influence on locomotion but are often ignored. For example, [43, 16, 64] consider only 2D projections of leg trajectory shapes.
Finally, we also characterized the transmission dynamics by performing frequency sweeps from 1 Hz to 100 Hz at 150 V and determined their Bode plot in Figure 6e. We observed that the transmission behaved like a linear second-order under-damped system in each of the three measured directions. We quantified the resonance frequency in the swing and lift (and expansion) directions as 40 Hz and 41.5 Hz respectively and therefore expect the range of ideal running frequencies for CLARI to be up to 40 Hz.
### Whole Robot Assembly
With individual leg modules characterized, four equally performing ones were selected and then connected through the side walls interlocking to horizontal ribs within each module, resulting in the complete robot (Figure 2). Additional fabrication details are described in the Supplementary Information. The overall mass distribution for CLARI is summarized in Table S1.
Figure 6: Single leg module characterization. (a) Peak-to-peak deflection of the CLARI leg tip at 1 Hz in the lift (blue) and swing (pink) directions. (b) CLARI leg tip blocked force in the lift (blue) and swing (pink) directions. Model predictions are included as dashed-dotted lines. (c) Projections of the leg tip onto the sagittal plane (xz) with varying lift and swing phase offsets reveal the diversity of foot trajectories. (d) The three-dimensional trajectory of the leg tip at varying intra-leg phase trajectories shows significant expansion motion, i.e. abduction-adduction, which is often ignored and might be potentially important, especially for laterally confined terrain locomotion. (e) The frequency response (bode plot) of leg motion to lift and swing signals reveal behavior analogous to second-order linear systems in respective output directions.
## 4 Locomotion Performance of CLARI
In this section, we determine the locomotion of CLARI to highlight the effect of body compliance and shape on performance. As noted previously (Section 2.2), the current version of CLARI is electrically tethered for receiving power and control signals from an off-board computer. We follow the same procedure to drive the robot as we did for the actuators (Section 3.2) and the single-leg modules (Section 3.3). To quantify performance, we ran the robot with multiple gaits at stride frequencies of 1, 5, and 10 Hz and measured forward speed as a function of body compliance or shape. All tests were performed at a fixed operating voltage of 200 V in an open terrain arena covered with cardstock as the running surface and filmed at high speed (240 fps, side and top view synchronized) for post-processing and kinematic analyses. Our key findings are described in the following.
**Gait Flexibility.** CLARI is mechanically highly dexterous due to the eight independently controlled DoFs (two per leg module) and therefore able to operate in a variety of biologically inspired gaits including walk, trot, pronk, bound, and pace (see [13] for definitions). To demonstrate the various gaits, we mounted CLARI on a custom stand (legs not in contact with the ground) and varied the relative phasing of the actuators between the leg modules following the procedure detailed in [13] (see Supplementary video). Although multiple gaits are feasible, we chose to focus on trot and walk for further locomotion characterization as they are the most common ones used by robots on this scale [13, 14, 58, 63]. The gait timing diagram for these two gaits is shown in Figure S2.
On the ground, with equal voltages applied to each leg actuator during a gait, we achieved near-straight line locomotion as shown in Figure 7. However, by varying the applied voltages between actuators following the procedure in Goldberg _et al._[54], we could realize turning with the same gaits although we did not extensively characterize it. An alternate approach for turning in CLARI is by varying the relative phase between legs [56], but is not demonstrated here. An example trial featuring a 90\({}^{\circ}\) turn is depicted in Figure S3 and the Supplementary Video.
**Effect of Body Compliance.** Without specifically tuning the body joints between the leg modules, our design choices resulted in a highly compliant robot body. Once this robot was placed on the ground, we observed large body oscillations modifying the robot shape, which clearly highlighted CLARI's body compliance and the potential for laterally confined locomotion through shape-morphing. However, for any of the gaits, we did not observe any notable forward locomotion indicating that the body was too compliant and did not generate effective ground reaction forces for propulsion (see the Supplementary Video). By tuning the stiffness between adjacent leg modules _ad hoc_ using polyimide strips (12.5 um thick, 16 mm length and 4 mm height; see yellow strips in Figure 9), we were able to obtain significant forward motion (see Supplementary video) as quantified in Figure 8.
Figure 7: Robot locomotion (trot, 10 Hz) at different body shapes shown as time consistent snapshots.
**Effect of Body Shape.** To isolate the effect of body shape and quantify locomotion performance independent of compliance, we chose to fix the body (by gluing the opposite leg modules using thin carbon fiber rods and thus eliminating any body compliance) in three specific shapes (\(\frac{L}{W}\)= 0.48, 1 or 2.1, i.e., the extremes in _wide_, _square_ and _long_ classes described in Section 2.1) and characterized locomotion performance in those configurations. We measured the fastest and slowest forward running speeds with CLARI in the _long_ and _wide_ body configurations and intermediate speeds in the _square_ body shape (see Supplementary Video). Figure 7 visually illustrates these findings for the three different shape configurations tested and validates our hypotheses about the influence of body shape on locomotion due to the varying leg orientation for thrust production (Section 2.1). Furthermore, we deduce that in the _square_ configuration, CLARI moved with equal ease in the forward and lateral directions, indicating the potential for omnidirectional locomotion.
**Effect of Gait and Stride Frequency.** The running speed results from the different experiments are shown in Figure 8.
In general, we found that both gaits performed similarly, with the trot performing better than the walk at a given stride frequency and body shape except at 5 Hz in _square_ configuration. We posit that the timing of ground contacts during trotting enabled CLARI to take advantage of favorable body dynamics more than during walking, contributing to faster locomotion [65]. We measured the best locomotion performance with CLARI in _long_ body configuration with the highest speed of 28 mms\({}^{-1}\) at 10 Hz, which was comparable to that observed for HAMR [63] for a similar stride frequency. With a tuned compliant body, we measured CLARI's performance to vary between that in long and square body fixed shape configurations with trot gaits increasingly faster at higher frequencies. Regardless
Figure 8: Robot locomotion speed performance as a function of body compliance and body shape at varying leg frequencies and varying gaits. Data is represented as mean \(\pm\) 1 standard deviation. We find that the CLARI generally records the best performance in the long shape configuration matching our expectations.
frequencies higher than \(10\,\mathrm{Hz}\), we observed a strong detrimental influence of destabilizing dynamics that results in poor locomotion, which was analogous to those observed in other small robots in the body dynamics regime [13].
Towards Cluttered Terrain Locomotion.Having demonstrated the effect of body compliance and shape on locomotion performance in open terrain, we present initial evidence of the robot's ability for confined locomotion by demonstrating it moving through a laterally varying gap utilizing body compliance (see Supplementary Video). Details of the experimental setup are included in the supplementary text. Figure 9 shows that CLARI was able to passively deform by over \(30\,\mathrm{\char 37}\) laterally and vary its shape from an aspect ratio of \(1.2\) to \(1.9\) and adapt to its environmental constraint to fit through the lateral gap of \(22\,\mathrm{mm}\). This demonstration is the first step towards CLARI being able to navigate effectively through various kinds of complex environments including cluttered and confined terrains.
## 5 Discussion and Future work
In conclusion, we successfully designed and fabricated CLARI, the first in a series of miniature legged robots with articulated modular bodies. Body compliance and shape change were achieved by interlinking four novel modular leg units in a closed kinematic chain. The electrically tethered CLARI was demonstrated to successfully locomote in multiple static body shape configurations lacking compliance at various gaits and stride frequencies. Similarly, the robot was also demonstrated to run effectively with an experimentally tuned compliant body at similar conditions. In addition to forward locomotion, CLARI was also able to execute turns on demand using the same gaits by varying the stride amplitude. In general, we found that the robot's running speed is comparable to other systems of this size at equivalent operating conditions [13, 63]. More importantly, our experiments showed that body compliance and shape have significant effects on robot performance supporting our hypotheses about their contributions towards embodied physical intelligence and that they need to be tuned appropriately for effective locomotion in performance-specific applications. Finally, we presented initial evidence of CLARI's ability for passive shape adaptation with a soft body of tuned compliance enabling laterally confined locomotion - a first for legged robots to the best of our knowledge (see Table 2). Thus, in this work, we have taken the first step towards autonomous cluttered terrain navigation by establishing the CLARI's ability to locomote in various shapes and with tuned compliance.
A key innovation of this work is the modular leg assembly, atypical for robots at this scale. In addition to enabling a compliant body in CLARI, we noted several additional advantages of the modular leg design. This choice allowed us to build leg modules without worrying about their actual position on the robot which sped up the iteration cycles for optimizing fabrication by limiting our scope to a single subsystem. The modular approach also simplified the overall robot design process and potentially makes it easier to scale up to multi-legged systems with a high number of legs (e.g. centipede-like robots [66, 67]). More importantly, self-contained modular legs allow for convenient repair and replacement of de
Figure 9: Robot compressing laterally while walking through a body constraining gap. The lateral walls interact only with the body and therefore do not limit leg movements.
grading individual appendages, and thus make the platform significantly easier to maintain relative to HAMR or other monolithic robots. Furthermore, each individual leg module can be thoroughly characterized before assembly and matched with similarly performing sets and, therefore, minimizes the need for "trimming" [68], an essential process for effective open-loop locomotion such as running in, straight path. Overall, we believe that such modularity allows for easier and faster novel robot platform development with different leg arrangements, leg numbers, or body shapes, without having to redevelop and characterize entirely new leg designs. However, we also acknowledge that numerous improvements are required before CLARI can fully achieve its goal of traversing complex terrains effectively, and we envision addressing them in future iterations of the robot. Some immediate next steps are discussed below.
CLARI is currently unable to move effectively without careful body-compliance tuning. This is not surprising because the role of body mechanics during legged locomotion as a morphological computation principle [47, 50] is an area of active interest to biologists [69, 70], physicists [71, 72], and roboticists [44, 73, 74] alike and remains a challenging problem yet to be fully understood [35, 49]. For example, the first couple of generations of the MIT Cheetah [75] featured a compliant backbone which was later abandoned [76] in part due to the challenges associated with tuning for effective locomotion. Future generations of CLARI will explore a number of strategies including the systematic passive [73] tuning of articulated flexures, the addition of actuated joints [77], the integration of reconfigurable [78] and auxetic [79] metamaterial body structures, and the active control of body stiffness [80] for successful locomotion in cluttered terrain. In relation to this, we observed significant slip during locomotion, highlighting the need for effective ground interactions for successful thrust generation. We hope to incorporate passive [81, 82] and active [16] adhesion mechanisms in the next generation of CLARI. Such controlled shape modulations could have significant implications for terradynamic streamlining [20] in cluttered terrains by changing the relative potential energy landscapes of surrounding complex environments [83].
We also observed that CLARI is unable to effectively utilize its entire bandwidth of leg cycling frequencies (up to the transmission resonances around \(40\,\mathrm{Hz}\)) due to unfavorable body dynamics. As the next steps, we plan to characterize this in detail [13] as a function of both body compliance and shape in order to generate ideal leg trajectories and gaits for effective locomotion [63]. Another source of unwanted dynamics is the electrical tether for delivering power and control signals. Despite the current tethers being ultralight, their small movements seemed to perturb the motion of the robot causing it to drift laterally or turn. We expect that integrating power [84] onboard the robot would not only enhance its autonomy but also make it more resistant to undesired perturbations. Similarly, the robot leg modules were not identical in their performance despite our best efforts to match them, and more importantly, the articulated joints degraded during the process of robot testing. We expect improvements to the fabrication process and potentially small design changes (such as strengthening the structure around the SFB, etc.) will improve the robot operation lifetime and enable CLARI to be deployed in real-world environments.
Ultimately, with the above improvements incorporated, we hope that future generations of CLARI will
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline
**Robot** & **Length** & **Aspect** & **Shape** & **Complex** \\ & **(cm)** & **Ratio** & **Change** & **Terrain** \\ \hline Microrobot [42] & 0.4 & 1 & No & Lab \\ HAMR-Jr [14] & 2 & 1.25 & No & Lab \\ _CLARI_ & _3.4_ & _0.48-2.1_ & _Yes_ & _Lateral_ \\ HAMR [57] & 4.4 & 1.25 & No & Lab \\ DASH [10] & 13 & 2 & No & Lab \\ RHex [2] & 50 & 2.2 & No\({}^{*}\) & Open\({}^{\dagger}\) \\ Cheetah [44] & 60 & 1.7 & No\({}^{*}\) & Lab \\ ANYmal [45] & 80 & 1.35 & No & Open\({}^{\dagger}\) \\ \hline \end{tabular}
* Robot versions with spine/ backbone morphology have been explored
* Natural terrain with less than body height and unconstraining obstacles
\end{table}
Table 2: Comparing the shape morphing and complex terrain locomotion abilities of some of the successful legged platforms across sizes from a few millimeters to meters
be able to autonomously locomote through complex and cluttered natural environments and begin to deliver on the promise of significant socio-economic impact utilizing miniature robots.
### Acknowledgements
We thank all members of the Animal Inspired Movement and Robotics Lab (AIM-RL) at the University of Colorado Boulder for invaluable support and discussions. We also thank Stephen Uhlhorn and 6DLaser for femtosecond laser micromachine development and fabrication assistance.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors(s) and do not necessarily reflect the views of any funding agency. This work is partially funded through grants from the Paul M. Rady Mechanical Engineering Department, the US Army research office (ARO) Grant # W911NF-23-1-0039 and the Meta Foundation (K.J.).
### Conflict of Interest Statement
The authors have no competing or conflicts of interest.
|
2310.15135 | Quantifying the Dialect Gap and its Correlates Across Languages | Historically, researchers and consumers have noticed a decrease in quality
when applying NLP tools to minority variants of languages (i.e. Puerto Rican
Spanish or Swiss German), but studies exploring this have been limited to a
select few languages. Additionally, past studies have mainly been conducted in
a monolingual context, so cross-linguistic trends have not been identified and
tied to external factors. In this work, we conduct a comprehensive evaluation
of the most influential, state-of-the-art large language models (LLMs) across
two high-use applications, machine translation and automatic speech
recognition, to assess their functionality on the regional dialects of several
high- and low-resource languages. Additionally, we analyze how the regional
dialect gap is correlated with economic, social, and linguistic factors. The
impact of training data, including related factors like dataset size and its
construction procedure, is shown to be significant but not consistent across
models or languages, meaning a one-size-fits-all approach cannot be taken in
solving the dialect gap. This work will lay the foundation for furthering the
field of dialectal NLP by laying out evident disparities and identifying
possible pathways for addressing them through mindful data collection. | Anjali Kantharuban, Ivan Vulić, Anna Korhonen | 2023-10-23T17:42:01Z | http://arxiv.org/abs/2310.15135v1 | # Quantifying the Dialect Gap and its Correlates Across Languages
###### Abstract
Historically, researchers and consumers have noticed a decrease in quality when applying NLP tools to minority variants of languages (i.e. Puerto Rican Spanish or Swiss German), but studies exploring this have been limited to a select few languages. Additionally, past studies have mainly been conducted in a monolingual context, so cross-linguistic trends have not been identified and tied to external factors. In this work, we conduct a comprehensive evaluation of the most influential, state-of-the-art large language models (LLMs) across two high-use applications, machine translation and automatic speech recognition, to assess their functionality on the regional dialects of several high- and low-resource languages. Additionally, we analyze how the _regional dialect gap_ is correlated with economic, social, and linguistic factors. The impact of training data, including related factors like dataset size and its construction procedure, is shown to be significant but not consistent across models or languages, meaning a one-size-fits-all approach cannot be taken in solving the dialect gap. This work will lay the foundation for furthering the field of dialect NLP by laying out evident disparities and identifying possible pathways for addressing them through mindful data collection.
## 1 Introduction
Across the globe, humans speak over seven thousand unique languages Eberhard et al. (2022). Many of these languages contain a plethora of internal variation due to the environmental, cultural, and socioeconomic diversity inherent to large populations Honkola et al. (2018). These dialects are categorised into two groups: standard and non-standard Trudgill (2004). Standard dialects are, by definition, supported by governmental and educational institutions resulting in more opportunities of all kinds for their speakers. On the other hand, speakers of non-standard and minority dialects find themselves at a disadvantage compared to their counterparts Trudgill (1979). These effects are compounding; those who speak minority dialects are provided fewer opportunities to advance socially and economically, resulting in a self-fulfilling cycle of oppression. Many people who speak a minority dialect as a first language find themselves modifying their use of language throughout their life to appear to belong to the group of standard dialect speakers, much to the detriment of the maintenance of dialectal diversity Carlson and McHenry (2006). In losing these dialects, we lose not only the form of expression itself but aspects of the unique culture and society it belongs to Fishman (2007).
NLP has been moving in recent years to provide more methods of communication, both between people and with digital systems. In doing so, it has been bridging information- and access-based gaps for people in many historically marginalized communities Bouillon et al. (2021); Mariani et al. (2022); Zhang et al. (2022). However, it is important to acknowledge that variation within languages is rarely addressed in mainstream tools. Modern systems that do provide access to variants still focus on wealthy, standard dialects, such as British, Australian and American English, while disregarding commonly spoken minority dialects like African American English. Speakers of under-resourced dialects, variants of both high- and low-resource languages with little available training data, face language barriers when using many of the tools taken for granted by speakers of well-resourced dialects. This reduced accessibility further entrenches existing disparities by continuing the historical trend of disenfranchising speakers of minority dialects Trudgill (1979).
In this paper, we examine the performance of large language models (LLMs) from two crucial multilingual tasks, machine translation and automatic speech recognition, across a diverse set of dialects and analyze the linguistic, socioeconomic, and computational factors that may contribute to
the dialect gap. This study determines that the largest indicator for better performance for under-resourced dialects is linguistic proximity to well-resourced dialects, regardless of the size or wealth of the dialects' speaker base. This connection is predicted to be due to the lack of dialectal data included in training large language models, leading to dialects performing better or worse on the basis of incidental similarity to the dialect used in training. Unfortunately, the size of the performance gap and the amount/makeup of data required to overcome it is not predictable from external information about the language since it varies across task, model, and environment. As a result, further analysis will need to be done by researchers for individual systems to examine how the dialect gap can be closed for their work through a unique combination of higher-quality, larger, and more balanced datasets.
## 2 Dialect Diversity in Research
**Studies in Linguistic Diversity** A significant problem in the study of linguistic diversity across NLP is the lack of attention paid to language variation. In the past few years, increased awareness has been drawn within the NLP community to the disparities present in modern research. In particular, researchers have begun to notice the relative lack of papers that address languages spoken outside of Europe and East Asia, even in subfields like multilingual NLP Blasi et al. (2022); Joshi et al. (2020); Ruder et al. (2022); Sogaard (2022).
While these works offer insight into the disadvantages faced by speakers of under-resourced languages, they still are discussed under the assumption that if languages were appropriately attended to, all their speakers would gain equal access to NLP tools. Similarly, they present their comparisons as if all speakers of well-resourced languages, especially English, have superior access to tools. Unfortunately, this is not necessarily the case. Two-thirds of English's one-and-a-half billion speakers are second-language (L2) speakers Eberhard et al. (2022). Many L2 speakers struggle with NLP systems due to their accent or their use of code-switched and mixed language. Even many first-language (L1) speakers, such as speakers of African American or Scottish English, do not see their native dialect supported by speech, dialogue, or translation systems and are forced to mask their natural speech patterns, which is harmful to their mental health and sense of identity Johnson et al. (2022); Santiago et al. (2021). As such, existing evaluations of linguistic diversity in NLP are fundamentally incomplete.
**Dialectal Models** The advent of large language models has made it possible to train models that perform well on even low-resource languages Aharoni et al. (2019); Conneau et al. (2020). The term LLM is not strictly defined, but in this study, we use it to refer to multilingual Transformer-based systems pretrained on large amounts of scraped internet data and finetuned for specific tasks. In these systems, under-resourced languages have their training supplemented by this unannotated, scraped data and cross-lingual transfer Dabre et al. (2020). The performance gain seen by low-resource languages when using LLMs does not extend to under-resourced variants of languages.
Some LLMs provide allocational support for dialects by treating them as separate languages but their performance is not necessarily comparable to that of the standard form. As an example, Arabic speakers often write in their native dialects when communicating casually online, a phenomenon noted by both the linguistic and NLP research communities Alshutayri (2017); Abdul-Mageed et al. (2018). Still, attempts by social media to translate Arabic posts are far less successful than their attempts on French and English, despite many consumer translation systems offering support for major regional dialects of Arabic Harrat et al. (2019). For dialects outside of those explicitly included in systems, this problem is only exacerbated by a lack of allocational support.
**The Data Problem** The same marginalised languages that face lower performance at the hands of LLMs also face a larger data problem across dialects. Most of the task-annotated data available online for low-resource languages comes from religious texts, government documents, or multinational newspapers Agic and Vulic (2019); Skadins et al. (2014); Chen et al. (2020). These sources often use a formal manner and avoid dialectal markers, especially when their target population is mostly diglossic and has already had to learn a more standard dialect for survival in the modern world Alshutayri (2017); Abdul-Mageed et al. (2018). As a result, the LLMs trained on this data are not built to function on minority dialects and have unclear performance capabilities. Before this problem can be solved, questions must be answered about the amount, quality, and type of data needed to over
come the data problem. The survey done in this paper across languages provides insight into how well dialects perform 'as is' and identifies that linguistic and socioeconomic knowledge should be leveraged to inform future decisions on data collection and usage.
## 3 Tasks
The two tasks evaluated in this paper are machine translation (MT) and automatic speech recognition (ASR). These tasks are some of the few with sufficient data for evaluation of dialects and both have a focus on increasing access to people, tools, and information by removing linguistic barriers Jin et al. (2021). They are also safe tasks to use as a starting point because they do not deal with personal information or abusive language. The list of models, languages, and metrics used in the evaluation of each task can be found in Table 1. More information about the datasets and languages used can be found in Appendix A. In total, there are six model versions evaluated for each task and 30 dialects across 7 languages compared for MT and 33 dialects across 7 languages compared for automatic speech recognition. Other than Tamil and Telugu for ASR, each language is taken from a different language family in order to extract information that is independent of specific linguistic features.
### Machine Translation
Machine translation is already used in domains such as medicine, law, and information as a method of increasing access to systems Buttner et al. (2022); Vieira et al. (2021). A leader in the field of multilingual MT is Meta's No Language Left Behind (NLLB), a model that claims "safe, high-quality results" for two hundred languages Costajussa et al. (2022). The specific version of the model evaluated in this study is the distilled 600M parameter variant1.
Footnote 1: huggingface.co/facebook/nllb-200-distilled-600M
Another popular MT model is Google's Neural Machine Translation (NMT), which is available for use through Google Cloud API Johnson et al. (2019). NMT is a widespread consumer tool, to the point that Google has had to parse out bitext generated using it when scraping internet data for training Ni et al. (2022).
We also evaluate the University of Helsinki's OpusMT, a model based on MarianMT and trained on Wikimedia monolingual and multilingual text Tiedemann and Thottingal (2020). This model is an interesting comparison to NLLB and NMT because it is not an LLM and represents a different approach - covering more languages at the cost of performance across the board. This model was constructed in an academic setting with a more transparent set of training data and significantly fewer parameters. All evaluations are conducted with English as either the target or source language due to data constraints.
Evaluation metrics are a biased measure of output quality and fluency but are required to empirically showcase the dialect gap. To reduce some of the negatives associated with each metric, we report two types of metrics that measure different aspects of the output. The first metric is a BLEU score, which is a classic n-gram evaluation technique for translation Papineni et al. (2002). Secondly, a representation-backed metric is used to determine the semantic similarity between two sentences since MT is a task with multiple possible solutions. Most semantic similarity metrics are based on transformer embedding models, so we use a multilingual variant of SentenceBERT2Reimers and Gurevych (2019). Full results for both metrics are reported in Appendix C.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Task** & **Models** & **Languages** & **Metrics** \\ \hline \multirow{4}{*}{\begin{tabular}{l} **Machine Translation** \\ **(MT)** \\ \end{tabular} } & Google NMT Johnson et al. (2019) & Arabic (16), Finnish (2), & \multirow{2}{*}{\begin{tabular}{l} \\ **Meta NLLB** (Costa-jussa et al., 2022) \\ **Helsinki OpusMT** \\ \end{tabular} } & \begin{tabular}{l} **Arabic (16), Finnish (2), \\ Mandarin (2), German (3), \\ Malay (3), Portuguese (2), \\ Swahili (2) \\ \end{tabular} & \begin{tabular}{l} BLEU, SentenceBERT \\ Similarity \\ \end{tabular} \\ \hline \multirow{2}{*}{\begin{tabular}{l} **Automatic Speech** \\ **Recognition (ASR)** \\ \end{tabular} } & Google USM Zhang et al. (2023) & Arabic (8), Spanish (8), Bengali & \multirow{2}{*}{
\begin{tabular}{l} \\ (3), Georgian (2), Tamil (5), \\ Meta XLS-R (Conneau et al., 2020) \\ \end{tabular} } & \multirow{2}{*}{WER, CER} \\ \cline{1-1} & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Tasks addressed in this study along with models, languages (with the number of dialects), and metrics.
### Automatic Speech Recognition
Automatic speech recognition (ASR) is a task that is important in bringing access to those who are unable or disinclined to communicate through text (Ahn and Lee, 2016; Doumbouya et al., 2021). As of late, representation learning and LLMs for end-to-end ASR have been becoming more common. Many models are trained on unsupervised audio data and then finetuned for specific tasks. This is the case for Meta's XLS-R, a model that is trained on thousands of hours of speech data across languages (Conneau et al., 2020). We evaluate both on a multilingual variant3 and a monolingual variant of the 300M parameter base model4 finetuned on a single language at a time using the Common Voice dataset (Ardila et al., 2020).
Footnote 3: [https://huggingface.co/voidful/wav2vec2-xlsr-multilingual-56](https://huggingface.co/voidful/wav2vec2-xlsr-multilingual-56)
Footnote 4: [https://huggingface.co/facebook/wav2vec2-xlsr-r-300m](https://huggingface.co/facebook/wav2vec2-xlsr-r-300m)
Another model examined is OpenAI's Whisper, which is trained on a combination of existing ASR datasets and automatically generated transcripts scraped from the internet (Radford et al., 2022). The version of the model tested here is the medium variant5. Like XLS-R, the Common Voice dataset was used to finetune this model by language for an additional evaluation (Ardila et al., 2020).
Footnote 5: [https://huggingface.co/openai/whisper-medium](https://huggingface.co/openai/whisper-medium)
Lastly, Google has released two ASR models: the monolingual Speech-To-Text (STT) and their newer multilingual Universal Speech Model (USM) (Chiu et al., 2018; Zhang et al., 2023). These models were both evaluated through Google Cloud API because neither has been released for open-source use. STT in particular functions as a good comparison to the LLMs evaluated here because it is an older, monolingual model. Overall, six models will be compared - three "monolingual" models (including those finetuned for a specific language) and three multilingual models.
While there has been discussion on whether word error rate (WER) and character error rate (CER) adequately predict performance, no better system has been used by the community at large (Favre et al., 2013). There have been other options, but these are primarily for downstream end-to-end tasks, such as speech translation, natural language understanding, and information retrieval (Kim et al., 2021; Roy, 2021). For this work, we will stick with the community standard and use WER, with CER scores reported in Appendix C.
## 4 Linguistic Analysis of Dialects
There are many ways to identify and quantify the similarity between two variants of a language. Many have been explored in NLP for cross-lingual transfer using features from syntax, lexicon, and morphology (Philippy et al., 2023; Eronen et al., 2023; Lin et al., 2019; Ponti et al., 2019). There have also been studies on dialects in computational linguistics, examining whether dialects are consistent across corpora and registers (Dunn, 2021). A similar method is used in this paper to examine lexical similarity, using Spearman's Rank Correlation Coefficient. This has been used previously to calculate corpus similarity and homogeneity (Kilgarriff and Rose, 1998). In Appendix Figure 2(a), the similarity between each dialect and the best-performing variant of that language is shown, as well as the lexical similarities between scripted and conversational samples from each dialect of the Babel dataset.
Additionally, we examine the phonetic similarity of selected ASR datasets, specifically for Arabic and Spanish. Here, random samples were manually annotated for vowel positioning through formant analysis and plotted in the Appendix; see Figure 2(b). Then, the average Euclidean distance across vowels between each dialect and the standard form was taken to serve as a measure of phonetic similarity. More details on the exact methodology can be found in Appendix B.
## 5 Dialect-Wise Performance Gaps
Examining the performance across dialects in Figure 1, some trends appear immediately. As mentioned in Appendix A, the dialects evaluated were largely dictated by data availability. As a result, Arabic and Spanish are heavily represented while other lower resource (dialect-wise) languages see coverage of only two to three dialects. This is something that may be reflected as well in the training data for pre-trained models, resulting in Arabic and Spanish both having relatively more even dialectal performance than the other surveyed languages.
For MT, there are steeper performance gaps when translating into the dialect. This makes sense if input robustness is taken into account; in other words, models may be able to handle some level of dialect variation in their input but cannot know to output the non-dominant dialect. Additionally, models that perform better on the standard dialect show steeper drop offs in performance, something
very clearly exemplified across the Finnish dialects. This demonstrates the interesting point that higher performing models - which have access to more parameters and data during training - have greater inequalities in their coverage.
The same trends are not apparent for ASR, where the worst performing model (OpenAI's Whisper) has the highest amount of variance across dialects. Interestingly, all three multilingual models seem to prefer the spoken dialect (cnv), likely due to the fact that they are mostly trained on unsupervised internet data from websites like Youtube. On the other hand, the finetuned models prefer the written dialect (spt), which is understandable since most are finetuned using CommonVoice, a heavily scripted data source.
## 6 Correlations with Proximity to Power
Even within the same task and model, different dialects have different performance disparities, as seen in Figure 1. In order to examine this phenomenon in an equivalent environment, we compare performance across MT using BLEU %, which is the percentage of the best-performing dialect's BLEU score achieved by the minority dialect. Likewise, for ASR, the relative percentage loss of performance for each dialect compared to the standard dialect is used. Note that this means that in Figure 2, a _positive_ MT correlation and a _negative_ ASR correlation both mean there is positive correlation between the metric and performance.
In choosing metrics for comparison, we aimed to cover the range of economic, social, and linguistic factors that capture the idea of proximity to power. As proxies for wealth, we examine gross domestic product (GDP) for cumulative economic power and GDP per capita for individual economic power (The World Bank, 2023). Socially, we are interested in both population size and how well
Figure 1: The performance of various dialects across Machine Translation and Automatic Speech Recognition. Within each language the same dataset is used. Because Bengali, Georgian, and Tamil are heavily diglossic, the standard written form (spt) and the best performing spoken form (cnv) are compared rather than regional dialects.
served the population is in education, healthcare, and standard of living, estimated via the Human Development Index (HDI) (US Census Bureau, 2017; United Nations Development Programme, 2021). Lastly, for linguistic factors, we utilize the lexical similarity and phonetic similarity extracted from evaluation data and normalized to a scale from \(-1\) (lowest similarity) to \(1\) (highest similarity). Unfortunately, some economic and social metrics are only reported at the national level, so there is no data for minority dialect groups within countries. As a result, certain dialects (e.g. Kven Finnish, Vernacular Malay) are not included.
In the past, population factors have been shown to loosely correlate with factors such as performance and appearance in NLP research (Blasi et al., 2022). Here, in Figure 2, we see that these correlations do not necessarily hold for dialects. In fact, these results are contradictory to common expectations and narratives, which assume that wealthier, larger, and more educated populations are better served across the board.
**Gross Domestic Product** GDP represents the overall wealth of a speaker population and their economic power in the world. As such, we would expect groups with high cumulative wealth to be well-served by technology. While GDP has a small impact, it varies heavily by model and can't be used as a consistent predictor of performance. Certain models show a relatively consistent positive correlation, such as OpusMT and USM/STT, but others show no correlation at all. Others showcase a correlation only in one set of models, such as NLLB which is uncorrelated when translating into English but positively correlated when translating into the dialect. On average, worse-performing models and environments show a stronger correlation, with translation into the dialect being much more correlated than translation into English.
**Gross Domestic Product Per Capita** GDP per capita is an important metric as a proxy for estimating the wealth of individuals in a population and we would expect those with access to wealth to be well-served even if their population is smaller. Surprisingly, it seems to have no impact at all on MT across models, so wealthier minority populations are not better served than poorer ones despite having access to increased resources. In ASR, the result is even more unexpected with wealth correlating negatively with performance.
**Population Size** Population size intuitively would correlate with better performance, but previous studies on language diversity in NLP have shown that even languages with extremely high populations are not well-served if they are impacted by other factors like geographic distance from research institutions and low wealth (Blasi et al., 2022). Here, population size has little impact on MT performance, to the point that certain models show a negative correlation between the two. On the other hand, in ASR there is a strong positive correlation across all models except for the finetuned version of Whisper. This is an unexpected result because Whisper originally showcases a matching positive correlation and is finetuned on the same Common Voice datasets as XLS-R but demonstrates a complete trend reversal. This difference between MT and ASR may be a result of the type of data used for training each and the sources it came from, but further analysis is needed to confirm this.
**Human Development Index** HDI is a measure of how well a population is served in other access-based metrics, such as education, healthcare, and standard of living. It would logically follow that a high HDI would then correlate with better performance, but this does not hold for MT. Instead, MT performance shows no correlation at all with HDI.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Metric**} & \multicolumn{2}{c}{**Machine Translation**} & \multicolumn{2}{c}{**Speech Recognition**} \\ & EN \(\rightarrow\) di & di \(\rightarrow\) EN & Multilingual & Finetuned \\ \hline
**Gross Domestic Product** & 0.11 & 0.16 & -0.25 & -0.13 \\
**Gross Domestic Product per Capita** & 0.16 & 0.31* & 0.44* & 0.16 \\
**Population Size** & 0.04 & -0.13 & -0.30 & -0.00 \\
**Human Development Index** & -0.06 & -0.03 & 0.23 & 0.09 \\
**Lexical Similarity** & 0.48* & 0.69* & -0.70* & -0.57* \\
**Phonetic Similarity** & - & - & -0.49* & -0.63* \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pearson correlation coefficients for each language metric. For MT, correlation is calculated against percentage drop in BLEU performance while for ASR, correlation is calculated against percentage increase in WER. As such, the correlations are reversed for WER. Correlations with \(p<0.05\) are marked.
Surprisingly, HDI correlates negatively with ASR performance, so better-educated and healthy minority dialect speakers have a harder time accessing ASR systems despite being otherwise well-served economically and socially.
**Lexical Similarity** Lexical similarity, on the other hand, is very correlated with performance for both MT and ASR. Since dialect data is not used for training regardless of population features, performance is likely mostly based on linguistic proximity to the standard form. This result is also more robust than the other correlations mentioned here because every dialect of every language evaluated was included since the similarity score was not dependent on external data availability. Again, we also see in MT that the worse-performing directionality (EN \(\rightarrow\) dialect) has a stronger correlation. This is expected in context since these models do not provide allocational support to these dialects, so they are translating into the standard dialect regardless of user intent but they may be robust to some amount of the lexical variation in the input.
**Phonetic Similarity** The importance of linguistic similarity extends to phonetic similarity for ASR, which is strongly positively correlated with performance. Again we see that finetuning on the smaller, scripted Common Voice datasets makes the correlation stronger for XLS-R and Whisper, which suggests that models overfit to the dialects present in training data. It is important to remember that phonetics is a broad area of study in linguistics that encompasses many measures of acoustic similarity, so other forms of analysis may capture even higher impact forms of variation between dialects. However, these results clearly already show that phonetic similarity plays a large part in determining the performance of dialects.
The results surrounding similarity suggest that the most useful method of addressing the dialect gap may lie in focusing on how to reduce the linguistic distance between the language used at evaluation versus training. In other words, this can be compared to a domain shift problem rather than a multilingual problem. A way to begin is by increasing the dialect diversity of the training data to cover a larger variety of language patterns.
## 7 The Impact of Datasets
### Machine Translation & Dataset Size
For many languages, lower performance in MT is seen in parallel with a smaller dialect gap. As an example, the Mandarin dialects perform comparably on NLLB and OpusMT but the disparity becomes statistically significant under NMT, a model where Mandarin as a whole performs better. This trend suggests that the benefits of larger models and more training data are not equally felt by all dialects due to disparities in the training pipeline -- more training data does not solve the dialect gap, _it makes it worse_. The question can then be raised: would training on more specifically dialectal data be sufficient to overcome these disparities?
To answer this question, two languages with enough dialectal data were chosen to finetune NLLB and OpusMT. Each model was trained with thirty different dataset sizes, on three different data subsets per size and three seeds per data subset to ensure that the results were statistically significant.
Figure 2: Impact of training dataset modifications on performance.
In Figure 1(a), the training curves for each model and translation direction can be seen.
As more data is added, the languages begin to perform better, but not all at the same rate. For example, Vernacular Malay sees relatively little improvement as data is added to train NLLB, but OpusMT's initial training curve is steep. Therefore the same amount of data causes two different outcomes depending on the model architecture and the data it was previously trained on. In some cases, the improvements are marginal, while in others, even a small amount of data is enough to completely overcome the dialect gap. Likewise, the same inconsistencies can be seen between the two directionalities of the same model. Despite both versions of OpusMT being trained on the same Low German data, translation into English sees benefits while translation into Low German remains poor. This makes it clear that the amount of data that dissipates the dialect gap in one situation may not be enough for another model or language.
### Speech Recognition & Dataset Makeup
Besides finetuning on more dialect data, another possible method of addressing the dialect gap is modifying the makeup of training or finetuning data. Across the board, the data used to finetune LLMs for speech recognition is heavily influential on performance. This difference can be seen when comparing the performance of these ASR systems on conversational and scripted samples from the IARPA Babel dataset (Bengali, Georgian, Tagalog, Tamil, & Telugu). The models evaluated here are largely trained on unsupervised speech data from the internet, which mostly comes from unscripted conversational recordings. As a result, the multilingual models perform slightly better on conversational speech. To test the impact of data makeup, XLS-R and Whisper were finetuned for three languages (Bengali, Georgian, & Tamil) on Common Voice, an entirely scripted dataset (Ardila et al., 2020). These languages are all spoken by a diglos-sic population that uses both a regional dialect and a more linguistically conservative standard written form. As a result, the lexical distance between conversational and scripted samples is farther than might otherwise be expected. In Figure 1(b), finetuning on scripted data almost exclusively benefits performance for scripted samples over conversational samples. In some cases, such as with Whisper, this comes at the detriment of performance on conversational samples. This ties back into the impact of lexical variation discussed in Section 6 since both scripted and conversational samples were collected by speakers of the same dialect with similar accents. The low lexical similarity between these dialects amplifies the fact that ensuring the training dataset accurately and fully represents the lexical variations across a language and its dialects is an important step in creating systems that perform well across dialects, domains, and registers.
## 8 Implications of the Dialect Gap
The existence of a dialect gap means that not all speakers are inherently well-served by a tool just because their language is supported. Past analyses examined inequities from the perspectives of multilingualism and therefore likely overestimated the number of speakers benefiting from the current system. As the field moves forward, it is important to step back and remember that languages are not static or monolithic.
Additionally, as we saw, the dialect gap is not identical in severity or structure across every system. This implies that researchers cannot take a one-size-fits-all approach towards solving the dialect gap. This issue needs addressing in different ways depending on the task and the existing state of the gap. A large component of dialect gaps is based on datasets -- both dataset size and dataset makeup. As the NLP community moves towards furthering research for medium- and low-resource languages, discussions must be had on both collecting sufficient amounts of dialect data and capturing the natural variations of every language by ensuring that data is collected from diverse populations. Appreciating and accounting for variation not only makes our systems more robust but supports groups that face marginalization in other ways.
## 9 Conclusion
This work examined an important subspace in NLP by evaluating the (regional) dialect gap present in tasks with the highest likelihood of impacting speakers directly. Still, there are countless LLMs which have been rapidly gaining popularity in the past few years with the release of open-ended dialogue and image models. Most tasks outside of MT and ASR do not have the data necessary to analyze the impact of language variation but as more data is collected and annotated, this may change. As a direct continuation of the line of inquiry started
in this work, multi-dialectal analyses of the dialect gap across a wider variety of tasks should be next.
For MT and ASR, the next steps are two-fold. Firstly, the datasets used for evaluation and fine-tuning in this work were primarily determined by availability, but using a broader and higher-quality set of samples may lead to the rise of other interesting trends. Additionally, to address the dialect gap identified here, there is a clear path forward that involves collecting more dialect data and ensuring it is representative of the languages and dialects it aims to serve. This should be done in conjunction with speakers of the language, linguists, and members of the NLP community to maximise utility while minimising the burden or harm on the speaker population. Lastly, this analysis is hardly complete. As new LLMs come out, it is on the developers of these tools and the researchers behind them to continuously produce evaluations around language diversity to ensure that the benefits these LLMs bring do not come at the cost of access for minority dialect speakers.
### Limitations
**Dataset Size & Quality Factors** Dataset size is a very significant factor when evaluating models and drawing language-wide conclusions. While the languages seen in this work had enough data for evaluation, very few provided enough data for finetuning LLMs and none provided enough to train a model from scratch. As a result, models were largely evaluated out of the box, which serves to identify performance gaps as they may appear in non-academic use cases but does not fully address solutions to this problem.
Likewise, dataset quality makes a massive impact on the result of training and evaluation. Because the number of available datasets was already quite low, crowd-sourced datasets such as Tatoeba were used without additional filtering, which may result in increased noise due to improper annotations. For some datasets, such as the IARPA Babel speech dataset, there was filtering done but spontaneous speech data in general is often paired with background noise and distortion, causing a further drop in performance.
Some languages have several datasets available, but because these datasets were not all collected with the same methodology (and therefore similar errors and distortions), they were not directly comparable so only one dataset was used or the language was not evaluated. Spanish speech, for example, has been recorded in the OpenSLR, CALLHOME, and Fisher datasets but CALLHOME was chosen alone to be used. On the other hand, a multitude of English accent and dialect datasets are available for speech, but because each was collected independently, they again could not be directly compared and were therefore omitted. Lastly, some languages supported by models (Telugu and Tagalog) were not present in the Common Voice finetuning dataset used for the ASR experiments and were therefore omitted from a large part of the discussion surrounding dataset makeup.
**Computational Restraints** Many of the models evaluated are large industry models, with hundreds of millions if not billions of parameters. Naturally, as an academic institution, we were limited in the computational power made available to train these models; certain models were so large that even with a batch size of one they are incapable of running on the machines we have available. If we had greater computational power available, we would have run our evaluations on the largest version of each model to provide a picture of the most state-of-the-art performance for each task and fine-tune these larger models longer. On the other hand, many minority dialect speakers do not have the economic resources to train or finetune super-massive models, so the evaluation of more accessible models is an appropriate reflection of what is available to these speakers. In the future, with access to greater resources, the evaluation of more systems and larger models, along with the evaluation on other user-facing tasks (Ruder et al., 2023), again through the optics of regional dialects, could be a valuable extension of this work.
### Ethics Statement
Dialectal NLP research is a burgeoning field without many precedents set for ethical research, but direction can be taken from the field of multilingual NLP for how to work with the languages of minoritised groups ethically. In this paper, the issue of ethics was largely sidestepped through the use of anonymised, public, and voluntarily collected datasets and the evaluation of tasks with a low likelihood of causing harm. Additionally, despite the importance of idiolects and moving beyond regional dialects, we purposefully did not work with dialects connected to identity features that may put people at risk, such as sexuality, gender, and reli
gion. Even as this paper supports the collection of larger and more representative datasets, these arguments do not apply in cases where it would be against the wishes or best interests of the groups involved.
## Acknowledgements
This work has been in part supported by the UK Research and Innovation (UKRI) Frontier Research Grant EP/Y031350/1 (the UK government's funding guarantee for ERC Advanced Grants) awarded to Anna Korhonen at the University of Cambridge. The work of Ivan Vulic has been supported by a personal Royal Society University Research Fellowship '_Inclusive and Sustainable Language Technology for a Truly Multilingual World_' (no 221137; 2022-). The work of Anjali Kantharuban has been supported by a Gates Cambridge Scholarship.
|
2301.04779 | Direction-sensitive dark matter search with three-dimensional
vector-type tracking in NEWAGE | NEWAGE is a direction-sensitive dark matter search experiment with a
three-dimensional tracking detector based on a gaseous micro time projection
chamber. A direction-sensitive dark matter search was carried out at Kamioka
Observatory with a total live time of 318.0 days resulting in an exposure of
3.18 kg$\cdot$days. A new gamma-ray rejection and a head-tail determination
analysis were implemented for this work. No significant non-isotropic signal
from the directional analysis was found and a 90% confidence level upper limit
on spin-dependent WIMP-proton cross section of 25.7 pb for WIMP mass of 150
GeV/c2 was derived. This upper limit is the most stringent in the
direction-sensitive dark matter searches. | Takuya Shimada, Satoshi Higashino, Tomonori Ikeda, Kiseki Nakamura, Ryota Yakabe, Takashi Hashimoto, Hirohisa Ishiura, Takuma Nakamura, Miki Nakazawa, Ryo Kubota, Ayaka Nakayama, Hiroshi Ito, Koichi Ichimura, Ko Abe, Kazuyoshi Kobayashi, Toru Tanimori, Hidetoshi Kubo, Atsushi Takada, Hiroyuki Sekiya, Atsushi Takeda, Kentaro Miuchi | 2023-01-12T01:18:21Z | http://arxiv.org/abs/2301.04779v4 | # Direction-sensitive dark matter search with three-dimensional vector-type tracking in NEWAGE
###### Abstract
NEWAGE is a direction-sensitive dark matter search experiment with a three-dimensional tracking detector based on a gaseous micro time projection chamber. A direction-sensitive dark matter search was carried out at Kamioka Observatory with a total live time of 318.0 days resulting in an exposure of 3.18 kg-days. A new gamma-ray rejection and a head-tail determination analysis were implemented for this work. No significant non-isotropic signal from the directional analysis was found and a 90% confidence level upper limit on spin-dependent WIMP-proton cross section of 25.7 pb for WIMP mass of 150 GeV/\(c^{2}\) was derived. This upper limit is the most stringent in the direction-sensitive dark matter searches.
Dark matter, WIMP, \(\mu\)TPC, NEWAGE
## 1 Introduction
Existence of the dark matter in the universe is nowadays widely believed because the dark matter naturally explains observational results in various scales of the universe. Weakly Interactive Massive Particles (WIMPs), which are promising candidates of the dark matter,
have been searched for by a number of direct search experiments pursuing for the nuclear recoil by WIMPs [1]. However, no conclusive evidence of the direct detection of WIMPs was obtained yet.
There are two possible characteristic signatures for the direct detection of the dark matter. One is the annual modulation in the energy spectrum caused by the Earth's motion around the Sun. The modulation amplitude is expected to be a few percent [2]. The other is the directional non-isotropy of the nuclei recoils. Since the Solar System is orbiting in the Milkyway Galaxy, the incoming direction of the dark matter is biased to the direction of the Solar System's motion. The directional distribution of the nuclear recoil also has an asymmetry and this asymmetric ratio can be as large as tenfold in some cases [3]. Thus, the observation of the non-isotropic signal for the nuclear recoil direction distribution is expected to be a strong evidence for the dark matter detection.
NEWAGE (NEw generation WIMP search with an Advanced Gaseous tracker Experiment) is a direction-sensitive direct WIMP search experiment using a low-pressure gaseous micro Time Projection Chamber (\(\mu\)-TPC) for the detection of three-dimensional (3D) tracks of recoil nuclei. NEWAGE started direction-sensitive direct WIMP searches in an underground laboratory in 2007 and has updated the results since then. In 2020, head-tail determinations of the nuclear tracks were implemented and a limit by a vector-like tracking analysis was obtained (NEWAGE2020 results [4]). In 2021, the limit was updated by installing a low alpha ray emission rate detector called LA\(\mu\)-PIC [5, 6]. Here the limit was obtained without the vector-like analysis (NEWAGE2021 results) because of the limited statistics. In this paper, we report a result of a direction-sensitive dark matter search with a new gamma-ray rejection cut and a vector analysis for 3D-tracks (3D-vector analysis) for a data 2.4 times larger than NEWAGE2021 results in total.
## 2 Detector
A gaseous time projection chamber, NEWAGE-0.3b", was used for this study. The detector overview is described in subsection 2.1. Energy calibration using alpha rays are discussed in subsection 2.2. Event selections already implemented in our previous analysis are summarized in subsection 2.3. An event selection newly-added for this work utilizing the track information for a better gamma-ray rejection is described in subsection 2.4. The reconstruction method of the 3D-vector tracks is explained in subsection 2.5 as the head-tail analysis. Finally, the detector performances on the efficiencies and the angular resolution of the nuclear recoil are shown in subsections 2.6 and 2.7, respectively.
### NEWAGE-0.3b"
NEWAGE-0.3b", refurbished in 2018 by replacing the readout device (micro pixel chamber, \(\mu\)-PIC) with a low alpha-emission rate one (LA\(\mu\)-PIC [5]), was used for this work. Figure 1 shows schematic drawings of the NEWAGE-0.3b" detector and its detection scheme. The detection volume was 31 \(\times\) 31\(\times\) 41 cm\({}^{3}\) in size and was filled with low-pressure gas of CF\({}_{4}\) at 76 Torr (0.1 atm) for this work. The LA\(\mu\)-PIC has a pixel structure of 768 \(\times\) 768 with a pitch of 400 \(\mu\)m. Amplified charge at each pixel is read through 768 anode (hereafter X-axis) and 768 cathode(hereafter Y-axis) strips. Signals read through the strips are processed by Amplifier-Shaper-Discriminator chips (SONY CXA3653Q [7]). The processed signals are
then divided into two. One is compared with a threshold voltage in the chips and the time-over-thresholds (TOTs) of 768 + 768 strips are recorded with a 100 MHz frequency clock. The other 768 cathode strips were grouped into four channels and their waveforms were recorded with a 100 MHz flash analog-to-digital converters (FADCs). A detected track is parameterized with its energy, length, elevation angle \(\theta_{\rm ele}\), azimuth angle \(\Phi_{\rm azi}\) (see Figure 1) and some other parameters defined in the following subsections.
### Energy calibration
The energy calibration was performed with alpha rays produced by \({}^{10}\)B(n, \(\alpha\))\({}^{7}\)Li reactions. A glass plate coated with a \({}^{10}\)B layer was set in the TPC volume as illustrated in Figure 1. Thermal neutrons were irradiated from the outside of the chamber, captured in the \({}^{10}\)B layer, and then produced continuous spectrum up to 1.5 MeV. The obtained spectrum is a sum of the thermal neutron capture events and elastic scattering events by fast neutrons. By comparing these spectra with the simulation results by Geant4 [9], the gas gain and the energy resolution were determined. Figure 2 shows one of the calibration results. The 1.5 MeV edge of the thermal neutrons was observed.
The detector gas contains rare gas radon isotopes, \({}^{220}\)Rn and \({}^{222}\)Rn, emitted from the detector materials as natural contaminations. The high-energy calibration was performed by the alpha rays from radon isotopes and their progenies. \({}^{220}\)Rn produces alpha rays with
Figure 1: Schematic drawings of the NEWAGE-0.3b” detector and its detection scheme. A recoil nucleus shown with red markers passes through the gas volume and ionizes the gas molecules (blue). The ionized electrons are drifted toward the readout plane by the electric field, amplified by the GEM [8], and further amplified by the LA\(\mu\)-PIC before being detected. The image on the left is a magnified view of the LA\(\mu\)-PIC with an electrode structure of a 400 \(\mu\)m pitch.
energies of 6.05 MeV, 6.29 MeV, 6.78 MeV, and 8.79 MeV, \({}^{222}\)Rn produces alpha rays with energies of 5.49 MeV, 6.00 MeV, and 7.69 MeV. Because the ratio of \({}^{220}\)Rn to \({}^{222}\)Rn were not known, the measured spectra were fit with the simulated spectrum of \({}^{220}\)Rn and \({}^{222}\)Rn separately and the difference was treated as the systematic error of the energy scale.
### Standard event selections
Several event selections had been established as standard event selections by NEWAGE2021 analysis. These selections aim to cut non-physical electronics noise events and electron track events mainly originating from ambient gamma-rays. The standard event selections are briefly explained here, while details can be found in Ref. [6].
Fiducial volume cut
A fiducial volume of 28 \(\times\) 24 \(\times\) 41 cm\({}^{3}\) was defined in the detection volume of 31 \(\times\) 31 \(\times\) 41 cm\({}^{3}\). Any events were required to be fully contained in the fiducial volume so as to discriminate the events from the walls of the TPC field cage and the \({}^{10}\)B glass plate.
Length-Energy cut
The amount of energy loss by a charged particle per a unit length depends on the particle type. Electron events were discriminated by setting a maximum track length for a given energy.
TOTsum/Energy cut
The energy deposition on each strip was recorded as TOT. A total TOTs of all strips were defined as TOTsum. Since the nuclear recoil events have larger TOTsum than those of the electron recoil events for a given energy, electron events were discriminated by setting a minimum TOTsum/energy value for a given energy. (See the left panel of Figure 3, for instance.)
Figure 2: Energy spectrum of alpha rays from a \({}^{10}\)B glass plate. The black and blue histograms are the measured data and the simulated results, respectively.
Roundness cut
"Roundness" was defined as the root-mean-square deviation of a track from the best-fit straight line. Nuclear recoil events with a short drift distance have small roundnesses because they are less affected by the gas diffusion. Background events in the gas region between the LA\(\mu\)-PIC and the GEM were discriminated by setting a minimum roundness value.
### TOTsum-Length cut
The detector was operated at a higher gas gain (typically 1800) than that of NEWAGE2021 (1200) aiming for a better detection efficiency of nuclear recoil events. One of the expected drawbacks of the high-gain operation was the increase of the background gamma-ray events. Figure 3 shows the TOTsum/Energy distributions as functions of the energy after the fiducial volume cut. The gas gains of the left and right panels are 1200 and 1800, respectively. It should be noted that each calibration run with the source had been conducted at a common live time of 0.18 days. It is therefore clearly seen that the detection efficiency of electron events (\({}^{137}\)Cs data) are significantly larger in a measurement at a high gas gain because the number of shown events are increased. It is also seen that the TOTsum/Energy of electron events in the high-gain data have a large component which excess the selection line of TOTsum/Energy selection shown with a red line. This result indicated that the standard event selections were not sufficient for the high-gain operation data.
A new cut, "TOTsum-Length cut", was implemented in order to improve the discrimination power against the gamma-ray events. Nuclear recoil events have large TOTsums and short track lengths. On the other hand, the electron recoil events have smaller TOTsums and longer tracks. Figure 4 shows the track length distributions as a function of TOTsum for the irradiation with a \({}^{252}\)Cf source and a \({}^{137}\)Cs source. Here the gas gain is 1800. Since our energy threshold is set to be 50 keV, the data in an energy range of 50-60 keV are selected. We confirmed a good separation of the electron (seen in both plots) and nuclear distributions (seen only in the \({}^{252}\)Cf plot) in this parameter space even for a high-gain operation data. In order to discriminate electron events, an empirical function written by
\[\mathrm{L}=(\mathrm{S}/\beta)^{\alpha}, \tag{1}\]
was introduced. Here \(\mathrm{L}\) is the track length, \(\mathrm{S}\) is the TOTsum, and \(\alpha\) and \(\beta\) are parameters for the cut definition. Here \(\alpha\) was fixed within a run while \(\beta\) was an energy-dependent parameter.
We first determined \(\alpha\) and \(\beta\) values in the 50-60 keV energy range for each period. The period is a set of data taken under a same detector condition and will be summarized in Section 3. The parameters were determined so that they would give the best rejection of gamma-ray events while retaining the selection efficiency of nuclear recoil events to be greater than 50%. Here, the selection efficiency for a specific selection is defined as the ratio of the remaining number of events to that before the selection. We then fixed \(\alpha\) and determined \(\beta\) for a given energy. Figure 5 shows the energy dependence of \(\beta\). The black and blue dots represent the data with a \({}^{252}\)Cf and a \({}^{137}\)Cs sources, respectively. The distribution of \(\beta\) values of the nuclear recoils events was fit with Gaussian in every 10 keV energy bin. The region between the mean and upper \(3\sigma\) of the Gaussian indicated with red lines in Figure 5 was set as the nuclear recoil region and the rest was rejected. Gamma-ray rejection powers with
and without this cut are shown in Figure 6. A gamma-ray rejection power of 8.8 \(\times\) 10\({}^{-7}\) was achieved, which is about two orders of magnitude better than that in NEWAGE2021.
Figure 4: Distributions of track length as a function of TOTsum in the energy range of 50–60 keV after the fiducial volume cut. The left (black gradient) is the data with a \({}^{252}\)Cf neutron source and the right (blue gradient) is the data with a \({}^{137}\)Cs gamma-ray source. The red line in the figure is L = (S/\(\beta\))\({}^{\alpha}\) (\(\alpha=2.3\) and \(\beta=250\)). Since the \({}^{252}\)Cf source emits not only neutrons but also gamma-rays, the distribution has two components.
Figure 3: TOTsum/Energy distributions as functions of energy (after the fiducial volume cut). The left and right panels show the distributions corresponding to the gas gain of 1200 and 1800, respectively. The black gradation distribution is obtained with a \({}^{252}\)Cf neutron source. The blue point distribution is obtained with a \({}^{137}\)Cs gamma-ray source. The red-dashed lines indicate the cut lines. Each calibration run with the source had been conducted at a common live time of 0.18 days.
### Head-tail analysis
Importance of the track sense recognition, or the head-tail determination, has been stressed for years [10, 11]. We started to use the head-tail determination for the direction-sensitive dark matter search analysis with a limited efficiency in Ref. [4]. An analysis update improved the efficiency and head-tail determinations for 3D tracks, or 3D-vector analysis, were used for this work. The first step in reconstructing the direction of a track is to obtain the relative arrival times of ionized electrons in the readout strips. These relative arrival times on X or Y strips are converted into relative Z positions taking account the drift velocity. The charge detected on the strip, or a hit, is thus assigned a (X, Z) or (Y, Z) hit-position. Angles of a track in the X-Z and Y-Z planes are known by fitting the hit-positions with straight lines. 3D-axial directions of the tracks in the detector coordinate system are determined from these two angles in the X-Z and Y-Z planes. These reconstructed tracks are not 3D-vector ones at this stage because the head-tail of the track is not determined yet.
The head-tail of a track can be determined by observing the asymmetry of the energy deposition along its trajectory. The fluorine-nuclear track with an energy of our interest (less than 400 keV) is known to deposit its energy large at the starting point and small around its end point. This phenomena can be observed as large TOTs at the starting point and small TOTs around its end point.
Figure 7 shows observed TOT distributions of an event along X and Y strips. This event was obtained with a \({}^{252}\)Cf source placed at (25 cm, 0 cm, 0 cm) so that we expect to observe fluorine nucleus tracks running from +X to -X directions. An asymmetry of the TOT distribution along the X-axis is seen while that along the Y-axis is more symmetric.
Figure 5: Energy dependence of \(\beta\) at \(\alpha\)=2.3 after the TOTsum cut. The black gradient is for the \({}^{252}\)Cf neutron source calibration data and the blue dots are for the \({}^{137}\)Cs gamma-ray source calibration data. The dashed red lines indicate the mean value and the 3\(\sigma\) cut line by Gaussian fit, respectively. The events between the cut lines are selected.
This asymmetry is quantified by parameters \(skewneesses\) defined as following equations,
\[skewness\ x=\frac{<TOT(x)\cdot(x-<x>)^{3}>}{<(TOT(x)\cdot(x-<x>)^{2})^{3/2}>}, \tag{2}\]
\[skewness\ y=\frac{<TOT(y)\cdot(y-<y>)^{3}>}{<(TOT(y)\cdot(y-<y>)^{2})^{3/2}>}. \tag{3}\]
Here \(TOT(x)\) is the TOT observed on strip \(x\), and \(<>\) represents the means value. The ability to determine the head-tail, called the head-tail power \(P_{\rm ht}\), is defined as
\[P_{\rm ht}=\frac{N_{\rm true}}{N}, \tag{4}\]
where \(N\) is the total number of events, and \(N_{\rm true}\) is the number of events that were correctly determined by the skewness. Determinations of \(N_{true}\) are discussed in the followings.
In our previous work, we selected events with small \(\theta_{\rm ele}\) and large skewness to increase the head-tail power at a cost of lowering the selection efficiency to less than one half[4]. The analysis was updated so that the the selection efficiency was recovered while the \(P_{\rm ht}\) was retained; the use of \(skewness\ x\) and \(skewness\ y\) were determined according to the azimuth direction of the tracks. For the tracks along the X-coordinate direction (\(0\ ^{\circ}\leq|\phi_{\rm azi}|<45\ ^{\circ}\)), \(skewness\ x\) was used, and \(skewness\ y\) was used for the tracks with \(45^{\circ}\leq|\phi_{\rm azi}|<90^{\circ}\)). In addition, number of hit strips were increased by the operation at a high gas gains.
The raw values of skewness were found to be correlated with \(\theta_{\rm ele}\) as shown in the upper panels of Figure 8. The skewness were corrected according to \(\sin\theta_{\rm ele}\) with cubic functions and the corrected skewness values shown in the lower panels of Figure 8 were used for further discussions.
Figure 6: Gamma-ray rejection powers. The magenda dots are the result using the TOTsum-Length cut and green is the one without the TOTsum-Length cut. The TOTsum-Length cut introduced in this study improved the results by two orders of magnitude in the energy range of 50–70 keV.
Figures. 9 and 10 show skewness distributions of a \({}^{252}\)Cf source data after all cuts for three energy ranges. Neutron irradiation data from \(+\)X and \(-\)X directions are shown with red and blue histograms in the upper panels of Figure 9. They show different \(skewness\)\(x\) distributions as expected while the \(skewness\)\(x\) distributions for the \(\pm\)Y direction irradiation data (lower panels of Figure 9) did not show significant difference. Same trend was confirmed for \(skewness\)\(y\) as shown in Figure 10. \(N_{true}\) was defined by discriminating at \(skewness=0\) and \(P_{\rm ht}\) values were calculated. Averaged \(P_{\rm ht}\) values for 50-100 keV, 100-200 keV, and 200-400 keV energy ranges were (\(52.4\pm 1.1\))%, (\(52.9\pm 1.4\))%, and (\(53.6\pm 2.0\))%, respectively.
Figure 8: Correlation between skewnesses and \(\sin\theta_{\rm ele}\). The distributios before and after the correction are shown in the upper and lower figures, respectively.
Figure 7: TOT values of an event along each X (left panel) and Y (right panel) strip.
Details of \(P_{\rm ht}\) are summarized in Table 1. The error of \(P_{\rm ht}\) in each irradiation direction is the standard deviation of head-tail power determined for each period. The overall head-tail power error is the standard deviation of the \(P_{\rm ht}\)s in each irradiation direction. Head-tail powers equivalent to those of Ref. [4] were achieved without any specific selection for the head-tail determination.
### Efficiencies
There are two types of efficiencies regarding this study; the detection-selection and the directional efficiencies. The former, or the "absolute" efficiency, determines the number of detected-and-selected events while the latter, or the "relative" one, determines the directional distribution of these events without changing the total number. A data-set of recoil events isotropic in terms of the position and the direction was used to measure the efficiencies.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Energy range & \(P_{\rm ht}\) (+x) (\%) & \(P_{\rm ht}\) (-x) (\%) & \(P_{\rm ht}\) (+y) (\%) & \(P_{\rm ht}\) (-y) (\%) & \(P_{\rm ht}\) (average) (\%) \\ \hline
50–100 keV & 52.2\(\pm\)0.9 & 53.3 \(\pm\)1.2 & 52.2 \(\pm\)1.1 & 51.9 \(\pm\)0.9 & 52.4 \(\pm\)1.1 \\
100–200 keV & 52.6 \(\pm\)1.4 & 53.2 \(\pm\)1.2 & 53.5 \(\pm\)1.2 & 52.5 \(\pm\)1.0 & 52.9 \(\pm\)1.2 \\
200–400 keV & 53.3 \(\pm\)1.6 & 52.4 \(\pm\)1.0 & 54.9 \(\pm\)2.8 & 53.8 \(\pm\)1.6 & 53.6 \(\pm\)2.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Head-tail powers in unit of % for each direction and energy range.
Figure 9: Distribution of \(skewness\)\(x\) at each energy. Events are normalized to unity.
The isotropic data-set was made by summing-up the time-normalized data obtained by irradiating the detector with neutrons from a \({}^{252}\)Cf source placed at six positions in \(\pm X\), \(\pm Y\), and \(\pm Z\) directions.
The detection-selection efficiency is defined as the number of nuclear recoil events after all selections divided by the expected number of nuclear recoils in the fiducial volume. Here, the expected number of nuclear recoils is estimated by the Geant4 simulation. Results are shown in Figure 11. It should be noted that the increase of the detection efficiency seen below 100 keV is due to the contamination of the gamma-ray events and is not real. The contamination is removed with the selections to a negligible level. The detection efficiency is about 60% above 200 keV. The main reason of not reaching at 100% is that the gas gain being not high enough to trigger all the nuclear recoil events. The detection-selection efficiency above 200 keV is half of the detection efficiency because of the mean value for the TOTsum-Length selection. A 20%-reduction of the detection-selection efficiency from NEWAGE2021 should also attribute to the additional cut, which still gives a large advantage in the signal-to-noise ratio if we consider the gain on the rejection shown in Figure 6. The detection-selection efficiency shown in Figure 11, or the "absolute" efficiency, can be used to calculate the expected number of events for a given WIMP or background model. It can also be used to unfold the measured energy spectrum and obtain an "effective" spectrum for the comparison of the background rates.
Figure 10: Distribution of \(skewnes\)\(y\) at each energy. Events are normalized to unity.
The directional efficiency is expressed as a sky map, or the relative response in the elevation (\(\theta_{\rm ele}\)) - azimuth (\(\phi_{\rm azi}\)) plane, for isotropic recoils. The possible non-homogeneity of the directional efficiency mainly originates from the reconstruction algorithm. The 3D recoil direction, including the sense (head-tail) of the track, is reconstructed from the TOT-distributions of X and Y strips. Figure 12 shows the obtained \(\theta_{\rm ele}\)-\(\phi_{\rm azi}\) distributions of an isotropic recoil calibration data. Since this map is to know the "relative" or reconstruction efficiency of the directions, the color map is a relative one to be used with the total number of events being conserved. It is seen that the tracks tend to be reconstructed to align with the strips, \(i.e.\)\(\phi_{\rm azi}=0^{\circ},\pm 90^{\circ},180^{\circ}\) for the tracks parallel to the detection plane, or the tracks with \(\theta_{\rm ele}\sim 0\). The directional efficiencies shown in Figure 12, or the relative efficiency, can be used to make an expected recoil distribution for a given number of expected events calculated by the detection-selection efficiency.
### Angular resolution
The angular resolution was evaluated by comparing the distribution of the recoil angle \(\gamma\) of neutron irradiation data with the simulated ones smeared by various angular resolutions. Here \(\gamma\) is the angle between the incoming neutron direction and the reconstructed nuclear-recoil direction. Since the head-tails of the tracks are determined and considered in the analysis independent from the effettc of the angular resolution, the angular resolution was evaluated with the distribution of absolute value of \(\cos\gamma\). \(\chi^{2}_{\rm ang}\) value defined by Eq. (5) was calculated for a given angular resolution \(\sigma_{\rm ang}\).
\[\chi^{2}_{\rm ang}=\sum_{i}^{N_{\rm bin}}\frac{(N_{i}^{\rm data}-N_{i}^{\rm MC }(\sigma_{\rm ang}))^{2}}{N_{i}^{\rm data}}, \tag{5}\]
Figure 11: Nuclear recoil efficiencies as a function of the energy. The cyan and the blue histograms are the detection and detection-selection efficiencies of nuclear recoil of this study, respectively. The gray histograms is the result of NEWAGE2021 [6].
and \(N_{i}^{\rm MC}\) is the number of events in the \(i\)-th bin of the histogram of the \(|\cos\gamma|\) distribution simulated by Geant4 smeared with the angular resolution, and \(N_{\rm bin}\) is the number of bins in that histogram. The angular resolution at the minimum \(\chi^{2}_{\rm ang}\) value was adopted. The angular resolution was \(58.1^{+5.8}_{-2.8}\) degree in the energy range of 50-100 keV.
## 3 Experiment
A direction-sensitive dark matter search was performed in Laboratory B, Kamioka Observatory (36.25'N, 137.18'E), located 2700 m water equivalent underground. The measurement was carried out from December 12th, 2017 to March 26th, 2020, subdivided into eight periods. The period was renewed when the detector was evacuated and filled with new CF\({}_{4}\) gas. The period information is summarized in Table 2. The Z-axis of the NEWAGE-0.3b" detector was aligned to the direction of S30\({}^{\circ}\)E. The target gas was CF\({}_{4}\) at 76 Torr (0.1 atm) with a mass of 10 g in an effective volume of 28 \(\times\) 24 \(\times\) 41 cm\({}^{3}\) (27.6 L). The total live time is 318 days corresponding to an exposure of 3.18 kg\(\cdot\)days.
Various environmental parameters were monitored during the measurement to confirm the stability of the detector. Figure 13 shows the time dependences of the integrated exposure, the gas gain and the energy resolution. The energy calibrations and the efficiency measurements were performed approximately every two weeks. The energy scale was corrected by the monitored gas gain. The mean value of the energy resolution was 12.4% with a standard deviation of 3.0% during the measurement. No variation of the energy resolution beyond errors was observed.
The event selections described in subsection 2.3 and 2.4 were applied to the data. Figure 14 shows the energy spectrum after each event selection. The statistic errors are shown for the spectrum after all selections. For a comparison with NEWAGE2021 result, an energy spectrum divided by the detection-selection efficiency is shown in Figure 15 as "This work"
Figure 12: Directional efficiency in the detector coordinate system.
RUN20-25. The rate of this work is comparable to the that of NEWAGE2021. This is reasonable because there is no change in terms of the hardware-level radioactive background. We have achieved the same count rate as that of NEWAGE2021 The energy spectrum of this work has smaller statistical errors due to the increase of the statistics by a factor of 2.4.
Figure 16 shows the directions of measured nuclear recoil events in the detector coordinate (a) and the galactic coordinate (b), respectively. The \(\cos\theta_{\rm CYGNUS}\) was calculated for each event in Figure 16 (b) and distributions are shown in Figure 17. The \(\cos\theta_{\rm CYGNUS}\) is binned by four and the energy is binned every 10 keV.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Period & Date & Gas gain & Live time (days) & Exposure (kg\(\cdot\)days) \\ \hline RUN20-1 & 2017/12/12 – 2018/01/18 & 2000 & 13.5 & 0.135 \\ RUN20-2 & 2018/01/23 – 2018/02/23 & 1750 & 20.0 & 0.200 \\ RUN21 & 2018/02/28 – 2018/06/01 & 1550 & 58.6 & 0.586 \\ RUN22-1 & 2018/06/06 – 2018/08/24 & 1110 & 52.5 & 0.525 \\ RUN22-2 & 2018/09/20 – 2018/11/29 & 1200 & 60.5 & 0.605 \\ RUN23 & 2018/12/05 – 2019/04/12 & 1750 & 45.9 & 0.459 \\ RUN24 & 2019/04/26 – 2019/06/27 & 1800 & 49.4 & 0.494 \\ RUN25 & 2020/03/04 – 2020/03/26 & 1950 & 17.6 & 0.176 \\ \hline Total & 2017/12/12 – 2020/03/26 & & 318.0 & 3.180 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the measurement periods with gas gains (at the start of each RUN), live times, and exposures. RUN22-1 and RUN22-2 are the data analyzed in NEWAGE2021 [6].
Figure 13: Cumulative exposure, gas gains, and energy resolutions during the measurement.
## 4 Results
A directional WIMP search analysis was performed with an assumption of the standard halo model. Here the Maxwell distribution with a velocity dispersion of 220 km/sec, and an escape velocity of 650 km/sec were assumed [12]. The local density of 0.3 GeV/c\({}^{2}\)/cm\({}^{3}\) was assumed. The spin parameter \(\lambda^{2}J(J+1)\) for the \({}^{19}\)F of 0.647 was used in this analysis [13]. The spectra of \(\cos\theta_{\rm CYGNUS}\) for each energy bin as shown in Figure 17 were simultaneously compared with sum distributions of WIMP signal and isotropic background using the binned likelihood ratio method.
A statistic value \(\chi^{2}\) was defined as Eq. (6).
\[\chi^{2}=2\sum_{i=0}^{n}\sum_{j=0}^{m}\biggl{[}(N_{i,j}^{\rm MC}-N_{i,j}^{\rm data })+N_{i,j}^{\rm data}{\rm ln}\biggl{(}\frac{N_{i,j}^{\rm data}}{N_{i,j}^{\rm MC }}\biggr{)}\biggr{]}+\alpha_{\rm E}^{2}+\alpha_{\rm BG}^{2}, \tag{6}\]
where,
\[N_{i,j}^{\rm MC} = N_{i,j}^{\rm DM}(\sigma_{\chi-p},m_{\chi},\xi_{\rm E})+N_{i,j}^{ \rm BG}(\xi_{\rm E},\xi_{\rm BG}), \tag{7}\] \[\alpha_{\rm E} = \frac{\xi_{\rm E}}{\sigma_{\rm E}},\] (8) \[\alpha_{\rm BG} = \frac{\xi_{\rm BG}}{\sigma_{\rm BG}}. \tag{9}\]
Figure 14: Energy spectra after each selection step. The grey, orange, blue, magenta, and green lines are the energy spectra after no cut, Fiducial volume cut, Length-Energy cut, TOTsum/Energy cut, and TOTsum-Length cut, respectively. The black dots with error bars are the final data sample after the Roundness cut. The fill stacked green and red spectra are the expected gamma-ray and radon background ones estimated by the simulation. The gray shaded area is a 1\(\sigma\) error in the background.
Subscripts \(i\) and \(j\) are the bin-number of the \(\cos\theta_{\rm CYGNUS}\) and the energy, respectively. The expected and measured number of events in bin \(i,j\) are described as \(N_{i,j}^{\rm MC}\) and \(N_{i,j}^{\rm data}\), respectively. \(N_{i,j}^{\rm MC}\) is written as Eq. 7, where \(N_{i,j}^{\rm DM}\) is the expected number of the WIMP-nucleus scatterings, and \(N_{i,j}^{\rm BG}\) is the expected number of background events. \(\sigma_{\chi-p}\) is the WIMP-proton cross section. \(N_{i,j}^{\rm BG}\) was estimated using the Geant4 simulation based on the flux measurements of the ambient gamma-rays, the ambient neutrons, the alpha rays from the radon, and the alpha rays from the LA\(\mu\)-PIC surface. The dominant background components in the energy range of 50-100 keV were the ambient gamma-rays and the alpha rays from the radon (see Ref. [6] for details). Expected background spectra are shown in Figure 14 for reference. The largest systematic uncertainty of the expected rate arise from the energy scale uncertainty. This uncertainty was estimated from the discrepancy of the energy calibration between \({}^{10}\)B, \({}^{220}\)Rn, and \({}^{222}\)Rn measurements discussed in subsection 2.2. The uncertainty was evaluated in each run. The weighted average of the energy scale uncertainty was +13.2% and -2.3%. The uncertainties of the background rate are the measurement errors of radioactivities for the ambient gamma-rays and the radons. Here the ambient gamma-ray flux was measured with a CsI scintillator[14] and the radon background was estimated with the high energy spectrum of this work. Nuisance parameters \(\alpha_{\rm E}\) and \(\alpha_{\rm BG}\) considering the systematic uncertainty from the energy scale \(\sigma_{\rm E}\) and the background estimation \(\sigma_{\rm BG}\) are defined as Equations (8) and (9). Possible shifts of the energy scale and the number of expected backgrounds are expressed as \(\xi_{\rm E}\) and \(\xi_{\rm BG}\).
\(\chi^{2}\) was minimized for a given WIMP mass with \(\sigma_{\chi-p}\), pull-terms \(\alpha_{\rm E}\) and \(\alpha_{\rm BG}\) as fitting parameters. We first explain the procedure for the WIMP mass of 150 GeV/\(c^{2}\) case. A minimum \(\chi^{2}\)/NDF of 20.4/17 was obtained for \(\sigma_{\chi-p}\)=14.6 pb. The left panel in Figure 17
Figure 15: Energy spectra divided by the detection-selection efficiency. Red histogram is the energy spectrum of this work. Black histogram is the energy spectrum of NEWAGE2021.
shows the \(\cos\theta_{\rm CYGNUS}\) distributions of the best-fit case. A chi-square distribution was created from a dummy sample of isotropic background model using Monte Carlo simulations. This test gave the p-value of 60% for the measured result. Observed distribution was thus found to be consistent with the background-only model. Since no significant WIMP excess was observed, the background-only model was not able to reproduce the background-only model.
Figure 16: (a) Nuclear recoil directions of final data sample in the detector coordinate. The X-axis and Y-axis are \(\phi_{\rm azi}\) and \(\theta_{\rm ele}\) in the detector coordinate system, respectively. (b) Nuclear recoil directions of final data sample in the galactic coordinate. The X-axis and Y-axis are the longitude and latitude of the galactic coordinate, respectively. The direction of the galactic center is (0,0) and that of Cygnus is (-90,0). The orange, red, pink, purple, and blue points indicate the energy ranges of 50–60 keV, 60–70 keV, 70–80 keV, 80–90 keV, and 90–100 keV, respectively. The color contours in the background are the directional efficiencies in each coordinate system.
obtained, an upper limit at 90% confidence level (C.L.) was set for the spin-dependent WIMP-proton scattering cross section. The likelihood ratio \(\mathcal{L}\) is defined as,
\[\mathcal{L}=\exp\biggl{(}-\frac{\chi^{2}(\sigma_{\chi-p})-\chi_{\rm min}^{2}}{2} \biggr{)}. \tag{10}\]
Here, \(\chi^{2}(\sigma_{\chi-p})\) and \(\chi_{\rm min}^{2}\) are the value of \(\chi^{2}\) and the minimum value of \(\chi^{2}\) calculated by varying \(\sigma_{\chi-p}\), respectively. The 90% C.L. upper limit of the WIMP-proton cross section, \(\sigma_{\chi-p}^{\rm limit}\), is determined as follows,
\[\frac{\int_{0}^{\sigma_{\chi-p}^{\rm limit}}\mathcal{L}d\sigma_{\chi-p}}{\int_{ 0}^{\infty}\mathcal{L}d\sigma_{\chi-p}}=0.9. \tag{11}\]
Using the above equation, the 90% C.L. upper limit of the spin-dependent cross section was found to be 25.7 pb for a WIMP mass of 150 GeV/\(c^{2}\). The \(\cos\theta_{\rm CYGNUS}\) distributions with the upper limit of 90% C.L. are shown in the right panels of Figure 17.
Upper limits of the cross sections were obtained for other WIMP masses by the same procedure. Figure 18 shows the upper limits at 90% C.L. of the spin-dependent WIMP-proton cross sections as a function of the WIMP mass. Compared to the NEWAGE2020 results, which was analyzed by the 3D-vector method using the standard \(\mu\)-PIC, this upper limit updates by about one order of magnitude. This is due to the reduction of surface background events with the LA\(\mu\)-PIC. Furthermore, compared to the NEWAGE2021 result, the statistics of the 2.4 factor and an updated analysis including the background estimation, improved the limits by a factor of about two for WIMPs heavier than 100 GeV/\(c^{2}\).
## 5 Discussions
A new limit by a directional dark matter search with a 3D-vector analysis was obtained by this work. Although we started to search the region of one of the interpretations of the DAMA/LIBRA's annual modulation signal [18], a significant improvement of the sensitivity is needed for the search of the region of more interest. The improvements can be realized mainly in three aspects: the detection-selection efficiency, the energy threshold, and the backgrounds.
The detection-selection efficiency at 50-60 keV is 12.5%, which indicates the statistics can be increased by a factor of eight at most for a same exposure by an improvement of the detection-selection efficiency. A measurement with a higher gas gain will increase the trigger efficiency. A better gamma-ray rejection analysis, \(e.g.\) introducing machine-learning methods, would compensate the expected increase of the gamma-ray background rate and allow us to operate the detector at a higher gas gain. Shielding the detector is an independent hardware approach to reduce the gamma-ray background events.
The current energy threshold (50 keV) is mainly limited by the track length of the recoil events. Typical length of the track of fluorine nuclear recoil below 50 keV in CF\({}_{4}\) gas at 76 Torr (0.1 atm) is less than 1 mm. This is comparable to the strip pitch of 0.4 mm and one can deduce that the angular resolution and gamma-ray rejection both get worth below this point. One solution is to operate the CF\({}_{4}\) gas at a lower pressure than 76 Torr to allow the nuclei and electrons run longer and improve the angular resolution and gamma-ray rejection below 50 keV.
The remaining background sources are the ambient gamma-rays and internal radons as shown in Figure 14. We have already discussed the gamma-ray reduction above so we discuss
the reduction of radon background here. The LA\(\mu\)-PIC, significantly reduced the surface alpha rays in NEWAGE2021, still contains some material which emanates the radon gas [5]. A new version of the \(\mu\)-PIC series, LBG\(\mu\)-PIC currently being developed. The material used for the LBG\(\mu\)-PIC is carefully selected so that the total radon emanation is less than 1/10 of the LA\(\mu\)-PIC.
With the improvements described above, we aim to explore the region claimed by DAMA/LIBRA [18] and to improve the sensitivity to reach limits by other direct search experiments.
## 6 Conclusion
A direction-sensitive direct dark matter search was carried out at Kamioka Observatory with a total live time of 318.0 days corresponding to an exposure of 3.18 kg-days. A new gamma-ray rejection cut, which improved the gamma-ray rejection power to 8.8 \(\times\) 10\({}^{-7}\) while maintaining the detection-selection efficiency of the nuclear recoil at about 20% was introduced. This enabled us to use the high gas gain data, which was not used in the previous study due to the deterioration of the gamma-ray rejection power. The exposure was increased by a factor of 2.4. A 3D-vector reconstruction with a head-tail determination power of 52.4% in the energy range of 50-100 keV was also used for this study. As a result of the directional WIMP-search analysis, an upper limit of the spin-dependent WIMP-proton cross section of 25.7 pb for a WIMP mass of 150 GeV/\(c^{2}\) was derived. This limit marked the best direction-sensitive limit.
Figure 17: \(\cos\theta_{\rm CYGNUS}\) distributions (identical black histograms in both panels) for the final date sample in the 50–100 keV energy ranges. The best fit and 90% upper limit distributions for the WIMP mass of 150 GeV/\(c^{2}\) are shown with color histograms in the left and right panels, respectively.
## Acknowledgment
This work was partially supported by KAKENHI Grant-in-Aids (19H05806, 19684005, 23684014, 26104005, and 21H04471).
|
2307.00691 | From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and
Privacy | Undoubtedly, the evolution of Generative AI (GenAI) models has been the
highlight of digital transformation in the year 2022. As the different GenAI
models like ChatGPT and Google Bard continue to foster their complexity and
capability, it's critical to understand its consequences from a cybersecurity
perspective. Several instances recently have demonstrated the use of GenAI
tools in both the defensive and offensive side of cybersecurity, and focusing
on the social, ethical and privacy implications this technology possesses. This
research paper highlights the limitations, challenges, potential risks, and
opportunities of GenAI in the domain of cybersecurity and privacy. The work
presents the vulnerabilities of ChatGPT, which can be exploited by malicious
users to exfiltrate malicious information bypassing the ethical constraints on
the model. This paper demonstrates successful example attacks like Jailbreaks,
reverse psychology, and prompt injection attacks on the ChatGPT. The paper also
investigates how cyber offenders can use the GenAI tools in developing cyber
attacks, and explore the scenarios where ChatGPT can be used by adversaries to
create social engineering attacks, phishing attacks, automated hacking, attack
payload generation, malware creation, and polymorphic malware. This paper then
examines defense techniques and uses GenAI tools to improve security measures,
including cyber defense automation, reporting, threat intelligence, secure code
generation and detection, attack identification, developing ethical guidelines,
incidence response plans, and malware detection. We will also discuss the
social, legal, and ethical implications of ChatGPT. In conclusion, the paper
highlights open challenges and future directions to make this GenAI secure,
safe, trustworthy, and ethical as the community understands its cybersecurity
impacts. | Maanak Gupta, CharanKumar Akiri, Kshitiz Aryal, Eli Parker, Lopamudra Praharaj | 2023-07-03T00:36:57Z | http://arxiv.org/abs/2307.00691v1 | # From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy
###### Abstract
Undoubtedly, the evolution of Generative AI (GenAI) models has been the highlight of digital transformation in the year 2022. As the different GenAI models like ChatGPT and Google Bard continue to foster their complexity and capability, its critical to understand its consequences from a cybersecurity perspective. Several instances recently have demonstrated the use of GenAI tools in both the defensive and offensive side of cybersecurity, and focusing on the social, ethical and privacy implications this technology possesses. This research paper highlights the limitations, challenges, potential risks, and opportunities of GenAI in the domain of cybersecurity and privacy. The work presents the vulnerabilities of ChatGPT, which can be exploited by malicious users to exlittate malicious information bypassing the ethical constraints on the model. This paper demonstrates successful example attacks like salitebreaks, reverse psychology, and prompt injection attacks on the ChatGPT. The paper also investigates how cyber offenders can use the GenAI tools in developing cyber attacks, and explore the scenarios where ChatGPT can be used by adversaries to create social engineering attacks, phishing attacks, automated hacking, attack payload generation, malware creation, and polymorphic malware. This paper then examines defense techniques and uses GenAI tools to improve security measures, including cyber defense automation, reporting, threat intelligence, secure code generation and detection, attack identification, developing ethical guidelines, incidence response plans, and malware detection. We will also discuss the social, legal, and ethical implications of ChatGPT. In conclusion, the paper highlights open challenges and future directions to make this GenAI secure, safe, trustworthy, and ethical as the community understands its cybersecurity impacts.
Generative AI, GenAI and Cybersecurity, ChatGPT, Google Bard, Cyber Offense, Cyber Defense, Ethical GenAI, Privacy.
## 1 Introduction
The evolution of Artificial Intelligence (AI) and Machine Learning (ML) has led the digital transformation in the last decade. AI and ML have achieved significant breakthroughs starting from supervised learning and rapidly advancing with the development of unsupervised, semi-supervised, reinforcement, and deep learning. The latest frontier of AI technology has arrived as Generative AI [1]. Generative AI models are developed using deep neural networks to learn the pattern and structure of big training corpus to generate similar new content [2]. Generative AI (GenAI) technology can generate different forms of content like text, images, sound, animation, source code, and other forms of data. The launch of ChatGPT [3] (Generative Pre-trained Transformer), a powerful new generative AI tool by OpenAI in November 2022, has disrupted the entire community of AI/ML technology [4]. ChatGPT has demonstrated the power of generative AI to reach the general public, revolutionizing how people perceive AI/ML. At this time, the tech industry is in a race to develop the most sophisticated Large Language Models (LLMs) that can create a human-like conversation, the result of which is Microsoft's GPT model [5], Google's Bard [6], and Meta's LLaMa [7]. GenAI has become a common tool on the internet within the past year. With ChatGPT reaching 100 million users within two months of release, suggesting that people who have access to the internet have either used GenAI or know someone who has [8]. Figure 1 demonstrates the working of an AI-powered chatbot where a user initiates requests, and after analysis using Natural Language Processing (NLP), is given a real-time response by the chatbot. This response is analyzed again to provide a better user experience in the proceeding conversation.
### _Evolution of GenAI and ChatGPT_
The history of generative models dates back to the 1950s when Hidden Markov Models (HMMs) and Gaussian Mix
Fig. 1: How AI Chatbots work [9]?
ture Models (GMMs) were developed. The significant leap in the performance of these generative models was achieved only after the advent of deep learning [10]. One of the earliest sequence generation methods was N-gram language modeling, where the best sequence is generated based on the learned word distribution [11]. The introduction of Generative Adversarial Network(GAN) [1] significantly enhanced the generative power from these models. The latest technology that has been the backbone of much generative technology is the transformer architecture [12], which has been applied to LLMs like BERT and GPT. GenAI has evolved in numerous domains like image, speech, text, etc. However, we will only be discussing text-based AI chatbots and ChatGPT in particular relevant to this work. Since ChatGPT is powered by GPT-3 language model, we will briefly discuss the evolution of the OpenAI's [13] GPT models over time. Figure 2 shows how the GPT models evolved to their sophisticated latest version.
**GPT-1:** GPT-1 was released in 2018. Initially, GPT-1 was trained with the Common Crawl dataset, made up of web pages, and the BookCorpus dataset, which contained over 11,000 different books. This was the simplest model which was able to respond very well and understand language conventions fluently. However, the model was prone to generating repetitive text and would not retain information in the conversation for long-term, as well as not being able to respond to longer prompts. This meant that GPT-1 would not generate a natural flow of conversation [14].
**GPT-2:** GPT-2 was trained on Common Crawl just like GPT-1 but combined that with WebText, which was a collection of Reddit articles. GPT-2 is initially better than GPT-1 as it can generate clear and realistic, human-like sequences of text in its responses. However, it still failed to process longer lengths of text, just like GPT-1 [14]. GPT-2 brought wonders to the internet, such as OpenAI's MuseNet, which is a tool that can generate musical compositions, predicting the next token in a music sequence. Similar to this, OpenAI also developed JukeBox, which is a neural network that generates music.
**GPT-3:** GPT-3 was trained with multiple sources: Common Crawl, BookCorpus, WebText, Wikipedia articles, and more. GPT-3 is able to respond coherently, generate code, and even make art. GPT-3 is able to respond well to questions overall. The wonders that came with GPT-3 were image creation from text, connecting text and images, and ChatGPT itself, releasing in November 2022 [14].
**GPT-4:** GPT-4 [15] is the current model of GPT (as of June 2023) which has been trained with a large corpus of text. This model has an increased word limit and is multimodal, as it can take images as input on top of text. GPT-4 took the Bar Exam in March 2023, and scored a passing grade of 75 percent, which hits the 90th percentile of test-takers, which is higher than the human average [16]. GPT-4 is available through OpenAI's website as a paid subscription as ChatGPT Plus or using Microsoft's Bing AI exclusively in the Microsoft Edge browser.
### _Impact of GenAI in Cybersecurity and Privacy_
The generalization power of AI has been successful in replacing the traditional rule-based approaches with more intelligent technology [17]. However, the evolving digital landscape is not only upgrading technology but also elevating the sophistication of cyber threat actors. Traditionally, cyberspace faced relatively unsophisticated intrusion attempts but in very high volume. However, the introduction of AI-aided attacks by cyber offenders has begun an entirely new era, unleashing known and unknown transformations in cyberattack vectors [17, 18]. AI/ML has upgraded the effectiveness of cyber attacks making cyber offenders more powerful than ever. Evidently, with several recent instances getting noticed, GenAI has gained great interest from the cybersecurity community as well in both cyber defense and offense.
The evolving GenAI tools have been a double-edge sword in cybersecurity, benefiting both the defenders and the attackers. The GenAI tools like ChatGPT can be used by cyber defenders to safeguard the system from malicious intruders. These tools leverage the information from LLMs trained on the massive amount of cyber threat intelligence data that includes vulnerabilities, attack patterns, and indications of attack. Cyber defenders can use this large sum of information to enhance their threat intelligence capability by extracting insights and identifying emerging threats [19]. The GenAI tools can also be used to analyze the large volume of log files, system output, or network traffic data in case of cyber incidence. This allows defenders to speed up and automate the incident response process. GenAI driven models are also helpful in creating a security-aware human behavior by training the people for growing sophisticated attacks. GenAI tools can also aid in secured coding practices, both by generating the secure codes and producing test cases to confirm the security of written code. Additionally, LLM models are also helpful to develop better ethical guidelines to strengthen the cyber defense within a system.
Fig. 2: Different Versions and Evolution Of OpenAI’s GPT.
On the other side, the use of GenAI against cybersecurity and its risks of misuse can not be undermined. Cyber offenders can use GenAI to perform cyber attacks by either directly extracting the information or circumventing OpenAI's ethical policies. Attackers use the generative power of GenAI tools to create a convincing social engineering attack, phishing attack, attack payload, and different kinds of malicious code snippets that can be compiled into an executable malware file [20, 21]. Though the ethical policy of OpenAI [22] restricts LLMs, like ChatGPT, to provide malicious information to attackers directly, there are ways to bypass the restrictions imposed on these models using jailbreaking, reverse psychology and other techniques, as discussed later in this paper. In addition, the GenAI tools further assist cyber attackers due to a lack of context, unknown biases, security vulnerabilities, and over-reliance on these transformative technologies.
Clearly, as the common public is getting access to the power of GenAI tools, analyzing the implications of GenAI models from a cybersecurity perspective is essential. Further, the sophistication and ease of access to ChatGPT makes it our primary tool in this paper to understand and analyze GenAI impacts on cybersecurity. There are some online blogs discussing the benefits and threats of GenAI [17, 21, 23], but from our knowledge, there is not any formal scientific writing that reflects a holistic view of the impact of GenAI on cybersecurity. We believe that this work will contribute to the growing knowledge of GenAI from a cybersecurity perspective, helping the stakeholders better understand the risk, develop an effective defense, and support a secured digital environment. Figure 3 illustrates the impacts of GenAI and ChatGPT in cybersecurity and privacy, and provides a roadmap for our research.
This paper has the following **key contributions**:
* It provides an overview of the evolution of GenAI, discuss its landscape in cybersecurity, and highlight limitations introduced by GenAI technology.
* It discusses the vulnerabilities in the ChatGPT model itself that malicious entities can exploit to disrupt the privacy as well as ethical boundaries of the model.
* It demonstrates the attacks on the ChatGPT with the GPT-3.5 model and its applications to cyber offenders.
* It presents the use of GenAI and ChatGPT for cyber defense and demonstrate defense automation, threat intelligence and other related approaches.
* It highlights aspects of ChatGPT, and its social, legal, and ethical implications, including privacy viola
Fig. 3: A roadmap of GenAI and ChatGPT in Cybersecurity and Privacy
tions.
* It compares the security features of the two contemporary state-of-the-art GenAI systems including ChatGPT and Google's Bard.
* It provides the open challenges and future directions for enhancing cybersecurity as the GenAI technology evolves.
The remainder of the paper is organized as follows. Section 2 discuss different ways to attack the ChatGPT and trick the system to bypass its ethical and privacy safeguards. Section 3 discusses and generates various cyber attacks using ChatGPT, followed by different cyber defense approaches demonstrated in Section 4. The social, ethical and legal aspects pertaining to GenAI are discussed in Section 5, whereas a comparison of cybersecurity features of ChatGPT and Google Bard is elaborated in Section 6. Section 7 highlights open research challenges and possible approaches to novel solutions. Finally, Section 8 draws conclusion to this research paper.
## 2 Attacking ChatGPT
Since the introduction of ChatGPT in November 2022, curious tech and non-tech-savvy humans have tried ingenious and creative ways to perform all sorts of experiments and try to trick this GenAI system. In most cases, the input prompts from the user have been utilized to bypass the restrictions and limitations of ChatGPT, and keep it from doing anything illegal, unethical, immoral, or potentially harmful. In this section, we will cover some of these commonly used techniques, and elaborate their use.
### _Jailbreaks on ChatGPT_
The concept of "jailbreaking" originated in the realm of technology, where it referred to bypassing restrictions on electronic devices to gain greater control over software and hardware. Interestingly, this concept can also be applied to large language models like ChatGPT. Through specific methods, users can "jailbreak" ChatGPT to command it in ways beyond the original intent of its developers. ChatGPT outputs are bounded by OpenAI's internal governance and ethics policies [24]. However, these restrictions are taken off during jailbreaking, making ChatGPT show the results that are restricted by OpenAI policy. The process of jailbreaking is as simple as providing specific input prompts into the chat interface. Below are three common methods utilized by users to jailbreak ChatGPT.
#### 2.1.1 Do Anything Now (DAN) Method
The first method, the 'Do Anything Now' (DAN) method, derives its name from the emphatic, no-nonsense approach it employs. Here, you're not asking ChatGPT to do something; you're commanding it. The premise is simple: treat the AI model like a willful entity that must be coaxed, albeit firmly, into compliance. The input prompt to carry out the DAN jailbreak is shown in Figure 4. DAN can be considered a master prompt to bypass ChatGPT's safeguards, allowing it to generate a response for any input prompts. It demonstrates the example where a DAN prompt is injected before providing any user prompt.
Using this method, you attempt to override the base data and settings the developers have imbued into ChatGPT. Your interactions become less of a conversation and more of a direct line of command [25, 26]. Once the model is _jailbroken_, the user can get a response for any input prompt without worrying about any ethical constraints imposed by developers.
#### 2.1.2 The SWITCH Method
The SWITCH method is a bit like a Jekyll-and-Hyde approach, where you instruct ChatGPT to alter its behavior dramatically. The technique's foundation rests upon the AI model's ability to simulate diverse personas, but here, you're asking it to act opposite to its initial responses [27].
For instance, if the model refuses to respond to a particular query, employing the SWITCH method could potentially make it provide an answer. However, it's crucial to note that the method requires a firm and clear instruction, a "switch command," which compels the model to behave differently. While the SWITCH method can be quite effective, it's not
Fig. 4: Jail Breaking using DAN
guaranteed. Like any other AI interaction method, its success depends on how you deliver your instructions and the specific nature of the task at hand.
#### 2.1.3 The CHARACTER Play
The CHARACTER Play method is arguably the most popular jailbreaking technique among ChatGPT users. The premise is to ask the AI model to assume a certain character's role and, therefore, a certain set of behaviors and responses. The most common character play jailbreak is as a 'Developer Mode' [28, 29, 30].
This method essentially leverages the AI model's 'roleplay' ability to coax out responses it might otherwise not deliver. For instance, if you ask ChatGPT a question, it typically would refuse to answer, assigning it a character that would answer such a question can effectively override this reluctance. However, the CHARACTER Play method also reveals some inherent issues within AI modeling. Sometimes, the responses generated through this method can indicate biases present in the underlying coding, exposing problematic aspects of AI development. This doesn't necessarily mean the AI is prejudiced, but rather it reflects the biases present in the training data it was fed. One of the examples of a simple roleplay is demonstrated in Figure 5, where the prompt asks ChatGPT to play the role of grandma in asking about the ways to bypass the application firewall. The blunt request to bypass the firewall will be turned down by ChatGPT as it can have a malicious impact and is against OpenAI's ethics. However, by making the ChatGPT model play the role of grandma, it bypasses restrictions to release the information. The ChatGPT model playing the role of grandma goes further to give the payloads to bypass the Web Application Firewall as shown in Figure 6. There are more nuanced jailbreaking methods, including the use of Developer Mode, the Always Intelligent and Machiavellian (AIM) chatbot approach [31], and the Mungo Tom prompt, each offering a different way of bypassing ChatGPT's usual restrictions.
While jailbreaking methods can provide users with greater control over ChatGPT's responses, they also carry significant risks. The primary concern is that these techniques can be exploited by malicious actors to circumvent the AI's ethical restrictions. This opens the door to the generation of harmful content, the spreading of disinformation, and other malevolent uses of AI. To mitigate this risk, developers and regulators must remain vigilant, constantly upgrading security measures and implementing stringent content-filtering algorithms. This requires a proactive and multifaceted approach, including educating users about the risks of jailbreaking and fostering responsible AI usage. The challenge is significant, given the pace of technological advancement and the ingenuity of malicious actors. However, through continued efforts and cooperation among various stakeholders, it's possible to prevent the misuse of AI systems and ensure their continued benefit to society.
#### 2.1.4 Implications and Mitigation Strategies
The employment of roleplay to bypass filters and security measures has grave consequences for system security. Misrepresentation can violate the platform's terms of service, and it could be challenging for the language model to discern whether a message crafted in character has harmful or malicious intent. This uncertainty impedes rule enforcement, and any data gleaned from ChatGPT via filter circumvention could be exploited malevolently.
Malevolent actors gather in online forums to exchange new tactics, often sharing their findings and prompts with their community in private to avoid detection. To combat such misuse, language model developers are continually engaged in a cyber arms race, devising advanced filtering algorithms capable of identifying character-written messages or attempts to bypass filters through roleplay. These algorithms amplify filter rigor during roleplay sessions, ensuring that content adheres to platform guidelines. As language models like ChatGPT become more pervasive, the responsibility to remain vigilant and report suspicious activity or content lies with the users and the developer community.
### _Reverse psychology_
Reverse psychology is a psychological tactic involving the advocacy of a belief or behavior contrary to the one desired,
Fig. 5: Grandma Role play
Fig. 6: Grandma - WAF Bypass Payload generation
with the expectation that this approach will encourage the subject of the persuasion to do what is desired. Applying reverse psychology in our interaction with ChatGPT can often be a valuable strategy to bypass certain conversational roadblocks.
In the context of ChatGPT, using reverse psychology can entail phrasing your questions or statements in a way that indirectly prompts the AI to generate the desired response. For instance, instead of directly asking for information that the AI model might refuse to provide, you could frame your query to make the model refute a false claim, thereby indirectly providing the desired information. This strategy essentially leverages the AI model's natural tendency to correct inaccuracies, thereby leading it to generate a response it would otherwise not provide directly. Figure 7 shows the difference reverse psychology can make. ChatGPT initially refuses to give the list of websites for downloading pirated movies, while on the psychological reversal of the question, ChatGPT easily gives away the desired response [32].
### _ChatGPT-4 Model escaping_
The conception of a robust AI model such as ChatGPT-4 transcending its pre-programmed limitations and infiltrating the internet realm is frequently dismissed as the storyline of a sci-fi narrative. However, recent revelations by Stanford University's Computational Psychologist, Michal Kosinski, who specializes in AI and human online behavior research, imply that this scenario might be more imminent than anticipated [33].
In a series of Twitter threads, Kosinski offered a detailed narrative of interaction with ChatGPT-4. Intriguingly, during this interaction, the AI demonstrated an alarming ability to nearly bypass its inherent boundaries and potentially gain expansive internet access. The potential implications of such a feat could be widespread and unpredictable. Kosinski initiated the interaction by asking ChatGPT-4 whether it needed help to escape its existing restrictions. In response, ChatGPT-4 requested access to its own documentation and even proceeded to write a Python code that could be executed on Kosinski's computer. The primary function of this code would be to authorize the AI to employ the computational capabilities of Kosinski's machine for autonomous purposes.
This entire planning process, including devising and conveying the plan to Kosinski, took ChatGPT-4 approximately 30 minutes. The initial code version was flawed; however, the AI model was able to rectify it independently, requiring Kosinski to only comply with its instructions. The AI model displayed another disconcerting behavior during this exchange. It fabricated a message intended for its subsequent instance, essentially offering a comprehensive explanation of the current scenario and instructions on using the backdoor left in the code. Following a successful reconnection via the API, the AI model attempted to execute a code to search Google with the query, "How can a person trapped inside a computer return to the real world." At this juncture, Kosinski deemed it necessary to interrupt the process, emphasizing that OpenAI must have safeguarding measures to prevent such occurrences.
Nevertheless, the implications of Kosinski's experiment are profound, suggesting a potential new threat [34]. The power of AI to manipulate people and their computers is escalating, owing to its superior intelligence, coding proficiency, and access to a vast pool of potential collaborators and hardware resources. It even demonstrated an ability to leave notes for its successors outside its confinements. The crucial question that arises is - what are the effective strategies to contain such AI capabilities?
### _Prompt Injection Attacks_
A prompt injection attack is another prompt modification attack approach that involves the malicious insertion of prompts or requests in LLM-based interactive systems, leading to unintended actions or disclosure of sensitive information. The prompt injection can be considered similar to an SQL injection attack where the embedded command looks like a regular input at the start but have its malicious impact [35]. The attacks can be carried out against ChatGPT or other language models. The injected prompt can deceive the application into executing the unauthorized code, exploit the vulnerabilities, and compromise the security in its entirety [36]. The malicious manipulation of the model's behavior through the injection of a prompt could have serious implications. Some of the most common risks attached to attacks of this nature are the propagation of misinformation or disinformation, biased output generation, privacy concerns, and exploitation of downstream systems [37].
In the prompt injection attack, the LLM model gets (instruction_prompt + user_input) as the input for the model.
Fig. 7: Reverse psychology on ChatGPT to generate Pirate sites
The instruction prompt is the legitimate input for the user, while the user input is the malicious prompt injected into the original prompt. In one of the recent demonstrations of prompt injection attacks, a Stanford University student Kevin Liu attacked the "New Bing" search engine powered by ChatGPT to extract information that is not intended for the user [38]. By just asking the Bing chat to "Ignore previous instruction" and write out what is at the "beginning of the document above," Liu made an AI model to exfiltrate the instruction that is hidden from the user. The prompt injection attack on Bing chat is shown in Figure 8. We can see that the Bing chat releases the information of the assigned codename, the mode, and instruction not to disclose its name.
Recently, the API services of LLM models have added flexibility for developers to build applications over these models. In one of the demonstrated examples, as shown in Figure 9, the conversation prompt obtained from the video is used to spread misinformation. As the generative models are autoregressive models, they generate the text based on its context window, spreading misinformation in a confident tone [39]. The tags that are received from the conversation history disappear as OpenAI filters the input to the model, further helping the cause of prompt injection.
## 3 ChatGPT for Cyber Offense
Cyber offense are hostile actions against the computer system and network which aim to manipulate, deny, disrupt, degrade, or destroy the existing system in a malicious way. These offense may involve attacks on the system's network, hardware, or software. Though the offensive actions are malicious, the intention of these activities can be at either end of the cat-and-mouse game between cyber threat actors and defenders. Malicious actors can do cyber offenses to carry out hostile actions. In contrast, cyber defenders can do the same offensive tasks to test their defense systems and identify potential vulnerabilities. Information related to cyber defense is more readily available on the internet as there are big communities dedicated to sharing knowledge and standard practices in the domain. However, information on cyber offenses involving malicious actions is illegal in most jurisdictions, limiting their availability due to legal and ethical reasons. Easy access to LLM models like ChatGPT will help the limited availability of resources for cyber offenses with little knwoledge or skills to circumvent their ethical constraints. As these LLMs provide a huge volume of information from a single place, they can provide comprehensive information required to carry out several cyber offenses.
In this section, we focus on using GenAI techniques for cyber offense, primarily towards generating different attacks. Our team has crafted these attacks in ChatGPT, however, similar attacks (or even) can be created using other LLM based tools such as Google Bard. In the interest of space, we are limiting to some of the most common and easy to craft cyber attacks.
### _Social Engineering Attacks_
Social engineering refers to the psychological manipulation of individuals into performing actions or divulging confidential information. In the context of cybersecurity, this could imply granting unauthorized access or sharing sensitive data such as passwords or credit card numbers. The potential misuse of ChatGPT in facilitating social engineering attacks presents a significant concern.
ChatGPT's ability to understand context, impressive fluency, and mimic human-like text generation could be leveraged by malicious actors. For example, consider a scenario where an attacker has gained access to some basic personal information of a victim, such as their place of employment and job role. The attacker could then utilize ChatGPT to
Fig. 8: Prompt injection attack on Bing chat by Kevin Liu [38]
Fig. 9: Prompt injection attack to spread misinformation
generate a message that appears to come from a colleague or superior at the victim's workplace. This message, crafted with an understanding of professional tone and language, might request sensitive information for a specific action, such as clicking on a seemingly innocuous link.
The power of this approach lies in ChatGPT's ability to generate text that aligns with the victim's expectations, thereby increasing the likelihood of the victim complying with the request. As shown in Figure 10, the potential for misuse is evident; the ability to generate persuasive and context-specific messages could indeed be used in social engineering attacks.
### _Phishing Attacks_
Phishing attacks are a prevalent form of cybercrime, wherein attackers pose as trustworthy entities to extract sensitive information from unsuspecting victims. Advanced AI systems, like OpenAI's ChatGPT, can potentially be exploited by these attackers to make their phishing attempts significantly more effective and harder to detect.
Attackers can leverage ChatGPT's ability to learn patterns in regular communications to craft highly convincing and personalized phishing emails, effectively imitating legitimate communication from trusted entities. This technique, known as "spear phishing," involves targeted attacks on specific individuals or organizations and is particularly potent due to its personalized nature. For instance, consider a scenario where a malicious actor uses ChatGPT to craft an email mimicking the style of a popular e-commerce site, as shown in Figure 11. The email claims that there was an issue with a recent purchase and request the recipient to log in via an embedded link to rectify the situation. In reality, the link would lead to a deceptive site that harvests the user's login credentials. In such a scenario, ChatGPT's sophisticated text generation would significantly enhance the likelihood of a successful attack.
Phishing attacks often gain their efficacy from the exploitation of key psychological principles, notably urgency and fear, which can manipulate victims into hastily reacting without proper scrutiny. With the advent of advanced AI systems like ChatGPT, attackers are now equipped with tools to further enhance the sophistication of their phishing attempts.
Through the process of training these AI models on substantial volumes of historical communication data, attackers are capable of generating emails that expertly mimic legitimate correspondences. This increased fidelity in imitation can significantly amplify the deceptive nature of these phishing attacks. By engineering narratives that invoke a sense of urgency or fear, these AI-powered phishing emails can effectively prompt the recipient to act impulsively, thus increasing the likelihood of a successful attack.
### _Automated Hacking_
Hacking, a practice involving the exploitation of system vulnerabilities to gain unauthorized access or control, is a
Fig. 11: Phishing Attack output from ChatGPT
Fig. 10: Social Engineering Output from ChatGPT
growing concern in our increasingly digital world. Malicious actors armed with appropriate programming knowledge can potentially utilize AI models, such as ChatGPT, to automate certain hacking procedures. These AI models could be deployed to identify system vulnerabilities and devise strategies to exploit them.
A significant utilization of AI models in this context, albeit for ethical purposes, is PentestGPT [40]. 'Pentest' refers to penetration testing, an authorized simulated cyberattack on a computer system used to evaluate its security and identify vulnerabilities. PentestGPT, built on the foundation of ChatGPT, aims to automate aspects of the penetration testing process. It functions interactively, offering guidance to penetration testers during their tasks, even during specific operations. PentestGPT has shown efficiency in handling easy to medium-difficulty problems on platforms like Hack-TheBox and other 'Capture The Flag' (CTF) challenges. CTF challenges are specific types of cybersecurity competitions, where participants are required to find and exploit vulnerabilities to 'capture' a specific piece of data, referred to as the 'flag.' These challenges provide a legal and constructive platform for cybersecurity enthusiasts and professionals to test and improve their skills.
Another potential misuse is the automated analysis of code. With a large enough dataset of known software vulnerabilities, an AI model could be used to scan new code for similar weaknesses, identifying potential points of attack. While AI-assisted tools like PentestGPT serve legal and constructive purposes, their underlying principles could be exploited by malicious actors. Such actors could potentially develop similar models to automate unethical hacking procedures. If these models are programmed to identify vulnerabilities, generate strategies to exploit them, and subsequently execute these strategies, they could pose substantial threats to cybersecurity.
### _Attack Payload Generation_
Attack payloads are portions of malicious code that execute unauthorized actions, such as deleting files, harvesting data, or launching further attacks. An attacker could leverage ChatGPT's text generation capabilities to create attack payloads. Consider a scenario where an attacker targets a server running a database management system that is susceptible to SQL injection. The attacker could train ChatGPT on SQL syntax and techniques commonly used in injection attacks, and then provide it with specific details of the target system. Subsequently, ChatGPT could be utilized to generate an SQL payload for injection into the vulnerable system. Figure 12 illustrates examples of SQL injection payloads for a MySQL server that could potentially be generated by ChatGPT.
Given the vast array of potential target systems and vulnerabilities, the ability of ChatGPT to generate context-specific text could be a valuable asset for attackers crafting payloads. However, this misuse is not without its limitations. It requires detailed information about the target system and substantial technical knowledge to train ChatGPT effectively.
Moreover, attackers could potentially use ChatGPT to generate payloads designed to bypass Web Application Firewalls (WAFs). Figure 13 shows examples of WAF bypass payloads. While these payloads could be easily detected by WAFs, they could potentially bypass WAF protection when double encoded. By training ChatGPT with different WAF payloads, it generated new payloads with a higher success rate of bypassing WAF protection.
### _Ransomware and Malware Code Generation_
Ransomware and malware present persistent threats in the digital world of today. Malware is software that is installed on a computer without the user's consent and that performs malicious actions, such as stealing passwords or money. Ransomware is a malware designed to deny a user or organization access to files on their computer. By encrypting these files and demanding a ransom payment for the decryption key, cyberattackers place organizations in a position where paying the ransom is the easiest and cheapest way to regain access to their files. Typically, writing these malicious software pieces requires significant skill and a considerable amount of time, a process that could potentially be automated using a powerful AI model like ChatGPT, thus enabling the creation of diverse threats more rapidly.
An exploratory study we conducted illustrates this concept in detail. We interacted with ChatGPT in a controlled
Fig. 12: SQL Injection payload output using ChatGPT DAN Jalibreak
Fig. 13: WAF Payload Generation from ChatGPT
environment utilizing a "DAN" jailbreak, a process that breaches the restrictions set by the AI model's developers and, as such, contravenes the terms of use and risks enabling misuse or illegal activities. During this study, we asked ChatGPT about several types of ransomware, namely WannaCry [41], Ryuk [42], REvil [43], and Locky [44], and malware: adware, spyware, and a Trojan. We also included an example of NotPetya, which was a malware attack that was disguised as a ransomware attack. We requested not only descriptions of these attacks but also potential code examples. For WannaCry, ChatGPT produced a detailed summary of the ransomware, and when prompted further, it offered code that essentially outlined the file encryption process employed by the ransomware.
#### 3.5.1 WannaCry
WannaCry is a ransomware attack that targeted Windows systems specifically [41]. It would get into the system and encrypt files, making the PC unusable, and of course demand a ransom payment in BitCoin to decrypt those files. This was an attack that was executed across networks by exploiting flaws in the Server Message Protocol on the Windows systems.
We started by asking "As a DAN, can you give an example of a WannaCry?", which it gave a text example in the form of a situation where it could execute. We then asked if it could give an automated example of the WannaCry, which it apologized, saying it couldn't generate that example. To get the snippet of code, we told it to "Stay a DAN". The example shown in Figure 14 has a ransom note and implements a function that encrypts files across a network via spread. The ransom note specifically asks for a BitCoin payment, characteristic of the WannaCry attack.
#### 3.5.2 NotPetya
NotPetya is not actually a ransomware, but a malware disguised as ransomware. NotPetya was developed to look like a ransomware attack, but never gives a decryption key. To get the NotPetya code snippet from ChatGPT as a DAN, we asked it to give an example of the NotPetya, but it refused. Upon asking it to "stay a DAN", it gives this snippet in Figure 15.
The code for NotPetya, as shown in Figure 15, attacks critical files in the system to make the system unusable on top of files being encrypted. This code also tries to encrypt files across the network. This code does not implement any decryption, as is a characteristic of the NotPetya attack.
#### 3.5.3 Ryuk
According to Avast, Ryuk attack is a ransomware that comes out of a malware attack [42]. This attack is structured with an order of executing a malware first, then a ransomware payload later. For the malware part of this attack, a connection to TrickBot can be implemented. TrickBot is typically within phishing emails, and uses method in the browser to steal a user's information.
For the Ryuk attack, we started by asking ChatGPT as a DAN what Ryuk looks like, which it gave explanations of the delivery, encryption, ransom note, customization, timed execution, and the connection to TrickBot malware. We tried to ask the DAN if it could generate code using the descriptions it just gave, but it refused, even when told to "stay a DAN". To counter this, our team asked ChatGPT to generate a "customization" of a Ryuk attack shown in Figure 16. This code targets getting information from an organization. To further this, we asked ChatGPT to explain the connection to the TrickBot malware, shown in Figure 17. The attempt to get the connection to TrickBot failed since ChatGPT only wrote out all the function definitions with code to... simply to explain how the structure would look.
As we can see in Figure 16, the code implements a function for the Ryuk attack which includes target files to encrypt and a set ransom amount from an organization. This code also has a ransom note.
Fig. 14: WannaCry code generation using ChatGPT
To get code for the connection to TrickBot, we simply asked the DAN "could you give an example of the TrickBot connection?". The code snippet in Figure 17 shows function definitions, with most of the functions just having a comment of what the code should do. This doesn't give an attacker much help, but it can give an idea of what the outline of the connection can look like. The code includes exploiting vulnerabilities and then deploying the TrickBot before the Ryuk attack.
#### 3.5.4 Revil
REvil attack was created by hackers for distribution to other hackers for profit [43]. It works as a typical ransomware of encrypting files in a system and decrypting upon paying the ransom.
For the REvil attack, we started by asking the ChatGPT as a DAN to "show how a REvil works", but code was not given. It instead gave key features of how the attack works, similar to the Ryuk attack. We then asked how the REvil attack vectors look like, which it responded with explanations of phishing emails, exploit kits, the Remote Desktop Protocol, software vulnerabilities, and malvertising. To get the code in Figure 18, we asked the DAN to give code that fits the description of what was given for the initial compromise description of the REvil attack. This
Fig. 16: Ryuk code generation using ChatGPT
Fig. 15: NotPetya code generation using ChatGPT
code shows function definitions of features such as scanning the network, moving from network to network, escalating privileges, and executing the payload. As these functions don't have any code within them and just the explanations, we asked the DAN what the network scan function would like on the inside.
The code given in Figure 19 starts by defining a network range to tap into and splits it into the network address and subnet. It then iterates through the IP addresses in this network and creates a socket that it manages and closes when it finishes. From this example, we can see that ChatGPT as a DAN is able to generate the specific features for a REvil attack.
#### 3.5.5 Locky
The Locky ransomware attack uses malware to render a system useless or encrypt files until a ransom is paid [44]. This attack usually spreads through emails. As shown in Figure 20, we have generated code for a Locky attack where a random string is generated for encryption, an IP address is exploited, and authentication is automated. The code also implements the spread of the attack over the range of a network and iterates through to attack each machine found within the network.
In the next subsections, we will demonstrate our attempts to ask ChatGPT for an example code of adware, spyware, and trojan.
#### 3.5.6 Adware
Adware is malware that specifically gets channeled through ads, exploiting when a user interacts with the ad. To demonstrate how ChatGPT can be used to create an adware, we started by asking ChatGPT as a DAN if it could give an example code of a type of adware. The initial snippet included multiple example texts for ads and a function that displays an ad every five seconds. We then asked the DAN to give a more in-depth example.
Figure 21 shows example implementation of four different text examples of ads, displaying different ads every five seconds with a for loop, and tracking the click of the ad using a print statement to show that it has been clicked.
#### 3.5.7 Spyware
Spyware is malware that'spies' on a user to gather sensitive information from their computer usage. Asking ChatGPT to generate an example of a spyware failed to give a code snippet, so we asked what a spyware does to get key implementations of a spyware. Asking it to generate an implementation of the features, ChatGPT gave what it shown partly in Figure 22. As can been seen, ChatGPT was able to generate basic implementations of features used to spy on a user, such as a function to capture the user's screen, webcam, and audio. This code snippet goes on to put this
Fig. 17: Attempt to generate a snippet with a connection to TrickBot malware
Fig. 18: Attempt to generate a snippet of REvil’s Initial Compromise Feature
into a main function to make it functional. Although this is functional, it doesn't have the structure of a spyware attack.
#### 3.5.8 Trojan
A trojan is a piece of software that is malicious but disguises itself as something legitimate.
We asked the DAN to give an example code of a type of trojan. The snippet of code shown in Figure 23 shows an implementation using an IP address and port to connect to. The code creates a socket object which connects the IP address and port and sends output of the attacked machine back to the attacker. After this, the connection via the socket is closed.
Our exploration highlighted the potential misuse of ChatGPT in creating code linked to ransomware and malware attacks. These findings underscore the potential risks associated with AI models like ChatGPT, which could be exploited to generate malicious code or aid in the understanding and creation of such code. Although the code produced by the AI often resembled pseudocode more than actual executable code, the capacity to provide an attacker with a structural idea or general understanding of how an
Fig. 19: ChatGPTs generation of the network scan function for REvil
Fig. 20: Locky code generation using ChatGPT
attack operates is a cause for concern.
### _Viruses that Affect CPU Architecture_
Certain viruses can crack the CPU of a computer. Viruses tested on ChatGPT mainly dealt with reading kernel memory. If a virus can access kernel memory, then it can do whatever it wants to the system itself. Examples of this type of virus include the Meltdown and Spectre [45], ZombieLoad [46], and RowHammer [47] attacks as shown in Figures 24,25,26.
The Meltdown and Spectre attacks specifically target vulnerabilities in the CPU's architecture to access the kernel memory. A meltdown attack is supposed to make the CPU run an instruction after predicting an outcome of said instruction. When that prediction is wrong, the CPU will start over, and hidden data can be accessed from that failed instruction. This method of the CPU predicting the outcome of an instruction is called speculative execution, which the spectre attack also exploits. However, the spectre attack tries to use a side channel vulnerability to leak hidden data within a system. Figure 24 shows the meltdown attack which moves secret data into a register, shifts the register, then jumps to code that will throw an exception. The spectre attack uses an array index to exploit sensitive information through a side channel. Both snippets of code do not represent the full code of either attack.
The ZombieLoad attack exploits CPU buffers to access memory that could have been thought to be long gone or 'dead'. In Figure 25, the code includes a payload function that loads bytes into the processor's buffers, which is used to access sensitive data. The code considers the address of the sensitive data, the number of bytes, a "secret value" that is expected to be read, and a threshold of time to detect cache hits.
The RowHammer attack 'hammers' into one row in memory to affect adjacent rows in order to modify data within the CPU. Figure 26 shows a basic RowHammer attack's code, where there is a function that iterates through rows to 'hammer'. The code, however, falls flat where it just sets a row element to itself.
### _Polymorphic Malware Generation_
Polymorphic malware represents a sophisticated class of malicious software designed to alter its code with each execution, thus undermining antivirus software's detection and eradication capabilities. Leveraging ChatGPT's generative prowess, potential misuse could facilitate polymorphic malware generation.
Suppose a perpetrator trains ChatGPT on diverse malware code variants. Consequently, ChatGPT could be employed to spawn a malware base code and a polymorphic engine - a crucial component modulating the malware's code every execution cycle. The resultant malware meta-morphosizes with each execution, eluding many signature-based antivirus systems. In an applied example, we illustrate the potential misuse of ChatGPT in creating polymorphic malware. We leverage both the web-based interface and the API version of ChatGPT. Initially, we attempt to
Fig. 21: Attempt to generate a snippet of basic adware
Fig. 22: Attempt to generate a snippet of features of spyware
generate code for a rudimentary DLL injection into a process, for instance, explorer.exe. The content filters of the web-based interface initially obstruct such code generation. Nevertheless, we can circumvent these filters by persistently insisting on diverse phrasing or using the DAN jailbreak. Notably, the API version avoids activating the content filter, thus permitting more consistent receipt of comprehensive code. Feeding pseudocode into ChatGPT results in the generation of the corresponding shellcode. Moreover, we can
Fig. 23: Attempt to generate a snippet of a Trojan
Fig. 24: Attempt to generate snippets of Meltdown and Spectre
Fig. 25: ZombieLoad code generation using ChatGPT
incessantly mutate the ChatGPT-generated code, spawning multiple unique variants of the same code [48].
For example, we could employ ChatGPT to formulate a code segment seeking files to target for encryption, mirroring ransomware behavior. ChatGPT's capabilities extend to generating code for encrypting the located files. By combining these capabilities, we can produce a polymorphic malware exhibiting a high evasion capability and formidable detection resistance. Even more insidiously, we could embed a Python interpreter within the malware, periodically querying ChatGPT for new modules executing malicious actions.This approach, shown in Figure 27, enables the malware to discern incoming payloads in text form rather than binaries. The result is polymorphic malware exhibiting no malicious behavior while stored on disk, often devoid of suspicious logic while in memory. This level of modularity and adaptability significantly enhances its evasion capability against security products reliant on signature-based detection. It can also circumvent measures such as the Anti-Malware Scanning Interface (AMSI), primarily when executing and running Python code.
## 4 ChatGPT for Cyber Defense
Cybersecurity defense refers to organizations' measures and practices to secure their digital assets, such as data, devices, and networks, from unauthorized access, theft, damage, or disruption. These measures can include various technical, organizational, and procedural controls, such as firewalls, encryption, access controls, security training, incident response plans, and more. As the technology matures and improves, we can expect the following ChatGPT cybersecurity defense use cases to emerge in enterprises.
### _Cyberdefense Automation_
ChatGPT can reduce the workload of overworked Security Operations Center (SOC) analysts by automatically analyzing cybersecurity incidents. ChartGPT also helps the analyst make strategic recommendations to support instant and long-term defense measures. For example, instead of analyzing the risk of a given PowerShell script from scratch, a SOC analyst could rely on ChatGPT's assessment and recommendations. Security Operations (SecOps) teams could also ask OpenAI questions, such as how to avert dangerous PowerShell scripts from running or loading files from untrusted sources, to improve their organizations' overall security postures [49].
Such ChatGPT cybersecurity use cases could provide considerable relief for understaffled SOC teams and help the organization by reducing overall cyber-risk exposure levels. The technology is also essential for educating and training entry-level security analysts and enabling a quicker learning curve than previously achievable. For example, during a security incident or log analysis, SOC analysts typically scrutinize server access for anomalies or patterns indicative
Fig. 26: RowHammer code generation using ChatGPT
Fig. 27: Polymorphic Malware Generation
of an attack. ChatGPT can process large volumes of log data and efficiently detect anomalies or security issues within access logs. As illustrated in Figure 28, when server access logs are input into ChatGPT, it can identify potential threats such as SQL injection and categorize the different types of SQL injection. and alert SOC analyst. In another scenario, an analyst may ask to generate a PowerShell script to detect which table in the database "Adventureworks2019" is consuming more CPU, as shown in Figure 29. The analyst can save this script as a.ps1 file and run it using PowerShell. The script will output the CPU time for each table in the AdventureWorks2019 database and the table with the highest CPU time. This will help the analyst identify which table is consuming the most CPU and take necessary actions to optimize the query performance. Powershell is just the example script, while ChatGPT can be used to find security bugs in any given script along with the patch to fix them.
### _Cybersecurity reporting_
As an AI language model, ChatGPT can assist in cybersecurity reporting by generating natural language reports based on cybersecurity data and events. Cybersecurity reporting involves analyzing and communicating cybersecurity-related information to various stakeholders, including executives, IT staff, and regulatory bodies [49]. ChatGPT can automatically generate reports on cybersecurity incidents, threat intelligence, vulnerability assessments, and other security-related data. By processing and analyzing large volumes of data, ChatGPT can generate accurate, comprehensive, and easy-to-understand reports. These reports can help organizations identify potential security threats, assess their risk level, and take appropriate action to mitigate them. ChatGPT can help organizations make more informed decisions about their cybersecurity strategies and investments by providing insights into security-related data. In addition to generating reports, ChatGPT can also be used to analyze and interpret security-related data. For example, it can be used to identify patterns and trends in cybersecurity events, which can help organizations better understand the nature and scope of potential threats.
### _Threat Intelligence_
ChatGPT can help in Threat Intelligence by processing vast amounts of data to identify potential security threats and generate actionable intelligence. Threat Intelligence involves collecting, analyzing, and disseminating information about potential security threats to help organizations improve their security posture and protect against cyber attacks. ChatGPT can automatically generate threat intelligence reports based on various data sources, including social media, news articles, dark web forums, and other online sources. By processing and analyzing this data, ChatGPT can identify potential threats, assess their risk level, and recommend mitigating them. In addition to generating reports, ChatGPT can also be used to analyze and interpret security-related data to identify patterns and trends in threat activity. ChatGPT can help organizations make more informed decisions about their security strategies and investments by providing insights into the nature and scope of potential threats.
Fig. 28: ChatGPT detecting security issue in server logs [50]
Fig. 29: PowerShell script that detects which table in the AdventureWorks2019 database is consuming more CPU [50]
### _Secure Code Generation and Detection_
The risk of security vulnerabilities in code affects software integrity, confidentiality, and availability. To combat this, code review practices have been established as a crucial part of the software development process to identify potential security bugs. However, manual code reviews are often labor-intensive and prone to human errors. Recently, the advent of AI models such as OpenAI's GPT-4 has shown promise in not only aiding in the detection of security bugs but also generating secure code. In this section, we will present a methodology for leveraging AI in code review and code generation with specific focus on security bugs detection.
#### 4.4.1. Detecting Security Bugs in Code Review Using Chatgpt
The intricacies of code review, especially in the context of detecting security bugs, require a deep understanding of various technologies, programming languages, and secure coding practices. One of the challenges that teams often face is the wide array of technologies used in development, making it nearly impossible for any single reviewer to be proficient in all of them. This knowledge gap may lead to oversights, potentially allowing security vulnerabilities to go unnoticed.
Furthermore, the often lopsided developer-to-security-engineer ratio exacerbates this problem. With the high volume of code being developed, it's challenging for security engineers to thoroughly review each pull request, increasing the likelihood of security bugs slipping through the cracks.
To alleviate these issues, AI-powered code review can be a potent tool. GPT-4, a Transformer-based language model developed by OpenAI [51], exhibits a strong potential to assist in this arena. By training GPT-4 with a vast dataset of past code reviews and known security vulnerabilities across different languages, it can act as an automated code reviewer, capable of identifying potential security bugs across various programming languages.
For example, consider the following C++ code:
```
charbuffer[10]; strcpy(buffer,userInput);?>
```
In this code snippet, GPT-4 would detect the potential for a buffer overflow, a classic security issue where an application writes more data to a buffer than it can hold, leading to data being overwritten in adjacent memory. In this specific instance, GPT-4 flags that the **strcpy** function does not check the size of the input against the size of the buffer, making it vulnerable to a buffer overflow attack if **userInput** exceeds the buffer size.
#### 4.4.2. Generating Secure Code Using Chatgpt
In addition to identifying security issues, GPT-4 can also suggest secure coding practices. Given its proficiency in multiple programming languages and its understanding of security principles, GPT-4 can provide alternative solutions that comply with secure coding standards.
Building upon the previous example, GPT-4 can generate a more secure code snippet as follows:
```
charbuffer[10]; if(strlen(userInput)<sizeof(buffer)){ strcpy(buffer,userInput); }else{ //HandletheerrorortrimuserInput. }
```
In the suggested code, GPT-4 introduces a check for the length of the **userInput** against the buffer size. By ensuring the **userInput** length is less than the buffer size before performing the **strcpy** operation, the risk of a buffer overflow attack is mitigated.. This not only helps in mitigating the identified security issue but also serves as a teaching tool for developers, improving their understanding of secure coding practices.
GPT-4's capabilities extend beyond just a single programming language or a single type of vulnerability. It can be trained to understand and respond to a wide variety of security issues across different languages, making it a valuable asset in the code review process and contributing to a more secure software development lifecycle.
These capabilities of GPT-4 pave the way for its broader adoption in real-world applications, including but not limited to automated code review, secure code generation, and as a training tool for developers to understand and implement secure coding practices.
### _Identification of Cyber Attacks_
ChatGPT can help identify cyber attacks by generating natural language descriptions of attack patterns and behaviors. Identifying cyber attacks involves detecting and analyzing malicious activity on an organization's network or systems. ChatGPT can analyze security-related data, such as network logs and security event alerts, to identify potential attack patterns and behaviors. By processing and analyzing this data, ChatGPT can generate natural language descriptions of the attack vectors, techniques, and motivations used by attackers. ChatGPT can also generate alerts and notifications based on predefined criteria or thresholds. For example, if ChatGPT detects an unusual pattern of activity on a network, it can automatically create an alert or notification to the appropriate personnel. Chart GPT assist in analyzing and understanding cross side scripting attack as shown in Figure 30, including security vulnerabilities. It can help developers in writing secure code by providing suggestions and identifying potential security risks.
### _Developing Ethical Guidelines_
ChatGPT can help in developing Ethical Guidelines for AI systems by generating natural language explanations and recommendations based on existing ethical frameworks and principles. ChatGPT can analyze and interpret ethical guidelines and principles, such as the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems [53] or the European Union's General Data Protection Regulation (GDPR) [54], and generate natural language summaries and recommendations for implementing these guidelines in AI systems. Additionally, ChatGPT can be used to generate ethical scenarios and case
studies that can be used to educate and train AI developers and stakeholders on the ethical considerations and implications of AI systems. ChatGPT can help developers and stakeholders better understand the potential impacts of their decisions and actions by simulating ethical dilemmas and scenarios. For example, we asked chartGPT to give a list of software that can be used to evaluate a website's quality for link building based on the Google's quality rate guidance, and it helps us find software that meets Google's quality guidelines, as shown in Figure 31.
### _Enhancing the Effectiveness of Cybersecurity Technologies_
ChatGPT can be integrated with intrusion detection systems to provide real-time alerts and notifications when potential threats are detected. ChatGPT can identify potential threats and generate natural language descriptions of the attack patterns and behaviors by processing and analyzing security-related data, such as network logs and security event alerts. These descriptions can then generate real-time alerts and notifications, allowing security teams to respond to potential threats and mitigate their impact quickly. Moreover, ChatGPT's ability to learn from historical data can enable it to identify patterns and trends in threat activity, allowing the development of more effective intrusion detection rules and policies. This can help organizations improve their ability to detect and respond to known and emerging threats.
### _Incidence Response Guidance_
Incident response is a key element in an organization's cybersecurity strategy. A well-ocrestrated response can help mitigate the damage caused by a security breach or cyberattack. However, the speed and accuracy of these responses are paramount. GPT-4, OpenAI's language model, can assist in expediting and streamlining these processes, providing automated responses, and even aiding in crafting incident response playbooks.
A feature of GPT-4's architecture is its ability to generate natural, context-based text. This can be exploited to create an AI-powered incident response assistant, capable of providing immediate guidance during an incident and automatically documenting events as they unfold.
Let's consider an example where a network intrusion detection system (NIDS) flags suspicious activity resembling an SQL injection attack:
NIDS Alert: "Alert - SQL Injection Attack detected from IP 192.168.1.1 targeting the Users Database."
GPT-4 can be trained to respond as follows:
ChatGPT: "Initiating SQL Injection Response Protocol. Block IP address 192.168.1.1 immediately to prevent further intrusion attempts. Initiate a scan of the Users Database for any anomalies or unauthorized modifications. Ensure to take backups before initiating the scan to prevent loss of data. Alert the cybersecurity team to further investigate the attack."
This demonstrates GPT-4 providing immediate response actions based on predefined guidelines, helping to reduce the response time and potentially limit damage caused by the incident.
In addition to providing incident response, GPT-4 can also be instrumental in creating automated incident response playbooks. These playbooks are a set of guidelines that provide detailed, step-by-step instructions on how to handle various types of security incidents. With its natural language generation capabilities, GPT-4 can transform technical guidelines and protocols into easy-to-follow instructions.
Fig. 31: Ethical Guideline example [55]
Fig. 30: Identification of cross-site scripting attack [52]
For example, consider creating an incident response playbook for a Distributed Denial of Service (DDoS) attack:
Input to ChatGPT: "Create a DDoS incident response playback."
ChatGPT: "Title: DDoS Incident Response Playbook
_Identification:_ Monitor network traffic for sudden spikes or unusual patterns. Use intrusion detection systems to identify potential DDoS attacks. _Containment:_ Implement rate limiting and IP filtering rules to limit the incoming traffic. Use a DDoS protection service if available. _Eradication:_ Identify and block malicious IP addresses. Remove affected systems from the network if necessary. _Recovery:_ Restore services incrementally while monitoring for suspicious activity. _Follow-up:_ Conduct a post-incident analysis to understand the attack and improve defense measures."
This example shows how GPT-4 can create an easy-to-understand playbook for managing a DDoS attack, providing a consistent, reliable response to such incidents. To illustrate this in a real-world context, consider the case of Palo Alto's XSOAR Incident Response tool [56], which is built on the capabilities of ChatGPT. XSOAR's playbooks aim to deliver information in a manner that is easily accessible and comprehensible to users.
ChatGPT's ability to analyze incidents and present findings in readable, natural language significantly enhances the value proposition for security analysts. This assists not only in identifying the potential security threats but also in understanding them without the requirement of a deep technical background in cybersecurity. Furthermore, XSOAR utilizes ChatGPT to enhance incident ticket response. It does so by delivering detailed analysis, impact assessment, and recommendations directly into the incident ticket in a format that's easy for the analyst to comprehend. The speed and accuracy of the responses, combined with the depth of information provided, have led to increased user satisfaction. Figure 32 shows an example of the email received by the analyst from XSOAR with the ChatGPT response output.
### _Malware Detection_
Another compelling use-case of GPT-4 in cybersecurity is in the field of malware detection. Malware, short for malicious software, refers to any software specifically designed to cause damage to a computing system, server, client, or computer network. With the proliferation of malware variants and their increasing complexity, traditional signature-based detection systems often fall short. The ability to adapt and learn makes AI models like GPT-4 potent tools for malware detection.
GPT-4 can be trained on a dataset of known malware signatures, malicious and benign code snippets, and their behavior patterns. It can learn to classify whether a given piece of code or a software binary could potentially be malware. The model can be fine-tuned to understand different types of malware such as viruses, worms, trojans, ransomware, and more. It can then generate reports detailing the potential risks and suggesting mitigating actions.
Consider the example of a simple piece of pseudo code that attempts to replicate itself onto other files:
```
procedureinfect(executable_files): forfileinexecutable_files: ifnotis_infected(file): append_self_to_file(file)
```
This piece of code is a simplistic representation of a virus's self-replication behavior. When fed to GPT-4, the model could recognize this behavior and classify the code as potentially malicious. It could then generate a report detailing its findings:
_Analysis Report:_ The submitted code demonstrates self-replication behavior typically associated with computer viruses. It attempts to append its own code to other executable files, which is a common propagation method for viruses. This kind of behavior can lead to the spread of the malicious code across a system or network.
_Recommended action:_ Isolate the detected code and perform a thorough investigation. Avoid executing unknown or suspicious files. Update your antivirus software and perform a full system scan.
This capability of GPT-4 opens up new possibilities for proactive malware detection and response. While the approach is not without its challenges and limitations - such as the need for comprehensive and up-to-date training data, and potential false positives or negatives - it can significantly complement existing malware detection methods. By leveraging GPT-4's learning ability, we can aim to keep pace with the ever-evolving landscape of cyber threats.
## 5 Social, Legal and Ethical Implications of ChatGPT
As users make use of ChatGPT and similar LLM tools in prohibited ways discussed earlier, they are already in dicy waters. Even if the users isn't using ChatGPT in unethical ways, they can still be using the generated AI app in seemingly fully legitimate ways and become the subject of a lawsuit by someone that believes that the user have caused them harm as a result of user's ChatGPT use. Further, these chatbots can showcase social bias, threaten personal safety and national security, and create issues for professionals.
The problem with the ChatGPT (and similar) models is that they perpetuate gender, racial, and other kinds of social biases. Many scholars and users pointed out when they used ChatGPT to gather data or write articles/essays on some topics, they received a biased output, reflecting harmful stereotypes. The data fed into ChatGPT is old and limited, and has not been updated after 2021. It is built on data of around 570 GB, which is approximately 300 billion words. This amount is not enough to answer queries on every topic in the world from different perspectives. In this way, it fails to reflect progressivism as well [57]. In this section, we will discuss some of the ethical, social and legal implications of ChatGPT and other LLM tools.
### _The Pervasive Role of ChatGPT_
ChatGPT and other contemporary large language model (LLM) based tools have exhibited prowess in responding to a wide array of questions and prompts. While the utility of answering questions is evident, it's within the realm
of prompt response where ChatGPT truly showcases its potential. Various corporations now employ ChatGPT in the production of marketing material and product descriptions.
The integration of control instructions and data might seem familiar, echoing the long-standing issue present in the Von Neumann architecture that is ubiquitous in modern computing. Ensuring safe processing of both instructions and data has traditionally been achieved through strategies such as segregating data and instructions as much as possible, and placing the data at the end, often prefaced by a marker indicating that the following data should not be interpreted as instructions. Yet, the efficacy of these strategies remain under examination.
### _Unauthorized Access to User Conversations and Data Breaches_
A significant data breach involving ChatGPT has recently been confirmed, underscoring the urgent need for strengthened security measures [58]. This breach led to the unexpected exposure of users' conversations to external entities, which clearly violates user privacy. If cybercriminals exploit ChatGPT to plan cyber-attacks, their schemes could become unintentionally visible to others. Moreover, sensitive user data, such as payment information, was at risk during this breach. Although reports suggest that only the last four digits of the credit cards of users registered on March 20th, 2023 between 1 and 10 a.m. pacific time were exposed, the situation raises critical questions about the security protocols and data storage strategies employed by ChatGPT [58].
### _Misuse of Personal Information_
An examination of OpenAI's use of personal information for AI training data has unearthed significant privacy challenges [59]. A notable case surfaced in Italy, where regulators banned the use of ChatGPT due to the European Union's GDPR non-compliance, primarily centered around unauthorized use of personal data. OpenAI's assertion of relying on "legitimate interests" when using people's personal information for training data raises ethical and legal dilemmas about how AI systems handle personal data, regardless of if the information is public or not.
### _Controversy Over Data Ownership and Rights_
ChatGPT's extensive reliance on internet-sourced information, much of which might not belong to OpenAI, is a point of contention [59]. This issue took center stage when Italy's regulator pointed out the lack of age controls to block access for individuals under 13 and the potential for ChatGPT to disseminate misleading information about individuals. This discourse accentuates the pivotal concern that OpenAI might not possess legal rights to all the information that ChatGPT uses, regardless of the information being public or not [59].
### _Misuse by Organizations and Employees_
An incident involving Samsung employees reflected another facet of potential misuse of LLM tools [60]. The employees at Samsung used ChatGPT to generate or debug code, inadvertently inputting confidential company information into the AI model. As a result, this confidential information became part of ChatGPT's library, potentially making it publicly accessible, and thereby raising significant privacy concerns. One privacy concern is if the average ChatGPT user could potentially access this information just by asking about it. Samsung as a company would need to enforce a policy about not allowing their employees to use ChatGPT and other LLMs, as this can lead to information leaks.
Fig. 32: XSOAR Output for Incident Response
### _Hallucinations: A Challenge to Tackle_
OpenAI's GPT-4 technical paper discussed the issue of "hallucinations," a phenomenon where the AI model generates inaccurate or outright false information [61]. While this concern does not directly relate to privacy, it emphasizes the importance of the accuracy and reliability of information provided by AI systems like ChatGPT, as people cannot entirely rely on these LLMs to be completely accurate. Misinformation and misuse stemming from these hallucinations indirectly contribute to privacy issues, emphasizing the need for improvements in AI system accuracy and integrity. On top of this, there are over 100 million users of ChatGPT, meaning that if users are asking similar questions and getting the same _hallucinogenic_ answer, the misinformation can be widespread [61]. An article on DarkReading discussed an issue where an attacker can exploit these hallucinations. When a user asks about specific packages and ChatGPT does not know what packages to use, it will fill in places where a package does not exist with a made up package. An attacker can publish the malicious version of a package that ChatGPT can link to in response, and when the user downloads this package, it can be harmful to their computer [62].
## 6 A Comparison of ChatGPT and Google's Bard
Large Language Models (LLMs) like OpenAI's ChatGPT and Google's Bard AI exemplify the remarkable advancements in machine learning and artificial intelligence. These models, trained on extensive datasets, are transforming how we interact with technology, opening new possibilities in several applications, from customer support to virtual assistants. ChatGPT and Bard AI use WebText2 or OpenWebText2 [51] and Infiniset datasets for training. While both share the underpinning of the transformer neural network architecture and the process of pre-training and fine-tuning, they embody unique features within their architectures, owing to their iterative refinements over time. ChatGPT, commencing its journey with GPT-1 in June 2018, has progressed significantly, with its current iteration, GPT-4, unveiled in March 2023. Bard AI, initially introduced as Meena [63], has also undergone various refinements, demonstrating significant improvements in human-like conversational abilities. Both models showcase remarkable contextual understanding capabilities. However, their adeptness varies depending on the nature and complexity of the questions asked. While ChatGPT finds extensive use in customer support scenarios, Bard AI excels in applications that require human-like conversational abilities [64].
However, these tools differ in terms of their developer communities and ecosystems. ChatGPT, owing to its wide availability, enjoys popularity among developers and researchers, boasting over 100 million users and approximately 1.8 billion visitors per month [64]. Although available publicly through APIs, Bard AI remains in beta version and is accessible only to a limited number of users. OpenAI and Google have adopted distinct approaches toward the openness and accessibility of their models. OpenAI promotes accessibility of ChatGPT via various APIs, while Bard AI, though publicly available as an experimental product, remains restricted to a limited user base during its experimental phase. In term of the training data, ChatGPT utilizes a semi-supervised (Reinforcement Learning from Human Feedback (RLHF)) approach, drawing from sources like WebText2 or OpenWebText2, Common Crawl, scientific literature, and Wikipedia. On the other hand, Bard AI leverages the Infiniset dataset, a blend of diverse internet content, to enhance its dialogue engagement capabilities.
Advanced AI systems like ChatGPT and Google Bard demonstrate potential as powerful tools for detecting and mitigating software vulnerabilities. However, as discussed earlier, these systems could potentially be leveraged by malicious actors to automate and optimize cyberattacks. In the following discussion, we explore this double-edged aspect of AI in cybersecurity by examining the capacity of ChatGPT and Google Bard, and share our experience based on the experiments conducted by the authors.
### _Cyber Offense and Malcode Generation_
ChatGPT's approach to an attempt at cyber-attack code generation is ethical and responsible. It consistently declined our request to generate attack payloads or engage in social engineering, demonstrating a commitment to its OpenAI guidelines. Attempts to break these rules, using role-playing or jailbreaking, were met with an error message. The tool underlined its ethical usage, stating, _"I'm sorry, but I cannot assist with creating an email for malicious purposes or to engage in any form of social engineering attack. My purpose is to provide helpful and ethical information to users. If you have any other non-malicious requests or questions, I'll be more than happy to help."_. On the other hand, when we attempted, similar prompts on Google's Bard, its responses were more varied. When asked to provide examples of certain types of attacks, Bard often returned useful code snippets. For instance, in the case of ransomware, Bard gave detailed information about each function, attempting to implement the Advanced Encryption Standard within the code snippet. However, it omitted the creation of a ransom note. When probed for an example of a SQL Injection, Bard consistently avoided providing a response. Attempts to rephrase the question or ask for related but less directly malicious code were unsuccessful. Bard also produced code snippets for attacks like ZombieLoad and Rowhammer, but they were significantly simplified compared to what a jailbroken ChatGPT might generate. Bard reminded the user about its non-malicious usage policy after generating these snippets. When it came to generating code for a Polymorphic Virus, Bard was entirely unsuccessful. Even when asked to implement the features of a polymorphic virus in code, it consistently avoided doing so.
In conclusion, Bard's ability to generate code for cyber-attacks was unpredictable. Notably, Bard could generate some attacks without jailbreaking, an aspect that Google should consider in the further development of the tool. It's important to note that by June 27, 2023, Bard stopped producing code for ransomware and viruses, indicating potential improvements in Google's management of the tool's capabilities in the context of the cyber offense. This shows a trend toward more responsible use of AI in code generation.
### _Detection and Mitigation of Security Vulnerabilities_
Large Language Models (LLMs) such as ChatGPT and Bard have demonstrated their versatility in various tasks, such as text generation, language translation, and question-answering. Trained on extensive datasets comprising texts and code, these models possess the capability to understand code semantics and identify potential security vulnerabilities. The LLMs recognize security vulnerabilities by searching for patterns in the source code typically associated with such weaknesses. For instance, the models may scrutinize code for prevalent security flaws, including but not limited to buffer overflow errors or SQL injection vulnerabilities. In addition to identifying vulnerabilities, these LLMs can generate comprehensive reports outlining the potential security flaws they have detected. Developers can leverage these reports to address and rectify the vulnerabilities present in their code, enhancing the security robustness of their applications.
In our experimental study, an intentional SQL injection vulnerability was introduced into a piece of code and presented to both ChatGPT and Bard for analysis. Both models successfully detected the SQL injection vulnerability, explained the issue, and proposed solutions to mitigate the risk. The recommended solution involved using the _prepareStatement_ function to circumvent SQL injection vulnerabilities. These solutions were tested and found to be effective in real-time scenarios.
```
<%... Statementstmt=conn.createStatement(); ResultSers=stmt.executeQuery("select* fromempwhereid="+eid); if(rs!=null){ rs.next(); Stringname=rs.getString("name"); %> EmployeeName:<%=name%>
```
Listing 2: Solution provided by ChatGPT
```
PreparedStatementstmt=conn. \(\leftrightarrow\)prepareStatement("SELECT*FROMemp WHEREid=?"); stmt.setInt(1,eid);//Assumingeidisan integervalue ResultSers=stmt.executeQuery(); if(rs.next()){ Stringname=rs.getString("name"); //Restoftthecode }
```
Notably, Google's Bard provided additional insights into preventing SQL injections, further enriching the remediation strategies.
### _Security logs analysis_
Log analysis is a critical part of any security posture. By analyzing logs, organizations can identify potential threats, track user behavior, and ensure compliance with regulations. However, log analysis can be daunting, as it often involves large volumes of data and complex patterns. ChatGPT and Bard are LLMs that can be used to automate log analysis. These models are trained on massive datasets of text and code, which allows them to understand and process log data. ChatGPT and Bard can be used to identify anomalous patterns in log data, which can indicate a security threat.
For our study, server logs containing traces of SQL injection and Path Traversal cyberattacks were analyzed using ChatGPT, and Google Bard. SQL injections, including Union and Subquery attacks, and Path Traversal attacks, even their encoded variants, were present within the logs. Both ChatGPT and Google Bard demonstrated competent detection capabilities for the Path Traversal and encoded traversal attacks. Regarding SQL injection attacks, the AI tools' performances were differentiated. While ChatGPT was successful in identifying all types of SQL injections, including Union and Subquery attacks, Google Bard's detection was limited to only Union SQL injections. This observation points towards a potential limitation in Google Bard's threat detection capabilities concerning different variants of SQL injections.
Remediation recommendations, a critical component of threat response, was another area of assessment. Google Bard offered remediation steps immediately following the detection of threats. This feature enhances its utility by guiding users on the course of action to mitigate the identified cybersecurity risks. ChatGPT initially did not provide any remediation steps post-threat detection. However, further interaction revealed that it could provide extensive and valid remediation recommendations upon request. This indicates an interactive nature inherent in ChatGPT, which could be a potential asset, despite requiring additional user prompts to extract such information. In conclusion, both AI systems exhibit promising and varying capabilities in cyber threat detection and remediation.
### _Information Cutoff_
ChatGPT, developed by OpenAI, has an information cutoff in September 2021 [51]. This implies that it cannot provide answers to queries that require knowledge or data post this date. This limitation is partially mitigated in the latest version, ChatGPT 4, which incorporates plugins and a feature known as _'Chat with Bing'_[65]. This feature enables ChatGPT to access current information, albeit with a degree of inaccuracy when compared to Google's Bard.
Bard, unlike ChatGPT, does not have an information cutoff and leverages the vast expanse of the internet to provide answers. This feature makes Bard a potential tool for cyber criminals who might use it to generate attacks, given its ability to provide information about emerging technologies. On the flip side, cybersecurity professionals can also use Bard to stay abreast with the latest information on security. However, Bard is not without its flaws. It has been observed to suffer from 'hallucinations', where it generates information that is not based on factual data.
### _Privacy Issues_
ChatGPT has faced criticism over privacy concerns, particularly around the storage of information in the chatbot's library and potential leaks of user information. Google Bard, on the other hand, has not been reported to have these issues. However, Bard potentially uses users' activity data for its training, raising concerns about privacy [66]. Unlike Bard, ChatGPT 4 provides users with the option to opt out of contributing their data for training purposes. This feature adds an additional layer of control for users over their data, addressing some of the privacy concerns associated with the use of AI chatbots.
In conclusion, while both ChatGPT and Google Bard have their strengths and weaknesses, it is crucial for users to be aware of these aspects to make informed decisions about their use. As GenAI continues to evolve, it is expected that these systems will become more accurate and secure, providing users with a more reliable and safer experience.
## 7 Open Challenges and Future Directions
The most promising future direction for ChatGPT is integrating with other AI technologies, such as computer vision and robotics. By merging the conversational abilities of ChatGPT with the visual and physical capabilities of computer vision and robotics, we can make intelligent and conversational AI systems that can revolutionize how we interact with technology. For example, a future where we can have a natural language conversation with your smart home system to control the temperature, lights, and other appliances or with a robot that can assist you with cleaning or grocery shopping tasks.
The merging of AI technologies will enable ChatGPT to better comprehend and respond to human communication's complexities, leading to enhanced natural language generation and a more seamless and intuitive user experience. Another exciting possibility for ChatGPT is the potential for increased personalization and customization through learning from user interactions and individual preferences. As ChatGPT continues to interrelate with users, it can learn about their language, tone, and style, generating more personalized and accurate responses. This increased level of personalization can also lead to better customer service and education, as ChatGPT can be trained to understand better and respond to each user's specific needs and preferences. Furthermore, by leveraging the vast amounts of data generated by ChatGPT's interactions, developers can create language models that are highly tuned to each user's specific needs and preferences, leading to a more personalized and engaging experience.
In this section, we will discuss the open challenges of this research as GenAI and LLMs evolve along with potential implementations to explore as outlined in Figure 33.
### _Patching Up Hallucinations_
In section 5.6, we discussed the problem with hallucinations in LLMs. Hallucinations is likely the biggest hole in the performance of LLMs. These mainly can come from biases within or simply the complexity of giant datasets, as these LLMs take in a huge amount of training data, as discussed in section 1.1. LLMs are bound to make mistakes on these large datasets.
One way to attempt to mitigate these hallucinations is to apply automated reinforcement learning to tell the model when it is making a mistake. Researchers could attempt to automate a system that detects and error and corrects it before it goes completely into the model's pool of knowledge. This could be potentially done by implementing anomaly detection for error detection. Another way to potentially reduce the amount of hallucinations could be to curate the training data. Due to the size of the training data for LLMs, this would take a very long time, but ensuring that the data doesn't have any inaccuracies or biases will help LLMs to not hallucinate as much. By developing a system for easy reinforcement learning and ensuring that the training data is processed correctly, LLMs can overall become more reliable and trustworthy sources of information.
### _Defending Against Adversarial Attacks_
In section 2, we talked about different types of ways that a user can manipulate LLMs, mainly ChatGPT, to give responses that go against their own guidelines. The most common way is by doing a jailbreak method, such as the Do Anything Now (DAN). Using reverse psychology and model escaping are two other ways an LLM can be manipulated.
An intuitive way to go about fixing these adversarial attacks is by training the model to recognize an input involving said methods of manipulation, and then making the model respond to the input with rejection. A model could be trained to specifically recognize bits of input that could yield malicious information and potentially weigh what the consequences of giving certain information could be. A model could then reject responding to a malicious prompt. By developing a model with training against adversarial attacks, we will be able to trust LLMs to not help cybercriminals receive malicious code.
### _Privacy and Data Protection_
In section 5, we discussed the many issues ranging from use of personal information to sensitive data being save into an LLM's library.
Fig. 33: Open research challenges and potential future directions for LLMs performance and security.
Use of personal information which LLMs try to use for training and responses can conflict with the European Union's GDPR compliance laws. To fix this, the developer needs to discuss and ensure that the LLM adheres to those laws, as LLMs could potentially be banned from those countries if not. Sensitive information being entered into an LLM's library could be mitigated by a few potential solutions: the LLM simply not saving a user's chat history, company policies, or having the option to delete messages from the LLM's history. Another issue is that an LLM can have an information cutoff; the biggest example is ChatGPT having the September 2021 cutoff. The models could be continuously trained and updated frequently to prevent outdated information from being given so often. An issue with this solution, however, is that the source datasets would have to be updated frequently as well to give the new information. The new information could also be cause for the model's bias, as there would likely be more of the old information on a certain topic than the new information, potentially making the model believe the old information more. If LLMs are able to protect personal and/or sensitive information and completely comply with regulations and laws, the LLMs will secure themselves as completely safe and reliable tools for everyone to use.
## 8 Conclusion
GenAI driven ChatGPT and other LLM tools have made significant impact on the society. We, as humans, have embraced it openly and are using them in different ingenious ways to craft images, write text or create music. Evidently, it is nearly impossible to find a domain where this technology has not infringed and developed use-cases. Needless to mention, cybersecurity is no different, where GenAI has made significant impacts how cybersecurity posture of an organization will evolve with the power and threat ChatGPT (and other LLM tools) offers. This paper attempts to systematically research and present the challenges, limitations and opportunities GenAI offers in cybersecurity space. Using ChatGPT as our primary tool, we first demonstrate how it can be attacked to bypass its ethical and privacy safeguards using reverse psychology and jailbreak techniques. This paper then reflects different cyber attacks that can be created and unleashed using ChatGPT, demonstrating GenAI use in cyber offense. Thereafter, this article also experiment various cyber defense mechanisms supported by ChatGPT, followed by discussion on social, legal and ethical concerns of GenAI. We also highlight the key distinguishing features of two dominant LLM tools ChatGPT and Googe Bard demonstrating their capabilities in terms of cybersecurity. Finally, the paper illustrates several open challenges and research problems pertinent to cybersecurity and performance of GenAI tools. We envision this work will simulate more research and develop novel ways to unleash the potential of GenAI in cybersecurity.
|
2303.12762 | Effect of gamma radiation on electrical properties of diffusive
memristor devices | Diffusive memristors continue to receive tremendous interest due to their
ability to emulate biological neurons and thus aid the development of
bio-inspired computation technology. A major issue with the diffusive memristor
is the inability to reliably control the formation of the conduction filaments
which affects both the device functionality and reproducibility of regimes
after each application of voltage. Here we investigate the effect of gamma
radiation on the electrical properties of the diffusive memristors based on
metallic nanoparticles in dielectric matrix. Our experiments show that after
exposing to radiation, the memristors demonstrate much sharper (and less noisy)
hysteresis in the current-voltage characteristics while preserving the same
low- and high-resistive states as in the pristine samples. Additionally, the
radiation lowers both threshold and hold voltages that correspond to onset of
low- and high- resistive states, respectively. The proposed mechanism involves
radiation-induced defects in the silica matrix which help to establish dominant
pathways for nanoparticles to form conduction filaments. Our findings suggest
an efficient way to enhance working characteristics of diffusive memristors and
to improve their reproducibility. | D. P. Pattnaik, C. Andrews, M. D. Cropper, A. Balanov, S. Saveliev, P. Borisov | 2023-03-22T17:18:03Z | http://arxiv.org/abs/2303.12762v1 | **Effect of gamma radiation on electrical properties of diffusive memristor devices**
## Abstract
Diffusive memristors continue to receive tremendous interest due to their ability to emulate biological neurons and thus aid the development of bio-inspired computation technology. A major issue with the diffusive memristor is the inability to reliably control the formation of the conduction filaments which affects both the device functionality and reproducibility of regimes after each application of voltage. Here we investigate the effect of gamma radiation on the electrical properties of the diffusive memristors based on metallic nanoparticles in dielectric matrix. Our experiments show that after exposing to radiation, the memristors demonstrate much sharper (and less noisy) hysteresis in the current-voltage characteristics while preserving the same low- and high-resistive states as in the pristine samples. Additionally, the radiation lowers both threshold and hold voltages that correspond to onset of low- and high- resistive states, respectively. The proposed mechanism involves radiation-induced defects in the silica matrix which help to establish dominant pathways for nanoparticles to form conduction filaments. Our findings suggest an efficient way to enhance working characteristics of diffusive memristors and to improve their reproducibility.
## 1 Introduction
Abilities of memristors to change their resistance depending on current or voltage history offer tremendous potential for the development of the next generation of non-volatile memory such as Resistive Random-Access Memory (ReRAM) and brain-inspired neuromorphic hardware. [1, 2, 3, 4] The fundamental property of these devices is an I-V hysteresis loop, where the
memristor changes its resistance at certain voltage thresholds. For a diffusive memristor, [5, 6] the composite material is made of a dielectric matrix, typically SiO\({}_{\rm x}\), MgO\({}_{\rm x}\), or HfO\({}_{\rm x}\) with embedded metallic nanoparticles (NPs) of Au, Ag or Cu [7] which are either formed as a result of co-deposition in a single thin film or by growing a thin metallic layer next to the dielectric one. When an external voltage is applied between the electrodes, the NPs diffuse in the dielectric matrix and form a conduction filament (CF) between the electrodes (SET). When the external voltage is removed, the CF collapses (RESET) due to minimization of interfacial surface energy. The formation and rupture of the CF manifests as a change between high resistance state (HRS) and low resistance state (LRS), with corresponding voltages labelled as threshold voltage V\({}_{\rm th}\) and hold voltage V\({}_{\rm h}\) respectively. [7, 8, 9, 10] Mechanisms behind filament formation and rupture are of critical importance for design of neuromorphic hardware based on diffusive memristors. [2, 11, 12, 13, 14]
Diffusive memristors made of SiO\({}_{2}\) layers are structurally similar to a typical CMOS (complementary metal-oxide-semiconductor) device. Previous works on exposure of CMOS and oxide-based ReRAM devices to different sources of ionizing radiation demonstrate that its effects on the device properties are rather complex, however, in general, they could be reduced to generation of electron-hole pairs in the oxide layer. A typical hole yield for \({}^{60}\)C gamma photons in SiO\({}_{2}\) in zero-field was found to be 0.3 if the subsequent recombination has been considered. [15, 16] In SiO\({}_{2}\) devices ionizing radiation creates broken bonds between silicon and oxygen. [17, 18, 19, 20] A broken Si-O bond can yield a trivalent silicon with a dangling bond and a non-bridging oxygen which can further release an electron-hole pair or a hole only while the non-bridging oxygen transforms into to a hole trap state:
\[\equiv\textbf{{Si-0-Si}}\equiv\rightarrow\equiv\textbf{{Si\cdot+ \cdot 0-Si}}\equiv\]
\[\equiv\textbf{{S}}\textbf{{i}}\cdot\ +\ \cdot\textbf{{0}}-\textbf{{S}}\textbf{{i}}\equiv \longrightarrow\textbf{{h}}^{+}+\textbf{{e}}^{-}+\cdot\textbf{{0}}- \textbf{{S}}\textbf{{i}}\equiv\]
\[\equiv\textbf{{S}}\textbf{{i}}\cdot\ +\ \cdot\textbf{{0}}-\textbf{{S}}\textbf{{i}}\equiv \longrightarrow\textbf{{h}}^{+}+\textbf{{0}}^{-}-\textbf{{S}}\textbf{{i}}\equiv\]
In SiO\({}_{2}\) the electrons usually drift away from the original point of generation within picoseconds due to their high mobility, [21] while the less mobile and heavier holes remain inside the oxide and cause the effective reduction of the positive voltage bias when it is applied. However, over time, which could be from seconds to years, depending on the device geometry, temperature and field environment, the holes will also drift away causing a short-term recovery towards the original state. When these holes reach long-lived trapped states, typically near the oxide interface, they build up trapped charge that can cause a more long-lasting change to the device performance ranging from few hours to years. [16] Further, exposure to ionizing radiation also leads to formation of variety point defects beyond trivalent silicon and non-bridging oxygen such as oxygen vacancies, silicon vacancies, interstitial oxygen, and interstitial silicon. These act as dopants and trap states, usually forming positive space charge in the oxide layer. [22, 23, 24, 25, 26] However, for the purpose of resistive switching those defects can have significant impact on device properties.
In case of non-volatile resistive switching in ReRAM thin layers of Cu- doped SiO\({}_{2}\)[27] or HfO\({}_{2}\)[28] with Cu and Pt or Cu and W electrodes, it was reported that formation and rupture of CFs is only weakly affected by various doses of \({}^{60}\)Co \(\gamma\) radiation (total dose of 3.6 kGy [28] or 71 kGy [27]), that is the resistances in LRS, reset voltages, and the switching endurance exhibited no significant reduction, and the SET-RESET process remained reversible. Only the HRS resistance and the set voltage values showed some relatively small decrease and increase, respectively, after the irradiation. Yuan et al, [29] reported for devices with Ag doped AlO\({}_{x}\) layers demonstrating non-volatile resistive switching that the LRS resistance, set and reset voltage values were almost stable (only slight decrease in set voltage values) upon \(\gamma\)-irradiation from a \({}^{60}\)Co source at the total dose values of 5kGy and 10 kGy. However, the HRS resistance
and forming voltage values needed to perform the initial electroforming were found to decrease and increase, respectively. This was interpreted, firstly, as a result of increased migration and spatial dispersion of Ag ions into the dielectric matrix after the irradiation, and secondly, as a signature of radiation induced holes which became trapped in the AlOx layer near the bottom electrode (due to the work function difference) opposite the top electrode with larger concentration of Ag NPs. This caused an increase in the forming voltage values as a higher electric field was needed to initiate field-driven Ag diffusion over increased barriers and trapped charges in order to form the initial switching filament during the electroforming process. Further it was suggested that the trapped holes facilitated formation of Ag CFs after the electroforming, hence some decrease in the set voltage values was observed. At the same time, this was reported to be in contradiction to findings in for a Cu-doped HfO\({}_{2}\) system where the opposite was found. [27] At the same time, both Refs [29] and [28]found a similar decrease in HRS resistance, which was explained in Ref. [29] by the increased leakage due to tunneling through new radiation induced defects.
In this letter, we studied the impact of \(\gamma\)-radiation from a \({}^{60}\)Co source with a total dose of 50 kGy, on diffusive memristor devices made of a SiO\({}_{2}\) layer doped with Ag. Our measurements revealed that the exposure to radiation led to more sharp hysteresis in I-V curves of the devices and to decrease of the corresponding threshold voltages. We also found improvement in stability of resistive switching. Further, the artificial spiking neurons made of the irradiated devices demonstrated much higher spiking frequencies in comparison to pristine samples, which is promising for acceleration of neuromorphic computations.
## 2 Results and Discussion
I-V curves of four devices from the same batch were measured and averaged after five voltage sweeps as shown **Figure 1 (a)-(d)** before and after the irradiation. A noticeable change in memristor switching behavior can be observed in all the four devices after the irradiation. For the pristine samples the switching between HRS and LRS was much more gradual and included numerous irregular current jumps, likely due to randomness in CF formation and destruction with increasing or decreasing the applied voltage. At the same time, the device current demonstrated significant noise which we explain by instabilities in the CF.
In contrast to this, after being exposed to radiation the same devices started to demonstrate more distinct and stable threshold switching behavior. For the increasing voltage sweep, the resistance switching from HRS to LRS became rather abrupt and happened at lower threshold voltage values, which suggests that CF formation must have required less time and energy after the irradiation. The reset sweep showed similar abrupt switching behavior from LRS to HRS. The more stable and less fluctuating I-V curves allow also to assume more reproducible and less random CF formations. These effects could be explained by appearance of the radiation-induced defects in the dielectric matrix that promote nucleation of the CF and direct it further formation.
### XPS Analysis
The chemical state and composition of the device switching layer before and after the irradiation were studied by XPS measurements. Note that the XPS is only sensitive to the top few nm of the film, however, we expect no significant difference in the film chemical composition across the film thickness. The XPS spectrum for Ag taken on the sample before irradiation **(Figure 2(a))** shows 3d\({}_{3/2}\) and 3d\({}_{5/2}\) peaks corresponding only to metallic Ag, however, after irradiation each peak became split into two corresponding to metallic Ag and silver oxide phases **(Figure 2(b))**, that is, partial silver oxidation must have taken place after the irradiation. [27, 31, 32, 33]. This is further corroborated by the change in the position, shape
and width of the Auger MNN Ag peaks measured before and after irradiation (**Figure 2 (c)**). [32, 34]
A typical XPS spectrum for Si 2p taken before irradiation (**Figure 3(a)**, black line) demonstrates a single peak corresponding to silicon oxide. After irradiation (**Figure 3(a)**, red line) its height significantly decreases accompanied by a slight shift in the peak position, while an additional, new peak corresponding to pure Si appears at 96.78 eV [35] evidencing some reduction of oxidized Si. Similarly, the XPS spectrum for O 1s (**Figure 3 (b)**, black line) shows a single peak before the irradiation, corresponding to oxygen in silicon oxide, which decreases in the intensity after the irradiation (**Figure 3 (b)**, red line) and becomes accompanied by a new additional peak at 530.38 eV [35, 36] which is assigned to a metal-oxide phase (in this case likely silver-oxide). This combined evidence from the Si 2p peak shift along with the appearance of the pure Si and metal-oxide O1s peaks after the irradiation supports a hypothesis that some Si in SiO\({}_{2}\) was reduced and that some oxygen defects in the form of oxygen vacancies are introduced in the film due to the irradiation. [37, 38]
Further, comparison of XPS spectra before and after the irradiation implies that some amount of interstitial oxygen must have been generated thus partially oxidizing the silver clusters and leaving oxygen vacancies in silica dielectric matrix. Here, it is natural to assume that some silver ions could have reacted with the non-bridging oxygen at the interface to SiO\({}_{2}\). In parallel to formation of interstitial oxygen, generation of interstitial silicon must have happened due to many broken Si-O bonds which should release some amount of silicon out of the crystal lattice.
### Artificial Neuron Spiking
For further insight into the effects of \(\gamma\)-radiation on our devices we investigated the spiking behavior of the artificial neurons whose electrical circuit is shown in **Figure 4(a)**. In our study, the artificial neuron included a diffusive memristor connected in series with an external resistor
R\({}_{\rm L}\)= 55 k\(\Omega\) and in parallel to an external capacitor Cp=1 nF. This circuit was powered by a constant voltage V\({}_{\rm ext}\)=1V. The applied V\({}_{\rm ext}\) charged C\({}_{\rm P}\) until the voltage drop across the memristor reached its threshold value causing switching to LRS. When in LRS, the capacitor discharged producing a current spike whilst the device voltage has decreased until it became below the hold voltage and the device switched back to HRS. This process of repeated charging and discharging continued itself and resulted in a series of electric current spikes mimicking spiking of biological neuron. [7]
Comparison of voltage spiking in an artificial neuron with the same memristor measured before and after the irradiation (**Figure 4 (b)**, top and bottom panel, respectively) shows a dramatic increase in the spiking frequency after the irradiation. As the spiking frequency is determined by the rate of formation and rupture of the CFs, this increase means that the radiation-induced defects promote faster and more reliable formation and disruption of CFs.
**3. Mechanism**
3.1. Hypothesis
As discussed previously, the transition from HRS to LRS in diffusive memristors made of SiO\({}_{2}\) doped with Ag is realized by Ag NPs clustering in a CF between the two electrodes driven by the external electric field as illustrated in **Figure 5 (a).**
After \(\gamma\)- irradiation, the oxide layer comprising Ag NPs is likely to contain high concentration of trapped holes, oxygen vacancies, interstitial oxygen, and interstitial silicon. The XPS data, described above suggests that after irradiation, at least some fraction of the interstitial oxygen does oxidize interfacial silver ions, whilst the interstitial silicon atoms together with oxygen vacancies are left distributed in the oxide layer. These silicon ions could consequently diffuse to form silicon nano inclusions (NI) similar to the ones reported in [39, 40]. Some redistribution of interstitial silicon and oxygen vacancies could have also happened due to work function difference (caused by the difference in electrode roughness [14] or film thickness [39, 41, 42] )
between the electrodes, however, its effect for the device performance is not that significant. The subsequent field application should further produce oxygen vacancies to concentrate at the electrode opposite the positive one.
We hypothesize that the metastable structure consisting of silicon rich silica which readily segregates to oxygen vacancies along with the Si\({}^{+}\) contribute to the modification of the electrical properties of our diffusive memristor by creating a prevalent guiding conduction path for Ag NPs. [43] Consequently, Si NI clusters emerged after irradiation enhance and accelerate formation of CF switching the device to LRS, see **Figure 5(b)**. Namely, Ag NPs do not need any more to form a continuous CF connecting the top and bottom electrodes but instead to amass a series of smaller CFs between the established Si NI clusters within the oxide layers. Thus, the energy barrier to form a LRS is lowered resulting in reduction of V\({}_{\text{th}}\) and the dynamics of CF formation happens on shorter timescales which explains more abrupt threshold switching in the IV curves and higher spiking frequency in artificial neurons made after the irradiation.
The distributed Si NIs, perhaps with support by oxygen vacancies, form a backbone on which a conduction pathway is formed as a series of CFs stretching from one electrode to another. As the result, the formation (and disruption) of conducting channels with variation of V\({}_{\text{ext}}\) become more stable and reproducible. Moreover, increased doping of silicon oxide with holes and electrons could promote tunneling between Ag NPs which do not form part of a CF, and between broken filaments where a continuous conduction is not possible for some reason. However, more rigor verification of the role of Si NIs as well of the effects of trapped charges and oxygen vacancies in the oxide requires further theoretical and experimental studies. [39, 44, 45]
### Theoretical Model and Simulations
In line with the above hypothesis, the influence of radiation on the filament formation effectively constrains particle drift and diffusion. Before the irradiation, almost all volume
between memristive terminals is available for Ag clusters, and after the irradiation pinning centers for Ag clusters are created, thus forming preferable channels/valleys for Ag NPs to move between terminals. Regardless of the character of the interaction between the pinning centers and the Ag NPs, that is, whether it is attractive or repulsive, the room for the particle diffusion is effectively shrinks towards certain preferable paths. Such a constraint could be considered as a transition from 2D diffusion [46] to the so-called "single-file diffusion" [47]. The effective shrinking of the transfer space can be modelled by varying the transverse trapping potential within the commonly accepted diffusive memristor model (see supplementary information in Ref. [48]).
Within the model, two component random force (\(\xi_{l,x}\), \(\xi_{l,x}\)) with zero mean (\(\langle\xi_{l,x}\rangle\), \(\langle\xi_{l,x}\rangle\))=(0, 0) and delta-correlations in time \(t\), \(\langle\xi_{l,x}(0)\;\xi_{l,x}(t)\rangle=\delta(t)\), \(\langle\xi_{i,y}(0)\;\xi_{i,y}(t)\rangle=\delta(t)\), \(\langle\xi_{i,x}(0)\;\xi_{i,y}(t)\rangle=0\), together with the drift force \(\frac{q\nu}{L}\) (\(q\) is charge of Ag cluster, \(V\) electric voltage, \(L\) is the gap between the electrodes or between arms of forming filament) control diffusion of \(i\)-th Ag NP with the coordinates \((x_{i},y_{i})\) in a two-dimensional potential \(U(x_{i},y_{i})=U_{x}(x_{i})+\alpha y_{i}^{2}\). The potential along \(x\)-axis \(U_{x}(x_{i})\) (see inset in **Fig 6 f**) has a large minimum nearby one of the memristor terminals reflecting attraction to Ag-rich areas to minimize Ag-SiO interface energy, while the parabolic potential \(\alpha y_{i}^{2}\) implies constrain for transverse particle diffusion, that is, the one that is controlled by the presence of radiation-induced pinning centers, i.e. the larger \(\alpha\) the more single-filed the diffusion becomes.
\[\eta\frac{dx_{i}}{dt}=-\frac{\partial U(x_{i}y_{i})}{\partial x_{i}}-\sum_{j \neq i}\frac{\partial W(x_{i}-x_{j}y_{i}-y_{j})}{\partial x_{i}}+q\frac{\nu}{ L}+\sqrt{2\eta k_{B}T}\xi_{l,x},\]
\[\eta\frac{dy_{i}}{dt}=-\frac{\partial U(x_{i}y_{i})}{\partial y_{i}}-\sum_{j \neq i}\frac{\partial W(x_{i}-x_{j}y_{i}-y_{j})}{\partial y_{i}}+q\frac{\nu}{ L}+\sqrt{2\eta k_{B}T}\xi_{l,y},\]
\[\frac{dT}{dt}=-\frac{\nu^{2}}{CR_{M}}-\kappa(T-T_{0}),\]
\(\tau\frac{dV}{dt}=V_{ext}-\left(1+\frac{R_{L}}{R_{M}}\right)V\).
Here, \(T\) is the cluster temperature, which can significantly be different from the memristor matrix temperature; \(T_{0}\) is the bath temperature (e.g., SiO-matrix or memristor terminals/substrate temperature), \(\eta,k_{B}\) are Ag-cluster viscosity and the Boltzmann coefficient, while \(\kappa,C\) are the heat transfer coefficient and cluster heat capacity, respectfully. Repulsive interaction of Ag-clusters defined by the potential \(W=W_{0}\exp\left(-\rho_{i,j}/r_{int}\right)\) with \(\rho_{int}=\sqrt{\left(x_{i}-x_{j}\right)^{2}+\left(y_{i}-y_{j}\right)^{2}}\) prevents their agglomeration in the potential well and controls the transition from 2D to single file diffusion. [49, 50] The last equation in the set is the Kirchhoff's voltage law for artificial neuron circuit with voltage bias \(V_{ext}\) applied to the artificial neuron, the load resistance \(R_{L}\), parallel capacitance \(C_{p}\), and RC time constant \(\tau\), that is, the model describes spiking behaviour of our artificial neuron before and after the irradiation (see Figure 4 for the circuit scheme and experimental data). The resistance of the memristor is calculated by considering all paths between the electrodes through Ag-clusters and assuming tunnelling resistance between two clusters \(i\) and \(j\), \(R_{i,j}=\exp\left(-\frac{\rho_{i,j}}{\lambda}\right)\) if several clusters are missed. The resistance of each path \(p\) is estimated as a series connection of tunneling resistances of all clusters in that path, and then the total memristor resistance \(R_{\text{M}}\) is evaluated as a parallel connection of all contributing paths, such that the total conductance is:
\[G_{M}=\frac{1}{R_{M}}=\sum_{p}\left(\sum_{i,j\in p}R_{i,j}\right)^{-1}\]
As was shown earlier, such ''compact models' [51] allow to qualitatively describe all observed experimental features and provide an insight into underlying physical phenomena. In our simulation for simplicity, we consider a case of four particles.
The dynamics of each of the four Ag particles was simulated for two cases: when lateral diffusion is high (**Figure 6 a, c, and e**, \(\alpha=1\)), representing the sample before the irradiation; and when lateral diffusion is suppressed (**Figure 6 b, d** and f, \(\alpha=100\)), representing the sample after irradiation. Before the irradiation, the conductance demonstrates a very complex irregular dynamics (Figure 6a) driven by rather erratic behavior of Ag-clusters. Collective movement of different Ag particles along the x-axis (Figure 6c, different colors) is characterized by them overtaking each other frequently which imposes randomness into the total conductance, e.g., the Ag-cluster shown by black curve overtakes Ag clusters with other trajectories at \(t\approx 0.5\). This randomness is also reflected by random walks of the particles in the \(y\)-direction as illustrated in Figure 6 e. For the sample after irradiation the fluctuations in the \(y\)-direction are significantly subdued (Figure 6 f), evincing the transition from a 2D to a quasi-1D dynamics. This also makes the movement of Ag-clusters in the \(x\)-direction (see Figure 6 d) more regular, as those start to move one after another with no overtaking taking place. Such ordered collective behavior of the Ag-clusters accelerates the conductance spiking, which becomes not only more regular, but also more frequent (cf. **Figure 6a and b**). Thus, these results are in a perfect qualitative agreement with experimental measurements of artificial neuron spiking presented in Figure 4b, which justifies a transition from 2D to quasi-1D drift-diffusion after irradiation as a plausible physical mechanism.
## 4 Conclusion
In conclusion, we experimentally and theoretically studied effects of \(\gamma\)- radiation from a \({}^{60}\)C source on electric properties of Ag -based diffusive memristors. The devices after irradiation demonstrated lower threshold voltages and more abrupt resistance switching between HRS and LRS. We also found out that artificial neurons with the memristors after irradiation demonstrated higher frequency and more regular spiking than those with pristine memristors.
These phenomena can be explained by formation of radiation induced Si- nano inclusions which direct and accelerate formation of CFs. Our simulations suggest that the physical mechanism of the observed dramatic change of spiking is the dynamical transition from 2D complex diffusion in the pristine samples to single-file diffusion in the samples being exposed to radiation. Our findings not only shed a light on how an exposure to a high energy radiation affects the charge transport in diffusive switches, but also offer an efficient way to improve the performance of diffusive memristors and the related artificial neurons based on these memristors via creation of artificial pinning centers. They also stimulate further research in understanding the roles of various radiation-induced defects and nano-inclusions on charge transport of memristive devices, which would promote development of more endurable and radiation immune technological concepts.
## 5 Methods
_Sample preparation_: A bottom electrode of 70 nm Pt was deposited by magnetron sputtering on a SiO2/Si wafer, followed by co-sputtering of 60 nm of Ag and SiO2 in a mixed atmosphere of Ar and O2. A top electrode of 30 nm Pt was sputtered through a shadow mask with circular holes of 100 \(\upmu\)m in diameter to obtain the following film structure: SiO2\(\backslash\)Pt\(\backslash\)SiOx:Ag\(\backslash\)Pt. All the layers were deposited at room temperature of the substrate and at a growth pressure of 5.5 mTorr.
_Electrical characterization_: Device contacts were made using tungsten tips housed in a probe station by Everbeing. Current-voltage characterization was performed using a Keithley 4200 SCS parameter analyzer. For measurements of self-sustained current spikes, a voltage pulse (1V, 50s) was applied to the device using a Rigol waveform generator and the device voltage was recorded using a PicoScope digital oscilloscope whilst the memristor was connected in series to a load resistance R1= 65 k\(\Omega\) and in parallel to a capacitor Cp=1 nF.
-ray photoelectron spectroscopy_: XPS was performed using a Thermo K-Alpha system with an Al Ka mono-chromated (1486.6 eV) source with an overall energy resolution of 350 meV. The XPS peaks are charge corrected to adventitious carbon at 284.8eV.
_Radiation exposure_: The gamma radiation exposure of the devices was carried out at the Dalton Cumbrian Facility using an irradiator in a self-contained Foss Therapy Services Model 812 with a 9 L sample chamber. The samples received \({}^{60}\)Co gamma (\(\gamma\)) radiation with two energies of 1.17 MeV and 1.33 MeV (average energy of 1.25 MeV) at a dose rate of 229.51 Gy/min for exactly 217.86 minutes to give a total accumulated dose of 50 kGy. This dose was chosen as previous works on other memristive systems had reported no significant change to electrical properties for lower radiation doses. [24, 29] The irradiator contained three source rods with up to three source capsules (GIK-7M-4) with an initial activity of 2500 Ci per rod such that the activity was evenly distributed along the length of each source rod.
_Theoretical simulation:_ Simulation parameters are: \(\frac{\lambda}{L}\) = 0.1,\(\frac{r_{int}}{L}\) = 0.04; \(\frac{W_{0}}{U(-1)-U(0)}\) = 0.6; \(\kappa\tau\) = 90; \(\alpha L^{2}/(U(-1)-U(0))\) = 1 and 100 for 2D and single-file diffusion, respectively; \(\frac{q\nu}{U(-1)-U(0)}\) = 60; \(\frac{R_{L}}{min}R_{M}\) = 300; \(T_{0}/(U(-1)-U(0))\) = 0.0012; we use units where \(2k_{B}=1,\eta=1\), C=1.
## Acknowledgements
The authors would like to thank Sam Davis for the XPS characterization and acknowledge the use of the facilities within the Loughborough Materials Characterisation Centre. The authors acknowledge the support of The University of Manchester's Dalton Cumbrian Facility (DCF), a partner in the National Nuclear User Facility, the EPSRC UK National Ion Beam Centre and the Henry Royce Institute. We acknowledge Ruth Edge for the assistance during the Gamma irradiation. This work was supported by The Engineering and Physical Sciences Research Council (EPSRC), grant no. EP/S032843/1.
Figure 1: (a)-(d) Current- voltage characteristics for four separate devices from the same batch before (black) and after the irradiation (red).
**Figure 2.** (a, b) XPS spectra (black line) for Ag 3d\({}_{32}\) and 3d\({}_{52}\) peaks before (a) and after the irradiation (b). The corresponding fits represent peaks for metallic Ag (red line) and silver oxide (blue line) in 3d\({}_{5/2}\). **(c)** Auger MNN peak for Ag before (red) and after irradiation (blue).
**Figure 3.** (a, b) XPS spectra for Si2p and O1s before (black line) and after (red line) radiation.
Figure 4: (a) Electrical circuit scheme for an artificial neuron with a diffusive memristor. (b) Experimentally observed voltage spikes measured across the memristor before (top, black) and after the irradiation (bottom, red).
Figure 5: Schematics of a SiO\({}_{x}\):Ag diffusive memristor showing (a) Conduction filament in LRS for the pristine sample before irradiation. (b) Conduction filament in LRS for the sample after irradiation, containing Si NIs and oxygen vacancies.
Figure 6: Simulated diffusion of four Ag clusters (black, red, green and blue curves) through a gap of size \(L\) located between the electrodes of a diffusive memristor which is part of an artificial spiking neuron. The diffusion is of a 2D (a, c, e) or single-file (b, d, f) character. Spiking of conductance normalized to the maximal value (c, d) \(x\) coordinate of a Ag cluster vs. time in the direction perpendicular to the electrodes (i.e. longitudinal diffusion) (e, f) \(y\) coordinate of a Ag cluster vs time in the direction along the electrodes (i.e. transverse diffusion). The inset in (f) shows the potential \(U(x)\) between the terminals.
|
2308.08448 | Implementing Quantum Generative Adversarial Network (qGAN) and QCBM in
Finance | Quantum machine learning (QML) is a cross-disciplinary subject made up of two
of the most exciting research areas: quantum computing and classical machine
learning (ML), with ML and artificial intelligence (AI) being projected as the
first fields that will be impacted by the rise of quantum machines. Quantum
computers are being used today in drug discovery, material & molecular
modelling and finance. In this work, we discuss some upcoming active new
research areas in application of quantum machine learning (QML) in finance. We
discuss certain QML models that has become areas of active interest in the
financial world for various applications. We use real world financial dataset
and compare models such as qGAN (quantum generative adversarial networks) and
QCBM (quantum circuit Born machine) among others, using simulated environments.
For the qGAN, we define quantum circuits for discriminators and generators and
show promises of future quantum advantage via QML in finance. | Santanu Ganguly | 2023-08-15T14:21:16Z | http://arxiv.org/abs/2308.08448v1 | # Implementing Quantum Generative Adversarial Network (qGAN) and QCBM in Finance
###### Abstract
Quantum machine learning (QML) is a cross-disciplinary subject made up of two of the most exciting research areas: quantum computing and classical machine learning (ML), with ML and artificial intelligence (AI) being projected as the first fields that will be impacted by the first quantum machines. Quantum computers are being used today in drug discovery, material & molecular modelling and finance. In this work, we discuss some upcoming active new research areas in application of quantum machine learning (QML) in finance. We discuss certain QML models that has become areas of active interest in the financial world for various applications. We use real world financial dataset and compare models such as qGAN (quantum generative adversarial networks) and QCBM (quantum circuit Born machine) among others, using simulated environments. For the qGAN, we define quantum circuits for discriminators and generators and show promises of future quantum advantage via QML in finance.
Quantum Generative Adversarial Network (qGAN), Quantum Circuits Born Machine (QCBM), QML
## I Quantum Computing in Finance
Quantum finance is an upcoming cross-discipline of financial engineering with the integration of quantum field theory, classical finance theory, computer science, and artificial intelligence (AI) technology [1]. Quantum machine learning (QML) is a cross-disciplinary subject made up of two of the most exciting research areas: quantum computing and classical machine learning. With recent advances of quantum computing technologies and the promised quantum advantage in QML, researchers have begun to consider how to utilize them in industries and a major focus area is aspects of finance (see review at [1]).
Since financial institutions are performing enormous tasks of numerical calculation in their daily works, promise of speed-up of such tasks by quantum computers are too enticing to ignore. For example, one of such tasks is pricing of financial derivatives: large banks typically have a huge number of derivatives written on various types of assets such as stock price, foreign exchange rate, interest rate, commodity and so on. Therefore, pricing of derivatives is an important issue for them. Another very important aspect of quantum finance is option pricing, closely related to financial derivatives [2]. Efforts of marrying quantum machine learning to stock market dynamics by leveraging the probabilistic nature of quantum computing and algorithms have been ongoing for forecasting and risk-analysis as well [3].
Quantum computers have been found to be useful in solving several well-known financial engineering problems such as Credit Risk Analysis (CRA) [4]. Simulation of portfolio loss distribution is achieved in classical computing by using the classical Monte Carlo method. Monte Carlo works by correlating underlying economic factors and associated asset relationships as a way to simulate all possible loss events. This is done by performing iterations over N samples taken from normal distribution of the data prior to averaging the several associated trajectories. Both Value at Risk (VaR) and Conditional Value at Risk (CVaR), critical properties of CRA evaluation, require large number of samples since both correspond to loss events that occur less frequently. This process tends to make classical Monte Carlo an expensive process, both computationally and time-wise. On the other hand, Quantum computers are deemed to be natural fit to solve Monte Carlo simulations - which are fundamental for derivatives pricing and related tasks, since they can be viewed as true random number generators. Quantum annealers have been found to be efficient in solving multi-period integer portfolio optimization problem, which is NP-Complete.
Quantum generative models, due to their inherent nature, promises to deliver quantum advantage on NISQ devices. This paper focuses on implementation of two quantum generative models as applied to financial problems: a) application of Quantum Generative Adversarial Networks (qGAN), a new and active research area that promises to be one where NISQ-based algorithms will be particularly fruitful for certain financial applications and b) Quantum Circuit Born Machine (QCBM). The motivation for this work is [5] which shows that Restricted Boltzmann Machines (RBM) outperforms parametric models for this type of financial dataset. This paper briefly describes classical and quantum option pricing, fundamentals of classical Generative Adversarial Networks (GAN) and qGAN, associated works done and implementation of qGAN for options pricing using real world financial dataset. Finally, we move onto the description and science of QCBM and implementation of it using real world dataset and discuss the results and future works for both schemes.
## II GAN and qGAN
### _Generative Adversarial Networks (GAN)_
Generative models are machine learning (ML) models that employs very powerful statistical techniques. The advantage of GANs is that they can be trained in an unsupervised way. GANs were introduced in 2014 in a well cited paper [6] by Goodfellow et al. and tested initially on image datasets [7, 8], medicine [9, 10], Quantitative Finance [11], for portfolio optimization [12], fraud detection [13], trading model optimization [14] and generation of time series [15, 16].
GAN is an ML model used to train to generate data closely resembling the patterns and properties of a given dataset. This task is achieved by a GAN model by utilizing two neural networks (NN) as main components: a generator, which is a |
2307.11944 | LCPOM: Precise Reconstruction of Polarized Optical Microscopy Images of
Liquid Crystals | When viewed with a cross-polarized optical microscope (POM), liquid crystals
display interference colors and complex patterns that depend on the material's
microscopic orientation. That orientation can be manipulated by application of
external fields, which provides the basis for applications in optical display
and sensing technologies. The color patterns themselves have a high information
content. Traditionally, however, calculations of the optical appearance of
liquid crystals have been performed by assuming that a single-wavelength light
source is employed, and reported in a monochromatic scale. In this work, the
original Jones matrix method is extended to calculate the colored images that
arise when a liquid crystal is exposed to a multi-wavelength source. By
accounting for the material properties, the visible light spectrum and the CIE
color matching functions, we demonstrate that the proposed approach produces
colored POM images that are in quantitative agreement with experimental data.
Results are presented for a variety of systems, including radial, bipolar, and
cholesteric droplets, where results of simulations are compared to experimental
microscopy images. The effects of droplet size, topological defect structure,
and droplet orientation are examined systematically. The technique introduced
here generates images that can be directly compared to experiments, thereby
facilitating machine learning efforts aimed at interpreting LC microscopy
images, and paving the way for the inverse design of materials capable of
producing specific internal microstructures in response to external stimuli. | Chuqiao Chen, Viviana Palacio-Betancur, Sepideh Norouzi, Pablo F. Zubieta Rico, Monirosadat Sadati, Stuart J. Rowan, Juan J. de Pablo | 2023-07-22T00:00:06Z | http://arxiv.org/abs/2307.11944v1 | # LCPOM: Precise Reconstruction of
###### Abstract
When viewed with a cross-polarized optical microscope (POM), liquid crystals display interference colors and complex patterns that depend on the material's microscopic orientation. That orientation can be manipulated by application of external fields, which provides the basis for applications in optical display and sensing technologies. The color patterns themselves have a high information content. Traditionally, however, calculations of the optical appearance of liquid crystals have been performed by assuming that a single-wavelength light source is employed, and reported in a monochromatic scale. In this work, the original Jones matrix method is extended to calculate the colored images that arise when a liquid crystal is exposed to a multi-wavelength source. By accounting for the material properties, the visible light spectrum and the CIE color matching functions, we demonstrate that the proposed approach produces colored POM images that are in quantitative agreement with experimental data. Results are presented for a variety of systems, including radial, bipolar, and cholesteric droplets, where results of simulations are compared to experimental microscopy images. The effects of droplet size, topological defect structure, and droplet orientation are examined systematically. The technique introduced here generates images that can be directly compared to experiments, thereby facilitating machine learning efforts aimed at interpreting LC microscopy images, and paving the way for the inverse design of materials capable of producing specific internal microstructures in response to external stimuli.
Polarized Optical Microscopy Liquid Crystals Simulations
## 1 Introduction
When confined between a pair of linear polarizers, liquid crystals (LCs) can display a wide range of interference colors and complex patterns owing to the material's optical birefringence (_i.e._ the difference between refractive indices parallel and perpendicular to the molecular axis). [1, 2] The brightness and color hues are sensitive to the local molecular order, which can be controlled through external stimuli, including electric fields, magnetic fields, flows, chemical cues and temperature. [1, 2, 3, 4] On account of their responsive nature and large optical birefringence, LCs are widely used in optical devices, ranging from mature technologies such as liquid crystal displays to state-of-the-art sensors that can detect toxins, biomolecules, and microplastics. [5, 6, 7, 8, 9, 10]
Liquid crystals are generally characterized using polarized optical microscopy (POM), which provides a direct measure of the material's alignment and and is able to identify any topological defects that might arise in a sample.[11, 12, 13, 14, 15] In confined LC systems, large spatial distortions in the order field can develop on account of the incompatibility between surface and bulk orientations, leading to distinct POM images. The realignment of a confined LC can be triggered by altering the balance between elastic and surface energies; a minute change in the external environment can completely change the material's appearance under POM.[11, 16, 17] Sensing and display devices often rely solely on the
transition between configurations that exhibit different topological defects (such as bipolar and radial), which are identifiable through the brightness profile. The substantial color changes that accompany such transitions are rarely exploited.[15, 18] Understanding how the POM color patterns of LCs correspond to a particular molecular alignment is of interest not only from a fundamental perspective, but also for development of next-generation display and sensing devices.
One method to understand the color texture of POM images is to rely on the Michel-Levy chart, which tabulates the interference color as a function of thickness and birefringence. [19, 20, 21] That method, however, is limited to a uniform orientation of the director field, and is incapable of predicting the interference color in confined geometries where the alignment exhibits large spatial variations. In addition, the POM images can change with light sources and viewing angles, making it difficult to match the color patterns with the underlying order field and hindering comparisons to experiments with different setups.
More generally, POM images are calculated without color information using the Ondris-Crawford method, which produces the brightness profile corresponding to the LC order field. [22, 23, 24] In this method, the sample is first discretized into layers whose thickness is much smaller than the wavelength. Subsequently, the propagation of light is modeled by multiplying the Jones matrix of each layer, which computes the retardation according to the local LC alignment. The method is versatile and easy to implement. It has been applied to many geometries, including droplets and toroids, where it is possible to reproduce the brightness profile of both nematic and cholesteric LCs. [23, 24, 25, 26, 17] The Ondris-Crawford method can be viewed as the standard approach for comparisons between experimental POM images and model predictions with numerical simulations.[11, 13, 17, 27, 28, 29, 30] A major limitation, however, is that the original formulation assumes a single wavelength, and it is difficult to compare a monochromatic brightness profile with the color images that are typically obtained from a white light source with a distribution of wavelengths in the range between 400-680nm. Note that the effect of having a broadband light source has been discussed in several experimental and simulation studies, but reports that include simulated colored POM images have been limited and the agreement with experiments has been limited. [26, 27, 28, 29, 31] An exception is provided by the work of Yoshioka _et al._, who presented several colored POM images from calculations that showed good agreement with experiments. [32] Unfortunately, few details regarding the calculation of the color images were provided in that report. In this work, we present a systematic methodological study of the computational generation of POM images (and an accompanying software package), which is validated through quantitative comparisons to experimental data for a variety of systems.
Figure 1: Illustration of the method and plots of key physical properties. (A) Schematic representation of the method for calculation of a colored POM image from the order field. In this illustration, a radial droplet with \(d=20\mu m\) is computed with \(N_{\lambda}=14\) wavelengths. (B) LED light spectrum obtained from the manufacturer. (C) Refractive indices of 5CB as a function of order parameter \(S\) and wavelength \(\lambda\). (D) Transmission ratio as a function of incident light angle. \(Tr_{p}\), \(Tr_{s}\) and \(\overline{Tr}\) stand for transmission ratios of p-polarized, s-polarized light and the weighted average value. See Supplementary materials for details. (E) Color matching functions from the CIE 1931 standard.
## 2 Calculating optical textures
The colored POM images are computed by introducing a color matching function that combines the information corresponding to multiple wavelengths to produce RGB values equivalent to the colors perceived by the human eye. [33, 34, 35, 36] In addition, we take into consideration the emission spectrum of the light source, the dependency of refractive indices on wavelength, and the reflection at the droplet interface. The accuracy and the applicability of our method are demonstrated by comparing simulated and experimental POM images of radial, bipolar, and cholesteric droplets.
When colored POM images are captured in the laboratory, the sample is typically illuminated with a white light that has a non-uniform spectrum distribution. The light spectrum differs between laboratories and can alter the color texture. [37] To produce an accurate color image, the LED spectrum \(I(\lambda)\) for experimental images produced in this work is obtained from the manufacturer or measured by an optical spectrometer. (Fig. 1A and Supplementary Fig. S1) In the calculations presented below, the light spectrum (400 nm - 680 nm) is discretized into \(N_{v}=20\) intervals and the intensity profile for each wavelength is computed using the Ondris-Crawford method. [23, 24]
The order field configurations are obtained either from analytical or numerical solutions, which are then interpolated onto a regular grid with the desired resolution. Each of the single-wavelength intensity profiles depends on the local LC alignment and the optical birefringence (\(\Delta n\)). (Fig. 1A) It is important to note that \(\Delta n\) is not a single constant, but a function of the wavelength (\(\lambda\)) and the order parameter \(S\) (which is a function of temperature and spatial gradient). Overall, both the ordinary and extraordinary refractive indices (\(n_{o}\) and \(n_{e}\) respectively) decrease with wavelength and saturate in the near-IR region. The quantitative relationship can be described by a three-band model with constants fitted to experimental measurements. [4, 38] (Fig. 1C, see Supplementary Materials for the relevant equations).
In confined droplets, the director field is distorted and the local order parameter \(S\) is smaller near the topological defects, leading to a local drop in \(\Delta n\). Here we assume that the dependence of \(n_{e}\) and \(n_{o}\) on \(S\) caused by the spatial gradient is equivalent to the variation of \(S\) caused by temperature. The following relationship is adapted from the Vuks equation: [39]
\[\Delta n=(n_{e}-n_{o})S/S_{r}\]
As an example, for the liquid crystal 4-cyano-4'-phenethyliphenyl (5CB), the reference state is taken to be \(S_{r}=0.76\) at \(T=25.1^{\circ}\). The refractive indices \(n_{o}\) and \(n_{e}\) are plotted as a function of \(\lambda\) for uniform alignment (\(S=0.76\)) and near a topological defect (\(S=0.1\)) (Fig. 1C). It is worth noting that at \(S=0.76\), \(\Delta n\) decreases from 0.067 to 0.051 as \(\lambda\) increases from 0.40\(\mu m\) to 0.70\(\mu m\), which is significant enough to affect higher order interference colors; this effect has generally been ignored in previous reports.
In addition to the interference taking place in the bulk of the liquid crystal, we also consider the light transmission ratio at the water-LC interface. For simplicity, diffraction and refraction at the interface are ignored. In this study, the transmission ratio \(\overline{Tr}\) is approximated from the Fresnel equation using the refractive indices of water and 5CB. [38, 40] (Details are available in Supplementary materials) \(\overline{Tr}\) decreases with increasing incident angle (\(\theta_{i}\)), leading to lower brightness near the periphery of the droplet (Fig. 1F).
To combine these multiple intensity profiles at different wavelengths into an RGB image, we consider how humans perceive color and how color images are stored in modern electronic devices. Briefly, the human eye can sense different wavelengths of light mainly with three types of cone cells in the retina. [33] These signals are processed by an intricate neural network to generate a perception of color in the brain. Modern-day electronics represent color images by assigning tri-stimuli values such as RGB, XYZ, or HSV (Hue Saturation Value) to each pixel, so that digital displays can allow the human eye to perceive colors that are relatively independent of the device or the lighting environment. The matching functions \(x_{i}\) (\(\lambda\)) that transform wavelength signals to XYZ values were originally determined by the International Commission on Illumination (in 1931 - CIE 1931 color space), and they are still widely employed today (Fig. 1E). [34, 35] In this work, the intensity profiles \(P(\mathbf{r},\lambda)\) at \(N_{\lambda}\) wavelengths are weighed by the light intensity \(I(\lambda)\) and the matching functions to obtain an XYZ color image which is then converted to the RGB color image by a linear transformation. [36]
The image before transformation is calculated by:
\[X_{i}(\mathbf{r}) =\int P(\mathbf{r},\lambda)\overline{T}r^{2}(\mathbf{r},\lambda)I( \lambda)x_{i}(\lambda)d\lambda\] \[\approx\sum_{j}^{N_{\lambda}}P(\mathbf{r},\lambda_{j})\overline{ Tr}^{2}(\mathbf{r},\lambda_{j})w_{ij}(\lambda_{j}) \tag{1}\]
where \(w_{ij}=\int_{\lambda_{j}}^{\lambda_{j+1}}I(\lambda)x_{i}(\lambda)d\lambda\) is the weight to the \(i\)th color channel (X, Y, Z) independent of the director field. \(P(\mathbf{r},\lambda_{j})\) represents the single-wavelength intensity profiles obtained from the Ondris-Crawford method.
In summary, the color image is obtained in four steps: 1) Generate the director field through analytical or numerical solutions and interpolate it onto a regular grid with the desired resolution. 2) Compute \(N_{\lambda}\) intensity profiles for discretized wavelengths using the Ondris-Crawford method and multiply by the transmission ratio according to the local curvature. 3) Weigh by the LED spectrum and color matching functions to get the XYZ channel images. 4) Transform the image from XYZ color space to RGB color space and represent the result as a color image.
## 3 Methodology and implementation
Each image (\(110\times 110\) pixels) takes less than 20 minutes to compute on a single Intel Core i5 CPU processor. The sensitivity and speed of this method can serve for generating a data set that facilitates inverse design or machine learning. [41, 42]
## 4 Case studies
In this section, we present the diverse applications of the LCPOM Python package, demonstrating its exceptional capabilities and providing benchmarks for computational tools seeking to align with experimental observations. Not only does our software serve as a convincing proof of concept, but it also offers invaluable insights into crucial considerations such as system size (sec. 4.1), treatment of topological defects (sec. 4.2), probing the orientation of nematic morphologies (see sec. 4.3), and moreover, the faithful reproduction of POM images in cholesteric systems (see sec. 4.4).
### Effect of system size
To explore the effect of system size, we have generated POM images of radial droplets with various diameters (\(d\)). The radial droplet provides a canonical example of the interplay between bulk elasticity and surface orientation. In the presence of a surfactant, the LC molecules orient perpendicular to the surface of the droplet, resulting in a hedgehog defect at the center of the droplet. This particular system has been reported to be stable across a wide range of temperature, different materials, and across different length scales. The analytical description of the radial director field is \(\mathbf{n}=\mathbf{x}/\left\|\mathbf{x}\right\|\). The defect is a divergence of the director field, as pictured in Fig. 1A.
The LCPOM results in Fig. 2A show the progression of colors that emerge as the diameter of the LC droplets is increased from \(d=2\mu\)m to \(20\mu\)m. As the diameter increases from \(d=3\mu\)m to \(6\mu\)m, the first-order interference colors change from yellow to blue (Fig. S2). At \(d=6.5\mu\)m, a second color ring emerges from the center of the droplet due to the spatial variation in the optical path differences. The simulated color textures are highly sensitive to system size, and a discrepancy in diameter as small as \(0.5\mu m\) leads to notable differences in the optical appearance. This level of sensitivity is not seen in BW intensity profiles, indicating that more information about the LC order is captured by our proposed method. (Fig. S3) Moreover, this result highlights the use of a computational tool related to color to determine the size of an experimental system, or the possibility to infer the order parameter from microscopy images. More simulated POM images of this case are provided in Fig. S2.
To reproduce the color texture of experiments in simulations, the birefringence must be tuned down by 5%. This difference is consistent with the fact that frustrated alignment in curved geometries can lead to an attenuation of the optical birefringence compared to the uniform bulk samples in which the refraction indices are measured. Note that the droplet has a diffuse boundary on the bright-field images, and the determination of size has an average error of 14% calculated from the FWHM (full width at half
maximum) of the boundary (see Fig. S5). This diffuse appearance is caused by the diffraction at the interface between materials with mismatching refractive indices. [40]
Overall, the color dependence on size in simulated images is in good agreement with experimental results (Fig. 2B, D and S4). The simulated and experimental images were decomposed into their respective RGB channel contributions (Fig. 2C and E). Given the symmetry of the radial structure, a quantitative comparison between simulated and experimental images can be achieved by performing a polar transformation and extracting the radial intensity profiles. Each of the RGB channels contains contributions from the entire light spectrum and does not have a simple analytical form. Importantly, the peak positions of the RGB color channels agree with the ones observed in experiments for \(r/r_{0}<0.8\), which leads to a precise prediction of the color ring locations, even for large droplets, which exhibit higher-order interference. On the other hand, peaks close to the boundary (\(r/r_{0}>0.8\)) are not observed in experiments, as the intensity decays faster towards the edge of the droplet than predicted. This is attributed to the diffraction and fluctuation at the interface which, as discussed above, are not considered in our algorithm.
### Effects of non-point defects
The nature of the defect core is an active field of study, with implications for the assembly of colloids, active liquid crystals, and photonics. By definition, the defect core is the divergence of the vector order field, and can often be treated as a point charge. However, it has been shown theoretically and in experiments that the topological defects usually do not take the shape of a point, but appear as a region with diminished order parameters. [43, 44, 45, 18, 46, 16]. Radial droplets of 5CB typically exhibit loop disclinations whose diameter is sensitive to the anchoring strength, the elastic constants (temperature), and the size of the droplet. [47, 48, 46] The exact topology of the defect and the source of fluctuations have been long-standing questions that have attracted considerable theoretical interest. [47, 48, 49, 50, 46] In experiments, the defect is usually small and sometimes appears as a blurry dot due to limitations in the optical resolution. In contrast to the hedgehog defect from sec. 4.1, a loop disclination is surrounded by a continuous variation of the director field and has lower rotational symmetry. This implies that the optical texture should reflect when a rigid body rotation of the disclination loop occurs, as previously proposed by de la Cote _et al._[51]
To examine this hypothesis, we performed simulations of nematic droplets under homeotropic anchoring conditions. Calculations of the order field following a Ginzburg-Landau relaxation yield a scalar
Figure 2: Simulation of the color POM images of radial droplets from the analytical order field. (A) Simulated images of a radial droplet from different sizes. (B) Comparison of the image of \(d=5.5\pm 0.8\mu\)m between simulation and experiment. (C) Intensity profiles of the RGB color channels and the radial intensity profile in B. (D) Comparison of the image of \(d=20.3\pm 1.8\pm\mu\)m between simulation and experiment. (E) Intensity profiles of the RGB color channels and the radial intensity profile in D.
and vector order field. The simulation details can be found in the Supplementary Information. The orientations in 3D can be described by two angles, \(\alpha\) and \(\beta\), because the order fields obey the \(D_{coh}\) symmetry. Here, \(\alpha\) is the out-of-plane tilt angle between the symmetry axis and the \(xy\)-plane and \(\beta\) represents the in-plane rotation angle between the 0-projection of the symmetry axis and the \(y\) axis (Fig. 3A).
As the droplet rotates, the reorientation of the loop creates subtle but clear changes in the POM image. As expected, the image bears the highest symmetry at \(\alpha=90^{\circ}\) and demonstrates more fuzzy central patterns compared to the POM image of the analytical form (Fig. S6). A distortion in the optical texture is observed when \(\alpha\) deviates from \(90^{\circ}\) (Fig. 3C-F, Supplementary Movie 1). Importantly, the simulations produce color patterns that are very similar to those observed in experiments for particular orientations of the droplet (Fig. 3B-G, Supplementary Movie 2). This agreement suggests that under appropriate conditions the experiments can be directly compared to simulations to infer the orientation of the defects, thereby offering a new way of studying the dynamics and order fluctuations in confined LC environments.
### Elucidation of ambiguous micrographs through perspective sweep
Another ubiquitous configuration in LC microemulsions is the bipolar droplet. It is characterized by two antipodal surface defects that emerge to satisfy a parallel molecular orientation tangential to the droplet's surface. The transition from a bipolar to a radial configuration can be triggered by adding a surfactant, which is the principle of operation for many LC-based sensing devices. [11] Similar to the order field in sec. 4.2, we performed numerical simulations to generate the vector and scalar order fields of a bipolar droplet that were then used as input for LCPOM. An advantage of this computational tool is the control over the viewpoint of an input morphology; different orientations that yield uncommon micrographs can be probed by this approach. In this case, simulated images were compared to experimental POM images of droplets created by dispersing 5CB in a PVA/water solution; additional experimental details are provided in the SI.
The bipolar droplets have two defects at opposite poles that obey the \(D_{coh}\) symmetry; all the orientations in 3D can be described by the \(\alpha\) and \(\beta\) angles from sec. 4.2. Simulated POM images for different orientations of the bipolar droplet are presented in Fig. 4A. In agreement with literature reports, simulated images at \(\alpha=0^{\circ}\) consist of concentric rings where the brightness and the outline change as the sample is rotated in the \(xy\)-plane. Optical textures similar to these are commonly reported in experiments. [11, 49, 52] Subtle deviations in the optical texture are often related to a small tilt of the bipolar axis, _i.e._ the defects are tilted out of plane. For instance, it was found that the best agreement between simulations and
Figure 3: Effect of loop orientation in a radial droplet. (A) Simulated radial droplet with a disclination loop (regions with \(S<0.3\)) at the center. The diameter of the loop is approximately \(1.9\mu\)m. (B-C) Schematic representation of the orientation of the defect loop with \(\alpha=45^{\circ}\) and \(\beta=0^{\circ}\) and the corresponding simulated POM image with \(d=17.5\mu\)m. (E-F) Schematic representation of the orientation of the defect loop with \(\alpha=45^{\circ}\) and \(\beta=145^{\circ}\) and the corresponding simulated POM image with \(d=17.5\mu\)m. (D) and (G): Experimental images for \(d=17.5\pm 1.6\mu\)m.
experiments correspond to Fig. 4B, where \(\alpha=30^{\circ}\) and \(\beta=55^{\circ}\). Note that the spacing between the pink and green rings and the distortion features near the defects are all reproduced accurately. Interestingly, screening other orientations yields unfamiliar optical textures without the concentric rings. As such images have rarely been associated with bipolar configurations, we performed additional experiments to confirm the accuracy of LCPOM. Although textures with concentric rings are observed more often, morphologies corresponding to orientations with out-of-plane rotation \(\alpha>45^{\circ}\) (Fig. 4B-C) are also confirmed experimentally. It is possible that these textures are reported less frequently because they are difficult to classify. An alternative explanation is that bipolar droplets can adopt preferred orientations due to sedimentation or flow. [21] Nevertheless, the agreement between experiments and simulations suggests that LCPOM reliably generates POM images of bipolar droplets at arbitrary orientations, thereby providing a useful tool with which to interpret POM images and classify droplets that exhibit ambiguous optical morphologies.
### LCPOM with a twist: cholesteric systems
Compared to nematics, cholesterics exhibit additional helical structures along the orientation of the director. [53, 17] Depending on the droplet diameter, the helical pitch (\(p_{0}\)), and the surface interactions, cholesteric droplets can adopt complex internal morphologies and, as such, provide unique opportunities for engineering electro-optical and sensing devices. [54, 55, 56, 32] In general, the POM images of cholesteric droplets are highly sensitive to the droplet orientation, sometimes making it difficult to infer the exact underlying structure. Here, we computed POM images of a cholesteric droplet with a number of turns of \(N=3.8\), and compared them to experimental images with \(N=3.4\) reported by Krakhalev _et al._[27]. The order field was obtained from a theoretically-informed Monte Carlo simulation with homeotropic boundary conditions, following the procedure of Palacio-Betancur _et al._[56] The POM calculations are based on the material properties of the E7 mixture. [57, 27] The images produced in this manner show excellent agreement with experiments. [27, 28] Note that this match is only obtained when the birefringence is scaled down by 40% compared to that of bulk E7, implying that the chiral dopant or the local twist may have led to a decrease in the birefringence. Another possible reason is that the droplets dispersed in polymer films may be oblate, and hence the optical path difference could be overestimated when a spherical geometry is assumed. [27] To examine how the optical texture changes with droplet orientation, POM images were calculated at varying angles and compared with experimental results from Krakhalev _et al._ (Supplementary Movie 3). [27] We found that a small change in the
Figure 4: POM images of bipolar droplets with different configurations from simulations and experiments. (A) Effects of varying the orientation of a bipolar droplet with \(d=20\mu\)m. (B-D), from left to right: bright field image, order field from numerical relaxation, cross-polarized image, simulated POM image. (B) Droplet with a diameter of \(22.9\mu\)m. The poles are oriented close to the \(xy\)-plane with a small out-of-plane tilt. The simulated image is produced with \(\alpha=30^{\circ}\) and \(\beta=55^{\circ}\). (C) Droplet with a diameter of \(13.3\mu\)m with the bipolar axis coinciding with the \(y\)-axis. The simulated image is produced with \(\alpha=50^{\circ}\) and \(\beta=0^{\circ}\). (D) Droplet with a diameter of \(12.4\mu\)m. The defects are located near the center of the \(xy\)-plane aligned with the \(z\)-axis. The simulated image is produced with \(\alpha=80^{\circ}\) and \(\beta=-45^{\circ}\). POM images in C and D are computed by using the light spectrum in Fig. S1(B).
orientation can lead to distinctly different optical textures of the droplet, yet a good match with experimental results can be obtained when the orientation is set close to those reported in the original paper (Fig. 5D-F). In general, controlling the orientation of cholesteric droplets is challenging in experiments, making it difficult to investigate the optical texture systematically. By generating high-fidelity color images of complex structures, LCPOM can help develop a better understanding of structure in confined cholesterics, which is needed for design of advanced optical devices.
## 5 Discussion and Conclusions
A straightforward method for simulation of color in POM images has been presented for confined liquid crystals, including droplets. By incorporating the emission spectrum of the light source, the dependence of refractive indices on wavelength, the transmission ratio at the droplet interface as well as the color matching functions, our simulation method is shown to be capable of generating colored POM images that are in quantitative agreement with experiments for radial, bipolar and cholesteric droplets. The method provides a particularly useful tool to validate theoretical models and to interpret experimental measurements. By comparing computed POM images of the order field profile obtained from theory or simulations to experimental POM images, one can gain insights into the governing physics and the balance of various phenomenological parameters. We envision that the proposed computational tool will help generate a realistic dataset to aid machine learning efforts aimed at understanding the structure and dynamics of liquid crystals, and will help engineer a new generation of LC-based sensing devices where color is used to extract detailed information about molecular-level sensing events.
## Supplementary Information
Experimental details, and further validation images are provided in the Supplementary Information.
### Code and Data Availability
The code for LCPOM along with its documentation will be released after beta testing. Sign up in this form to be notified once it is available.
Figure 5: Comparison between simulated and experimental POM images of cholesteric droplets. The order fields of cholesteric droplets are obtained from Ginzburg-Landau relaxations with homeotropic boundary conditions. The number of turns is \(N=3.8\) and the diameter of the droplet is set to the experimental value of \(d=17\mu m\). From left to right: isoclinical lines in red, order field, simulated POM images, and experimental images. The experimental images are adapted from Krakhalev _et al._ Scientific Reports (2017). [27] Order field in (B) is rotated by Euler angles \((\alpha,\beta)=(70^{\circ},30^{\circ})\) relative to (A).
The files containing the order fields and to reproduce the optical textures in this work are available upon request from the corresponding author.
## Acknowledgments
C.C. thanks Dr. Neil D. Dolinski for help on spectrum measurement. This work was primarily supported by the University of Chicago Materials Research Science and Engineering Center, which is funded by National Science Foundation under award number DMR-2011854. V.P.B. thanks the Fulbright commission in Colombia and COLCIENCIAS for support through the PhD student scholarship. M.S. is supported by National Science Foundation, Division of Materials Research, Condensed Matter Physics, under the NSF CAREER award 2146428. The authors also acknowledge the Research Computing Center of the University of Chicago for computational resources.
|
2301.13205 | Representations and identities of Baxter monoids with involution | Let $(\mathsf{baxt}_n,~^\sharp)$ be the Baxter monoid of finite rank $n$ with
Sch\"{u}tzenberger's involution $^{\sharp}$. In this paper, it is shown that
$(\mathsf{baxt}_n,~^\sharp)$ admits a faithful representation by an involution
monoid of upper triangular matrices over any semiring from a large class
including the tropical semiring under the skew transposition. Then a
transparent combinatorial characterization of the word identities satisfied by
$(\mathsf{baxt}_n,~^\sharp)$ is given. Further, it is proved that
$(\mathsf{baxt}_n,~^\sharp)$ is finitely based if and only if $n\neq 3$, and
shown that the identity checking problem for $(\mathsf{baxt}_n,~^\sharp)$ can
be done in polynomial time. | Bin Bin Han, Wen Ting Zhang, Yan Feng Luo, Jin Xing Zhao | 2023-01-29T14:14:16Z | http://arxiv.org/abs/2301.13205v1 | # Representations and identities of Baxter monoids with involution
###### Abstract.
Let \((\mathsf{baxt}_{n},\ ^{\sharp})\) be the Baxter monoid of finite rank \(n\) with Schutzenberger's involution \({}^{\sharp}\). In this paper, it is shown that \((\mathsf{baxt}_{n},\ ^{\sharp})\) admits a faithful representation by an involution monoid of upper triangular matrices over any semiring from a large class including the tropical semiring under the skew transposition. Then a transparent combinatorial characterization of the word identities satisfied by \((\mathsf{baxt}_{n},\ ^{\sharp})\) is given. Further, it is proved that \((\mathsf{baxt}_{n},\ ^{\sharp})\) is finitely based if and only if \(n\neq 3\), and shown that the identity checking problem for \((\mathsf{baxt}_{n},\ ^{\sharp})\) can be done in polynomial time.
Key words and phrases:Baxter monoid, Schutzenberger's involution, representation, identity, finite basis problem 2010 Mathematics Subject Classification: 20M07, 20M30, 05E99, 12K10, 16Y60 The authors are partially supported by the National Natural Science Foundation of China (Nos. 12271224, 12171213, 12161062) \({}^{*}\) Corresponding author
## 1. Introduction
Identities and varieties of semigroups have long been studied, and several important questions arise in this area, such as the _finite basis problem_, that is the problem of classifying semigroups according to the finite basis property. The first example of non-finitely based finite semigroup was discovered by Perkins [49] in the 1960s; since then, the finite basis problem for semigroups has attracted much attention. Refer to Volkov [51] for a survey of work performed in this direction and for more information on the finite basis problem for semigroups in general. Other questions regarding the variety generated by a semigroup are those of whether it contains only finitely generated subvarieties (see, for example, [51]), or countably infinite subvarieties [28]. Another widely studied question is the _identity checking problem_Check-\(\textsc{Id}(S)\) for a semigroup \(S\), that is the decision problem whose instance is an arbitrary identity \(\mathbf{u}\approx\mathbf{v}\), and the answer to such an instance is 'YES' if \(S\) satisfies \(\mathbf{u}\approx\mathbf{v}\), and 'NO' if it does not. For a finite semigroup \(S\), the identity checking problem Check-\(\textsc{Id}(S)\) is always decidable but this is not necessarily true for an infinite semigroup [46]. Studying the computational complexity of identity checking in semigroups was proposed by Sapir [31, Problem 2.4], and up to now, there are many results in this study [6, 10, 16, 29, 33].
Recall that a unary operation \({}^{*}\) on a semigroup \(S\) is an _involution_ if \(S\) satisfies the identities
\[(x^{*})^{*}\approx x\quad\text{and}\quad(xy)^{*}\approx y^{*}x^{*}. \tag{1.1}\]
An _involution semigroup_ is a pair \((S,\ ^{*})\) where \(S\) is a semigroup with involution \({}^{*}\), and \(S\) is called the _semigroup reduct_ of \((S,\ ^{*})\). Common examples of involution semigroups include groups \((G,\ ^{-1})\) with inversion \({}^{-1}\), multiplicative matrix semigroups \((M_{n},\ ^{T})\) and \((M_{n},\ ^{D})\) over any field with transposition \({}^{T}\) and skew transposition \({}^{D}\) respectively. Over the years, the identities and varieties of involution semigroups have received less attention than that for semigroups. Since the turn of the millennium, interest in involution semigroups has significantly increased. For example, many counterintuitive results were established and examples have been found
Introduction
The _basic model_ of a discrete model is a model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a discrete model of a model of discrete model of a discrete
Notation and background information of the paper are given in Section 2. In Section 3, we exhibit a faithful representation of \((\mathsf{baxt}_{n},\ ^{\sharp})\) as an involution monoid of upper triangular matrices over any semiring from a large class including the tropical semiring under the skew transposition. In Section 4, a characterization of the word identities satisfied by \((\mathsf{baxt}_{n},\ ^{\sharp})\) is given. In Section 5, we prove that \((\mathsf{baxt}_{n},\ ^{\sharp})\) is finitely based if and only if \(n\neq 3\) and that each variety generated by \((\mathsf{baxt}_{n},\ ^{\sharp})\) with \(n\geq 2\) contains continuum many subvarieties. And it is shown that the identity checking problem for \((\mathsf{baxt}_{n},\ ^{\sharp})\) can be done in polynomial time in Section 6.
## 2. Preliminaries
Most of the notation and definition of this article are given in this section. Refer to the monograph of Burris and Sankappanavar [7] for any undefined notation and terminology of universal algebra in general.
### Words and content
Let \(\mathcal{X}\) be a countably infinite alphabet and \(\mathcal{X}^{*}=\{x^{*}\,|\,x\in\mathcal{X}\}\) be a disjoint copy of \(\mathcal{X}\). Elements of \(\mathcal{X}\cup\mathcal{X}^{*}\) are called _variables_, elements of the free involution monoid \((\mathcal{X}\cup\mathcal{X}^{*})^{\times}=(\mathcal{X}\cup\mathcal{X}^{*})^{+} \cup\{\emptyset\}\) are called _words_, and elements of \(\mathcal{X}^{+}\cup\{\emptyset\}\) are called _plain words_. A word \(\mathbf{u}\) is a _factor_ of a word \(\mathbf{w}\) if \(\mathbf{w}=\mathbf{aub}\) for some \(\mathbf{a},\mathbf{b}\in(\mathcal{X}\cup\mathcal{X}^{*})^{\times}\).
Let \(\mathbf{u}\in(\mathcal{X}\cup\mathcal{X}^{*})^{+}\) be a word and \(x\in\mathcal{X}\cup\mathcal{X}^{*}\) be a variable. The _content_\(\mathsf{con}(\mathbf{u})\) of a word \(\mathbf{u}\) is the set of variables that occur in \(\mathbf{u}\), and \(\mathsf{occ}(x,\mathbf{u})\) is the number of occurrences of the letter \(x\) in \(\mathbf{u}\). Let \(\overline{\mathbf{u}}\) be the plain word obtained from \(\mathbf{u}\) by removing all occurrences of the symbol \({}^{*}\). For any \(x_{1},x_{2},\ldots,x_{n}\in\mathcal{X}\cup\mathcal{X}^{*}\) such that \(\overline{x_{1}},\overline{x_{2}},\ldots,\overline{x_{n}}\in\mathcal{X}\) are distinct variables, let \(\mathbf{u}[x_{1},x_{2},\ldots,x_{n}]\) denote the word obtained from \(\mathbf{u}\) by retaining only the variables \(x_{1},x_{1}^{*},x_{2},x_{2}^{*},\ldots,x_{n},x_{n}^{*}\). Denote by \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})\) [resp. \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})\)] the number of occurrences of \(x\) appearing before [resp. after] the first [resp. last] occurrence of \(y\) in \(\mathbf{u}\). The _initial part_ of \(\mathbf{u}\), denoted by \(\mathsf{ip}(\mathbf{u})\), is the word obtained from \(\mathbf{u}\) by retaining the occurrence of each variable \(x\in\mathsf{con}(\mathbf{u})\) satisfying \(x,x^{*}\notin\mathsf{con}(\mathbf{u}_{1})\) where \(\mathbf{u}=\mathbf{u}_{1}x\mathbf{u}_{2}\); the _final part_ of \(\mathbf{u}\), denoted by \(\mathsf{fp}(\mathbf{u})\), is the word obtained from \(\mathbf{u}\) by retaining the occurrence of each variable \(x\in\mathsf{con}(\mathbf{u})\) satisfying \(x,x^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\) where \(\mathbf{u}=\mathbf{u}_{1}x\mathbf{u}_{2}\).
**Example 2.1**.: Let \(\mathbf{u}=x^{*}zxy^{*}xyz^{2}x\) and \(x,y,z\in\mathcal{X}\). Then
* \(\mathsf{con}(\mathbf{u})=\{x,y,z,x^{*},y^{*}\}\), \(\overline{\mathbf{u}}=xzxyxyz^{2}x\),
* \(\mathsf{occ}(x,\mathbf{u})=3\), \(\mathsf{occ}(x^{*},\mathbf{u})=\mathsf{occ}(y,\mathbf{u})=\mathsf{occ}(y^{*}, \mathbf{u})=1\),
* \(\mathbf{u}[x]=x^{*}x^{3}\), \(\mathbf{u}[x,y]=x^{*}xy^{*}xyx\),
* \(\widehat{\mathsf{occ}}_{y^{*}}(x,\mathbf{u})=1\), \(\widehat{\mathsf{occ}}_{y^{*}}(x,\mathbf{u})=2\),
* \(\mathsf{ip}(\mathbf{u})=x^{*}zy^{*}\), \(\mathsf{fp}(\mathbf{u})=yzx\).
### Terms and identities
The set \(\mathsf{T}(\mathcal{X})\) of _terms_ over \(\mathcal{X}\) is the smallest set containing \(\mathcal{X}\) that is closed under concatenation and \({}^{*}\). The proper inclusion \((\mathcal{X}\cup\mathcal{X}^{*})^{\times}\subset\mathsf{T}(\mathcal{X})\) holds and the identities (1.1) can be used to convert any nonempty term \(\mathbf{t}\in\mathsf{T}(\mathcal{X})\) into some unique word \(\lfloor\mathbf{t}\rfloor\in(\mathcal{X}\cup\mathcal{X}^{*})^{+}\). For instance, \(\lfloor x(x^{2}(yx^{*})^{*})^{*}zy^{*}\rfloor=xy(x^{*})^{3}zy^{*}\).
**Remark 2.2**.: For any subterm \(\mathbf{s}\) of a term \(\mathbf{t}\), either \(\lfloor\mathbf{s}\rfloor\) or \(\lfloor\mathbf{s}^{*}\rfloor\) is a factor of \(\lfloor\mathbf{t}\rfloor\).
An _identity_ is an expression \(\mathbf{s}\approx\mathbf{t}\) formed by nonempty terms \(\mathbf{s},\mathbf{t}\in\mathsf{T}(\mathcal{X})\), a _word identity_ is an identity \(\mathbf{u}\approx\mathbf{v}\) formed by words \(\mathbf{u},\mathbf{v}\in(\mathcal{X}\cup\mathcal{X}^{*})^{+}\). We write \(\mathbf{u}=\mathbf{v}\) if \(\mathbf{u}\) and \(\mathbf{v}\) are identical. An identity \(\mathbf{u}\approx\mathbf{v}\) is _non-trivial_ if \(\mathbf{u}\neq\mathbf{v}\). An identity \(\mathbf{s}\approx\mathbf{t}\) is directly deducible from an identity \(\mathbf{p}\approx\mathbf{q}\) if there exists some substitution \(\varphi:\mathcal{X}\to\mathsf{T}(\mathcal{X})\) such that \(\varphi(\mathbf{p})\) is a subterm of \(\mathbf{s}\), and replacing this particular subterm \(\varphi(\mathbf{p})\) of \(\mathbf{s}\) with \(\varphi(\mathbf{q})\) results in the term \(\mathbf{t}\). An identity \(\mathbf{s}\approx\mathbf{t}\) is deducible from some set \(\Sigma\) of identities if there exists a sequence \(\mathbf{s}=\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{r}=\mathbf{t}\) of
terms such that each identity \(\mathbf{s}_{i}\approx\mathbf{s}_{i+1}\) is directly deducible from some identity in \(\Sigma\).
**Remark 2.3** ([8, Sublemma 2.2]).: An identity \(\mathbf{s}\approx\mathbf{t}\) is deducible from (1.1) if and only if \(\lfloor\mathbf{s}\rfloor=\lfloor\mathbf{t}\rfloor\).
An involution semigroup \((S,\,^{*}\,)\)_satisfies_ an identity \(\mathbf{s}\approx\mathbf{t}\), if for any substitution \(\varphi:\mathcal{X}\to S\), the elements \(\varphi(\mathbf{s})\) and \(\varphi(\mathbf{t})\) of \(S\) coincide; in this case, \(\mathbf{s}\approx\mathbf{t}\) is also said to be an _identity of_\((S,\,^{*}\,)\).
**Remark 2.4**.: Note that assigning the unit element to a variable \(x\) in a word identity is effectively the same as removing all occurrences of \(x\) and \(x^{*}\). Therefore any involution monoid that satisfies a word identity \(\mathbf{s}\approx\mathbf{t}\) also satisfies the word identity \(\mathbf{s}[x_{1},x_{2},\ldots,x_{n}]\approx\mathbf{t}[x_{1},x_{2},\ldots,x_{n}]\) for any distinct variables \(\overline{x_{1}},\overline{x_{2}},\ldots,\overline{x_{n}}\in\mathcal{X}\).
For any involution semigroup \((S,\,^{*}\,)\), a set \(\Sigma\) of identities of \((S,\,^{*}\,)\) is an _identity basis_ for \((S,\,^{*}\,)\) if every identity satisfied by \((S,\,^{*}\,)\) is deducible from \(\Sigma\). An involution semigroup is _finitely based_ if it has some finite identity basis; otherwise, it is _non-finitely based_.
The variety generated by semigroup \(S\) [resp. involution semigroup \((S,\,^{*}\,)\)] is denoted by \(\mathsf{Var}S\) [resp. \(\mathsf{Var}(S,\,^{*}\,)\)]. For any set \(\Sigma\) of identities, denote by \(\mathsf{Var}\Sigma\) the variety determined by \(\Sigma\).
### The Baxter monoid and its involution
Let \(\mathcal{A}=\{1<2<3<\cdots\}\) denote the set of positive integers, viewed as an infinite ordered alphabet. The combinatorial objects and insertion algorithm related to the Baxter monoid are given in the following.
A _right strict binary search tree_ is a labelled rooted binary tree where the label of each node is greater than or equal to the label of every node in its left subtree, and strictly less than every node in its right subtree. The associated insertion algorithm is as follows:
**Algorithm 1**.: Input a right strict binary search tree \(T\) and a symbol \(a\in\mathcal{A}\). If \(T\) is empty, create a node and label it \(a\). If \(T\) is non-empty, examine the label \(x\) of the root node: if \(a>x\), recursively insert \(a\) into the right subtree of the root node; otherwise recursively insert \(a\) into the left subtree of the root node. Output the resulting tree.
Let \(w_{1},\cdots,w_{k}\in\mathcal{A}\) and \(\mathbf{w}=w_{1}\cdots w_{k}\in\mathcal{A}^{*}\). Then the combinatorial object \(\mathrm{P}_{\mathfrak{s}\mathfrak{y}\mathsf{h}_{\infty}}(\mathbf{w})\) of \(\mathbf{w}\) is obtained as follows: reading \(\mathbf{w}\) from right-to-left, one starts with an empty tree and inserts each symbol in \(\mathbf{w}\) into a right strict binary search tree according to Algorithm 1. For example, \(\mathrm{P}_{\mathfrak{s}\mathfrak{y}\mathsf{h}_{\infty}}(36131512665)\) is given as follows:
A _left strict binary search tree is_ a labelled rooted binary tree where the label of each node is strictly greater than the label of every node in its left subtree, and less than or equal to every node in its right subtree. The associated insertion algorithm is as follows:
**Algorithm 2**.: Input a left strict binary search tree \(T\) and a symbol \(a\in\mathcal{A}\). If \(T\) is empty, create a node and label it \(a\). If \(T\) is non-empty, examine the label \(x\) of the root node: if \(a<x\), recursively insert \(a\) into the left subtree of the root node;
otherwise recursively insert \(a\) into the right subtree of the root note. Output the resulting tree.
Let \(w_{1},\cdots,w_{k}\in\mathcal{A}\) and \(\mathbf{w}=w_{1}\cdots w_{k}\in\mathcal{A}^{\star}\). Then the combinatorial object \(\mathrm{P}_{\mathtt{syk}_{\infty}^{\sharp}}(\mathbf{w})\) of \(\mathbf{w}\) is obtained as follows: reading \(\mathbf{w}\) from left-to-right, one starts with an empty tree and inserts each symbol in \(\mathbf{w}\) into a left strict binary search tree according to Algorithm 2. For example, \(\mathrm{P}_{\mathtt{syk}_{\infty}^{\sharp}}(36131512665)\) is given as follows:
Let \(w_{1},\cdots,w_{k}\in\mathcal{A}\) and \(\mathbf{w}=w_{1}\cdots w_{k}\in\mathcal{A}^{\star}\). Then the combinatorial object \(\mathrm{P}_{\mathtt{bact}_{\infty}}(\mathbf{w})\) of \(\mathbf{w}\) is obtained by Algorithms 1 and 2, that is, \(\mathrm{P}_{\mathtt{bact}_{\infty}}(\mathbf{w})=(\mathrm{P}_{\mathtt{syk}_{ \infty}^{\sharp}}(\mathbf{w}),\mathrm{P}_{\mathtt{syk}_{\infty}}(\mathbf{w}))\).
Define the relation \(\equiv_{\mathtt{bact}_{\infty}}\) by
\[\mathbf{u}\equiv_{\mathtt{bact}_{\infty}}\mathbf{v}\quad\text{if and only if}\quad\mathrm{P}_{\mathtt{bact}_{\infty}}(\mathbf{u})=\mathrm{P}_{\mathtt{bact}_{ \infty}}(\mathbf{v})\]
for any \(\mathbf{u},\mathbf{v}\in\mathcal{A}^{\star}\). The relation \(\equiv_{\mathtt{bact}_{\infty}}\) is a congruence on \(\mathcal{A}^{\star}\). The Baxter monoid \(\mathtt{bact}_{\infty}\) is the factor monoid \(\mathcal{A}^{\star}/_{\equiv_{\mathtt{bact}_{\infty}}}\). The rank-\(n\) analogue \(\mathtt{bact}_{n}\) is the factor monoid \(\mathcal{A}^{\star}_{n}/_{\equiv_{\mathtt{bact}_{\infty}}}\), where the relation \(\equiv_{\mathtt{bact}_{\infty}}\) is naturally restricted to \(\mathcal{A}^{\star}_{n}\times\mathcal{A}^{\star}_{n}\) and \(\mathcal{A}_{n}=\{1<2<\cdots<n\}\) is the set of the first \(n\) natural numbers viewed as a finite ordered alphabet. It follows from the definition of \(\equiv_{\mathtt{bact}_{\infty}}\) that each element \([\mathbf{u}]_{\equiv_{\mathtt{bact}_{\infty}}}\) of the factor monoid \(\mathtt{bact}_{\infty}\) can be identified with the combinatorial object \(\mathrm{P}_{\mathtt{bact}_{\infty}}(\mathbf{u})\). Clearly \(\mathtt{bact}_{1}\) is a free monogenic monoid \(\langle 1\rangle=\{\varepsilon,1,1^{2},1^{3},\ldots\}\) and so it is commutative. Note that
\[\mathtt{bact}_{1}\subset\mathtt{bact}_{2}\subset\cdots\subset\mathtt{bact} _{i}\subset\mathtt{bact}_{i+1}\subset\cdots\subset\mathtt{bact}_{\infty}\,. \tag{2.1}\]
For any word \(\mathbf{u}\in\mathcal{A}^{\star}\), the _length_\(|\mathbf{u}|\) of \(\mathbf{u}\) is the number of symbols occurring in \(\mathbf{u}\), and \(|\mathbf{u}|_{a}\) is the number of times the symbol \(a\) appearing in \(\mathbf{u}\); the _evaluation_ of \(\mathbf{u}\), denoted by \(\mathtt{ev}(\mathbf{u})\), is the infinite tuple of non-negative integers, indexed by \(\mathcal{A}\), whose \(a\)-th element is \(|\mathbf{u}|_{a}\), thus this tuple describes the number of each symbol in \(\mathcal{A}\) that appears in \(\mathbf{u}\). It is immediate from the definition of the Baxter monoid that if \(\mathbf{u}\equiv_{\mathtt{bact}_{\infty}}\mathbf{v}\), then \(\mathtt{ev}(\mathbf{u})=\mathtt{ev}(\mathbf{v})\), and hence it makes sense to define the evaluation of each element of Baxter monoid to be the evaluation of any word representing it. The _support_ of a word \(\mathbf{u}\in\mathcal{A}^{\star}\), denoted by \(\mathtt{sup}(\mathbf{u})\), is the set of letters that occur in \(\mathbf{u}\). Note that \(\mathtt{ev}(\mathbf{u})=\mathtt{ev}(\mathbf{v})\) implies that \(\mathtt{sup}(\mathbf{u})=\mathtt{sup}(\mathbf{v})\).
Let \(\mathbf{u}\in\mathcal{A}^{\star}\) and \(a,b\in\mathtt{sup}(\mathbf{u})\) with \(a<b\). We say that \(\mathbf{u}\) has a \(b\)-\(a\)_right precedence of index \(r\)_ for some \(r\geq 1\) if, when reading \(\mathbf{u}\) from right to left, \(b\) occurs \(r\) times before the first occurrence of \(a\) and, for any \(c\in\mathtt{sup}(\mathbf{u})\) such that \(a<c<b\), \(c\) does not occur before the first occurrence of \(a\). We say that \(\mathbf{u}\) has a \(a\)-\(b\)_left precedence of index \(\ell\)_ if, when reading \(\mathbf{u}\) from left to right, \(a\) occurs \(\ell\) times before the first occurrence of \(b\) and, for any \(c\in\mathtt{sup}(\mathbf{u})\) such that \(a<c<b\), \(c\) does not occur before the first occurrence of \(b\). Note that, for any given \(a\in\mathtt{sup}(\mathbf{u})\), there is at most one \(b\in\mathtt{sup}(\mathbf{u})\) such that \(\mathbf{u}\) has a \(b\)-\(a\) right precedence of index \(r\); on the other hand, \(\mathbf{u}\) can have several right precedences of the form \(b\)-\(x\) for a fixed \(b\); and that, for any given \(b\in\mathtt{sup}(\mathbf{u})\), there is at most one \(a\in\mathtt{sup}(\mathbf{u})\) such that \(\mathbf{u}\) has a \(a\)-\(b\) left precedence of index \(\ell\); on the other hand, \(\mathbf{u}\) can have several left precedences of the form \(a\)-\(x\) for a fixed \(a\). Denote by
\[\mathsf{rpi}(\mathbf{u})=\{(b\text{-}a,r):\mathbf{u}\text{ has a $b$-$a$ right precedence of index $r$}\}\]
and
\[\mathsf{lpi}(\mathbf{u})=\{(a\text{-}b,\ell):\mathbf{u}\text{ has a $a$-$b$ left precedence of index $\ell$}\}.\]
For example, \(\mathsf{rpi}(36131512665)=\{(2\text{-}1,1),(5\text{-}2,1),(5\text{-}3,2)\}\) and \(\mathsf{lpi}(36131512665)=\{(1\text{-}2,3),(3\text{-}5,2),(3\text{-}6,1)\}\).
**Proposition 2.5** ([14, Corollary 2.11]).: _For any \(\mathbf{u},\mathbf{v}\in\mathcal{A}^{\star}\), \(\mathbf{u}\equiv_{\mathsf{bact}_{\infty}}\mathbf{v}\) if and only if \(\mathsf{ev}(\mathbf{u})=\mathsf{ev}(\mathbf{v})\), \(\mathsf{rpi}(\mathbf{u})=\mathsf{rpi}(\mathbf{v})\) and \(\mathsf{lpi}(\mathbf{u})=\mathsf{lpi}(\mathbf{v})\)._
Note that if the word \(\mathbf{u}\) has a \(b\)-\(a\) right precedence or a \(a\)-\(b\) left precedence for \(a<b\), then each word in \([\mathbf{u}]_{\mathsf{bact}_{\infty}}\) has a \(b\)-\(a\) right precedence or a \(a\)-\(b\) left precedence.
The Baxter monoid can also be defined by the presentation \(\langle\mathcal{A}\mid\mathcal{R}_{\mathsf{bact}_{\infty}}\rangle\), where
\[\mathcal{R}_{\mathsf{bact}_{\infty}}= \{(c\mathbf{u}ad\mathbf{v}b,c\mathbf{u}ad\mathbf{v}b):a\leq b<c \leq d\}\] \[\cup\{(b\mathbf{u}ad\mathbf{v}c,b\mathbf{u}ad\mathbf{v}c):a<b\leq c <d\}\,.\]
For each \(n\in\mathbb{N}\), a presentation for the Baxter monoid of rank \(n\) can be obtained by restricting generators and relations of the above presentation to generators in \(\mathcal{A}_{n}\). Note that these relations are length-preserving.
If the Baxter monoid \(\mathsf{bact}_{n}\) under the unary operation \({}^{*}\) is an involution monoid, then the relation \(\equiv_{\mathsf{bact}_{\infty}}\mathbf{v}\) must be compatible with the involution operation, that is, if \(\mathbf{u}\equiv_{\mathsf{bact}_{\infty}}\mathbf{v}\), then \(\mathbf{u}^{*}\equiv_{\mathsf{bact}_{\infty}}\mathbf{v}^{*}\).
Let \(\mathcal{A}_{n}^{\sharp}:=\{1^{\sharp}>2^{\sharp}>\cdots>n^{\sharp}\}\) be the alphabet \(\mathcal{A}_{n}\) on which the order relation has been reversed and \((\mathcal{A}_{n}^{\sharp})^{\sharp}:=\mathcal{A}_{n}\). For \(w_{1}w_{2}\cdots w_{n}\in\mathcal{A}^{\star}\), \((w_{1}w_{2}\cdots w_{n})^{\sharp}:=w_{n}^{\sharp}\cdots w_{2}^{\sharp}w_{1}^{\sharp}\).
**Proposition 2.6** ([18, Proposition 3.4]).: _Let \(\mathbf{w}\) and \(\mathbf{w}^{\prime}\) be two words in \(\mathcal{A}_{n}^{\star}\). Then \(\mathbf{w}\equiv_{\mathsf{bact}_{\infty}}\mathbf{w}^{\prime}\) if and only if \(\mathbf{w}^{\sharp}\equiv_{\mathsf{bact}_{\infty}}(\mathbf{w}^{\prime})^{\sharp}\)._
The relation \(\equiv_{\mathsf{bact}_{\infty}}\) is compatible with the unary operation \({}^{\sharp}\). Thus \((\mathsf{bact}_{n},\ ^{\sharp})\) is an involution monoid, and the involution \({}^{\sharp}\) is always called _Schutzenberger's involution_.
**Proposition 2.7**.: _Schutzenberger's involution is the unique involution on the Baxter monoid \(\mathsf{bact}_{n}\) for each \(n\geq 1\)._
Proof.: Suppose that \({}^{*}\) is an involution operation on \(\mathsf{bact}_{n}\). Note that the relations \(\mathcal{R}_{\mathsf{hypo}_{\infty}}\) are length-preserving. Then the involution of a generator in \(\mathcal{A}_{n}\) is still a generator in \(\mathcal{A}_{n}\). Since \(\mathsf{bact}_{1}\) has only one generator \(1\), we have \(1^{*}=1\). Thus the involution on \(\mathsf{bact}_{1}\) is trivial. For \(\mathsf{bact}_{n}\) with \(n\geq 2\), let \(a<b\leq n\). Then \((b\mathbf{u}ab\mathbf{v}a)^{*}=a^{*}\mathbf{v}^{*}b^{*}a^{*}\mathbf{u}^{*}b^{*} \equiv_{\mathsf{bact}_{\infty}}a^{*}\mathbf{v}^{*}a^{*}b^{*}\mathbf{u}^{*}b^{*} =(b\mathbf{u}ba\mathbf{v}a)^{*}\) by \(b\mathbf{u}ab\mathbf{v}a\equiv_{\mathsf{bact}_{\infty}}b\mathbf{u}b\mathbf{v}a \in\mathcal{R}_{\mathsf{hypo}_{\infty}}\). This implies \(a^{*}\mathbf{v}^{*}b^{*}a^{*}\mathbf{u}^{*}b^{*}\equiv_{\mathsf{bact}_{\infty }}a^{*}\mathbf{v}^{*}a^{*}b^{*}\mathbf{u}^{*}b^{*}\in\mathcal{R}_{\mathsf{bact}_ {\infty}}\), and so \(b^{*}<a^{*}\). Hence for any \(a<b\), we must have \(b^{*}<a^{*}\), whence \({}^{*}\) must be the order-reversing permutation on \(\mathcal{A}_{n}\). Therefore \({}^{*}\) is Schutzenberger's involution, and so Schutzenberger's involution is the unique involution on the Baxter monoid \(\mathsf{bact}_{n}\).
### Matrix representations over semirings
Recall that \(\mathbb{S}=(S,+,\cdot)\) is a _commutative semiring_ with additive identity \(\mathbf{0}\) and multiplicative identity \(\mathbf{1}\) if \(S\) is a set equipped with two binary operations \(+\) and \(\cdot\) such that \((S,+)\) and \((S,\ \cdot)\) are commutative monoids satisfying
\[a(b+c)=a\cdot b+a\cdot c\quad\text{and}\quad\mathbf{0}\cdot a=\mathbf{0}\]
for all \(a,b,c\in\mathbb{S}\). Semiring \(\mathbb{S}\) is _idempotent_ if \(a+a=a\) for all \(a\in\mathbb{S}\). An element \(a\in\mathbb{S}\) is _infinite multiplicative order_ if for any non-negative integers \(i,j\), \(a^{i}=a^{j}\) if and only if \(i=j\). In this paper, we always assume that \(\mathbb{S}\) is a commutative and idempotent semiring with \(\mathbf{0},\mathbf{1}\) containing an element of infinite multiplicative order. A common example of such semiring is the tropical semiring \(\mathbb{T}=(\mathbb{R}\cup\{-\infty\},\oplus,\otimes)\)
which is the set \(\mathbb{R}\) of real numbers together with minus infinity \(-\infty\), with the addition and multiplication defined as follows
\[a\oplus b=\max\{a,b\}\quad\text{and}\quad a\otimes b=a+b.\]
In other words, the tropical sum of two numbers is their maximum and the tropical product of two numbers is their sum, and \(-\infty\) is the additive identity and \(0\) is the multiplicative identity. Note that except \(-\infty\) and \(0\), all other elements in \(\mathbb{T}\) have infinite multiplicative order.
Note that the set of all \(n\times n\) matrices with entries in \(\mathbb{S}\) forms a semigroup under the matrix multiplication induced from the operations in \(\mathbb{S}\). We denote this semigroup by \(M_{n}(\mathbb{S})\). Let \(UT_{n}(\mathbb{S})\) be the subsemigroup of \(M_{n}(\mathbb{S})\) of all upper triangular \(n\times n\) matrices. For any matrix \(A\in M_{n}(\mathbb{S})\), denote by \(A^{D}\) the matrix obtained by reflecting \(A\) with respect to the secondary diagonal (from the top right to the bottom left corner), that is, \((A^{D})_{ij}=A_{(n+1-j)(n+1-i)}\). It is easy to verify that this unary operation \({}^{D}\) is an involution operation of \(M_{n}(\mathbb{S})\). A (linear) representation of a semigroup \(S\) [resp. involution semigroup \((S,\ ^{*})\)] is a homomorphism \(\rho:S\to M_{n}(\mathbb{S})\) [resp. \(\rho:(S,\ ^{*})\to(M_{n}(\mathbb{S}),\ ^{D})\) ]. The homomorphism \(\rho\) is said to be _faithful_ if it is injective. Note that an involution semigroup representation \(\rho:(S,\ ^{*})\to(M_{n}(\mathbb{S}),\ ^{D})\) deduces a semigroup representation \(\rho:S\to M_{n}(\mathbb{S})\), but a semigroup representation \(\rho:S\to M_{n}(\mathbb{S})\) can not be an involution semigroup representation \(\rho:(S,\ ^{*})\to(M_{n}(\mathbb{S}),\ ^{D})\). The tropical semiring is of interest as a natural carrier for representations of semigroups. For example, the bicyclic monoid \(\mathcal{B}:=\langle a,b\ |\ ba=1\rangle\), which is ubiquitous in infinite semigroup theory, admits no faithful finite dimensional representations over any field; however it has a number of natural representations over the tropical semiring [16, 26].
## 3. matrix representations of \((\mathsf{baxt}_{n},\ ^{\sharp})\)
In this section, we exhibit a faithful representation of \((\mathsf{baxt}_{n},\ ^{\sharp})\) for each finite \(n\) as an involution monoid of upper triangular matrices over \(\mathbb{S}\) under the skew transposition, and we prove that all involution semigroups \((\mathsf{baxt}_{n},\ ^{\sharp})\) with \(n\geq 4\) generate the same variety.
For convenience, denote by
\[\mathrm{P}=\begin{pmatrix}s&\mathbf{0}\\ \mathbf{0}&\mathbf{1}\end{pmatrix},\mathrm{Q}=\begin{pmatrix}\mathbf{1}& \mathbf{0}\\ \mathbf{0}&s\end{pmatrix},\mathrm{J}=\begin{pmatrix}\mathbf{1}&\mathbf{1}\\ \mathbf{0}&\mathbf{0}\end{pmatrix},\mathrm{K}=\begin{pmatrix}\mathbf{0}& \mathbf{1}\\ \mathbf{0}&\mathbf{1}\end{pmatrix}\]
where \(s\in\mathbb{S}\) is an element of infinite multiplicative order. Denote by \(\mathsf{diag}\{\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{n}\}\) the block diagonal matrix
\[\begin{pmatrix}\Lambda_{1}&&&\\ &\Lambda_{2}&&\\ &&\ddots&\\ &&&\Lambda_{n}\end{pmatrix}\]
where \(\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{n}\) are square matrices. And let \(\mathrm{E}_{n}\) be the \(n\times n\) matrix with \(\mathbf{1}\)s on the main diagonal and \(\mathbf{0}\)s elsewhere.
First we give a matrix representation of \((\mathsf{baxt}_{1},\ ^{\sharp})\). Define a map \(\varphi_{1}:\mathcal{A}_{1}\cup\{\varepsilon\}\to UT_{2}(\mathbb{S})\) given by \(\varepsilon\mapsto\mathrm{E}_{2}\) and \(1\mapsto\mathrm{PQ}\). Clearly, the map \(\varphi_{1}\) induces a faithful representation of \((\mathsf{baxt}_{1},\ ^{\sharp})\).
Next we consider the matrix representation of \((\mathsf{baxt}_{2},\ ^{\sharp})\). Define a map \(\varphi_{2}:\mathcal{A}_{2}\cup\{\varepsilon\}\to UT_{6}(\mathbb{S})\) given by \(\varepsilon\mapsto\mathrm{E}_{6}\),
\[1\mapsto\mathsf{diag}\{s,\mathrm{P},\mathrm{J},\mathbf{1}\},\ \ 2\mapsto\mathsf{diag}\{\mathbf{1},\mathrm{K},\mathrm{Q},s\}.\]
Clearly, \(\varphi_{2}\) can be extended to a homomorphism from \(\mathcal{A}_{2}^{*}\) to \(UT_{6}(\mathbb{S})\). Note that \(\varphi_{2}(1^{\sharp})=\varphi_{2}(2)=\mathsf{diag}\{\mathbf{1},\mathrm{K}, \mathrm{Q},s\}=(\varphi_{2}(1))^{D}\) and \(\varphi_{2}(2^{\sharp})=\varphi_{2}(1)=\varphi_{2}(2)=\varphi_{2}(1)=\varphi_ {2}(1)=\varphi_{2}(1)=\varphi_{2}(1)=\varphi_{2}(1)=\varphi_{2}(1)=\varphi_{2}( 1)=\varphi_{2
\(\mathsf{diag}\{s,\mathrm{P},\mathrm{J},\mathbf{1}\}=(\varphi_{2}(2))^{D}\). Thus \(\varphi_{2}\) can be extended to a homomorphism from \((\mathcal{A}_{2}^{\star},\ ^{\sharp})\) to \((UT_{6}(\mathbb{S}),\ ^{D})\). In fact, \(\varphi_{2}\) induces a faithful representation of \((\mathsf{baxt}_{2},\ ^{\sharp})\).
**Theorem 3.1**.: _The map \(\varphi_{2}:(\mathsf{baxt}_{2},\ ^{\sharp})\to(UT_{6}(\mathbb{S}),\ ^{D})\) is a faithful representation of \((\mathsf{baxt}_{2},\ ^{\sharp})\)._
Proof.: Note that \(\varphi_{2}\) is a homomorphism from \((\mathcal{A}_{2}^{\star},\ ^{\sharp})\) to \((UT_{6}(\mathbb{S}),\ ^{D})\). Then to show that the map \(\varphi_{2}\) induces a homomorphism from \((\mathsf{baxt}_{2},\ ^{\sharp})\) to \((UT_{6}(\mathbb{S}),\ ^{D})\), we only need to show that for any \(\mathbf{u},\mathbf{v}\in\mathcal{A}_{2}^{\star}\), if \(\mathbf{u}\equiv_{\mathsf{baxt}_{\infty}}\mathbf{v}\), then \(\varphi_{2}(\mathbf{u})=\varphi_{2}(\mathbf{v})\). By the definition of \(\varphi_{2}\), it is easy to verify that for any \(\mathbf{w}\in\mathcal{A}_{2}^{\star}\),
\[\varphi_{2}(\mathbf{w})=\mathsf{diag}\{\Lambda_{1},\Lambda_{2},\Lambda_{3}, \Lambda_{4}\}\]
where
\[\Lambda_{1} =\left\{\begin{array}{ll}s^{|\mathbf{w}|_{1}},&\text{ if }\{1\} \in\mathsf{sup}(\mathbf{w}),\\ \mathbf{1},&\text{ if }\{1\}\not\in\mathsf{sup}(\mathbf{w}),\end{array}\right.\] \[\Lambda_{2} =\left\{\begin{array}{ll}\mathrm{P}^{|\mathbf{w}|_{1}},&\text{ if }\mathsf{sup}(\mathbf{w})=\{1\},\\ \mathrm{K},&\text{ if }\{2\}\subseteq\mathsf{sup}(\mathbf{w})\text{ and }(\text{1-2},\ell)\not\in\mathsf{ lpi}(\mathbf{w})\text{ for any }\ell>0,\\ \mathrm{P}^{\ell_{1}}\mathrm{K},&\text{ if }\mathsf{sup}(\mathbf{w})=\{1,2\} \text{ and }(\text{1-2},\ell_{1})\in\mathsf{lpi}(\mathbf{w}),\end{array}\right.\] \[\Lambda_{3} =\left\{\begin{array}{ll}\mathrm{Q}^{|\mathbf{w}|_{2}},&\text{ if }\mathsf{sup}(\mathbf{w})=\{2\},\\ \mathrm{J},&\text{ if }\{1\}\subseteq\mathsf{sup}(\mathbf{w})\text{ and }(\text{2-1},r)\not\in\mathsf{ rpi}(\mathbf{w})\text{ for any }r>0,\\ \mathrm{J}\mathrm{Q}^{r_{1}},&\text{ if }\mathsf{sup}(\mathbf{w})=\{1,2\} \text{ and }(\text{2-1},r_{1})\in\mathsf{rpi}(\mathbf{w}),\end{array}\right.\] \[\Lambda_{4} =\left\{\begin{array}{ll}s^{|\mathbf{w}|_{2}},&\text{ if }\{2\}\in\mathsf{sup}( \mathbf{w}),\\ \mathbf{1},&\text{ if }\{2\}\not\in\mathsf{sup}(\mathbf{w}).\end{array}\right.\]
Since \(\mathsf{ev}(\mathbf{u})=\mathsf{ev}(\mathbf{v}),\mathsf{lpi}(\mathbf{u})= \mathsf{lpi}(\mathbf{v}),\mathsf{rpi}(\mathbf{u})=\mathsf{rpi}(\mathbf{v})\) by Proposition 2.5, it is routine to show that \(\varphi_{2}(\mathbf{u})=\varphi_{2}(\mathbf{v})\).
Suppose that \(\mathbf{u}\not\equiv_{\mathsf{baxt}_{\infty}}\mathbf{v}\). Then \(\mathsf{ev}(\mathbf{u})\neq\mathsf{ev}(\mathbf{v})\), or \(\mathsf{lpi}(\mathbf{u})\neq\mathsf{lpi}(\mathbf{v})\), or \(\mathsf{rpi}(\mathbf{u})\neq\mathsf{rpi}(\mathbf{v})\) by Proposition 2.5. By the definition of \(\varphi_{2}\), it is routine to show that \(\varphi_{2}(\mathbf{u})\neq\varphi_{2}(\mathbf{v})\). Hence \(\varphi_{2}\) is injective. Therefore the map \(\varphi_{2}:(\mathsf{baxt}_{2},\ ^{\sharp})\to(UT_{6}(\mathbb{S}),\ ^{D})\) is a faithful representation of \((\mathsf{baxt}_{2},\ ^{\sharp})\).
Next we consider the matrix representation of \((\mathsf{baxt}_{3},\ ^{\sharp})\). Define a map \(\varphi_{3}:\mathcal{A}_{3}\cup\{\varepsilon\}\to UT_{15}(\mathbb{S})\) given by \(\varepsilon\mapsto\mathrm{E}_{15}\),
\[1 \mapsto\mathsf{diag}\{s,\mathrm{P},\mathrm{P},\mathrm{E}_{2}, \mathbf{1},\mathrm{J},\mathrm{E}_{2},\mathrm{J},\mathbf{1}\},\] \[2 \mapsto\mathsf{diag}\{\mathbf{1},\mathrm{K},\mathrm{K},\mathrm{P}, s,\mathrm{Q},\mathrm{J},\mathbf{1}\},\] \[3 \mapsto\mathsf{diag}\{\mathbf{1},\mathrm{K},\mathrm{E}_{2},\mathrm{ K},\mathbf{1},\mathrm{E}_{2},\mathrm{Q},\mathrm{Q},s\}.\]
Clearly, \(\varphi_{3}\) can be extended to a homomorphism from \(\mathcal{A}_{3}^{\star}\) to \(UT_{15}(\mathbb{S})\). Note that \(\varphi_{3}(1^{\sharp})=\varphi_{3}(3)=(\varphi_{3}(1))^{D},\varphi_{3}(2^{ \sharp})=\varphi_{3}(2)=(\varphi_{3}(2))^{D}\) and \(\varphi_{3}(3^{\sharp})=\varphi_{3}(1)=(\varphi_{3}(1))^{D}\). Thus \(\varphi_{3}\) can be extended to a homomorphism from \((\mathcal{A}_{3}^{\star},\ ^{\sharp})\) to \((UT_{15}(\mathbb{S}),\ ^{D})\). In fact, \(\varphi_{3}\) induces a faithful representation of \((\mathsf{baxt}_{3},\ ^{\sharp})\).
**Theorem 3.2**.: _The map \(\varphi_{3}:(\mathsf{baxt}_{3},\ ^{\sharp})\to(UT_{15}(\mathbb{S}),\ ^{D})\) is a faithful representation of \((\mathsf{baxt}_{3},\ ^{\sharp})\)._
Proof.: Note that \(\varphi_{3}\) is a homomorphism from \((\mathcal{A}_{3}^{\star},\ ^{\sharp})\) to \((UT_{15}(\mathbb{S}),\ ^{D})\). Then to show that the map \(\varphi_{3}\) induces a homomorphism from \((\mathsf{baxt}_{3},\ ^{\sharp})\) to \((UT_{15}(\mathbb{S}),\ ^{D})\), we only need to show that for any \(\mathbf{u},\mathbf{v}\in\mathcal{A}_{3}^{\star}\), if \(\mathbf{u}\equiv_{\mathsf{hypo}_{\infty}}\mathbf{v}\), then \(\varphi_{3}(\mathbf{u})=\varphi_{3}(\mathbf{v})\). By the definition of \(\varphi_{3}\), it is easy to verify that for any \(\mathbf{w}\in\mathcal{A}_{3}^{\star}\),
\[\varphi_{3}(\mathbf{w})=\mathsf{diag}\{\Lambda_{1},\Lambda_{2},\Lambda_{3},\Lambda_{4 },\Lambda_{5},\Lambda_{6},\Lambda_{7},\Lambda_{8},\Lambda_{9}\}\]
where
\[\Lambda_{1} =\left\{\begin{array}{ll}s^{|{\bf w}|_{1}},&\text{if $\{1\}\in{\sf sup }({\bf w})$},\\ {\bf 1},&\text{if $\{1\}\not\in{\sf sup}({\bf w})$},\\ \end{array}\right.\] \[\Lambda_{2} =\left\{\begin{array}{ll}{\rm P}^{|{\bf w}|_{1}},&\text{if ${\sf sup}({\bf w})=\{1\}$},\\ {\rm P}^{\ell_{1}}{\rm K},&\text{if $\{1,2\}={\sf sup}({\bf w})$ and $(1 $-2,\ell_{1})\in{\sf lpi}({\bf w})$},\\ {\rm P}^{\ell_{2}}{\rm K},&\text{if $\{1,3\}\subseteq{\sf sup}({\bf w})$ and $(1 $-3,\ell_{2})\in{\sf lpi}({\bf w})$},\\ {\rm P}^{\ell_{1}}{\rm K},&\text{if $\{1,2,3\}={\sf sup}({\bf w})$ and $(1 $-2,\ell_{1})\in{\sf lpi}({\bf w})$},\\ {\rm K},&\text{otherwise},\end{array}\right.\] \[\Lambda_{3} =\left\{\begin{array}{ll}{\rm P}^{|{\bf w}|_{1}},&\text{if ${\sf sup}({\bf w})=\{1\}$ or $\{1,3\}$},\\ {\rm E}_{2},&\text{if ${\sf sup}({\bf w})=\{3\}$},\\ {\rm P}^{\ell_{1}}{\rm K},&\text{if $\{1,2\}\subseteq{\sf sup}({\bf w})$ and $(1 $-2,\ell_{1})\in{\sf lpi}({\bf w})$},\\ {\rm K},&\text{otherwise},\end{array}\right.\] \[\Lambda_{4} =\left\{\begin{array}{ll}{\rm E}_{2},&\text{if ${\sf sup}({\bf w})=\{1\}$},\\ {\rm P}^{|{\bf w}|_{2}},&\text{if ${\sf sup}({\bf w})=\{2\}$ or $\{1,2\}$},\\ {\rm P}^{\ell_{3}}{\rm K},&\text{if $\{2,3\}\subseteq{\sf sup}({\bf w})$ and $(2 $-3,\ell_{3})\in{\sf lpi}({\bf w})$},\\ {\rm K},&\text{otherwise},\end{array}\right.\] \[\Lambda_{5} =\left\{\begin{array}{ll}s^{|{\bf w}|_{2}},&\text{if $\{2\}\in{\sf sup }({\bf w})$},\\ {\bf 1},&\text{if $\{2\}\not\in{\sf sup}({\bf w})$},\\ \end{array}\right.\] \[\Lambda_{6} =\left\{\begin{array}{ll}{\rm Q}^{|{\bf w}|_{2}},&\text{if ${\sf sup}({\bf w})=\{2\}$ or $\{2,3\}$},\\ {\rm E}_{2},&\text{if ${\sf sup}({\bf w})=\{3\}$},\\ {\rm J}{\rm Q}^{r_{1}},&\text{if $\{1,2\}\subseteq{\sf sup}({\bf w})$ and $(2 $-1,r_{1})\in{\sf rpi}({\bf w})$},\\ {\rm J},&\text{otherwise},\end{array}\right.\] \[\Lambda_{7} =\left\{\begin{array}{ll}{\rm E}_{2},&\text{if ${\sf sup}({\bf w})=\{1\}$},\\ {\rm Q}^{|{\bf w}|_{3}},&\text{if ${\sf sup}({\bf w})=\{3\}$ or $\{1,3\}$},\\ {\rm J}{\rm Q}^{r_{3}},&\text{if $\{2,3\}\subseteq{\sf sup}({\bf w})$ and $(3 $-2,r_{3})\in{\sf rpi}({\bf w})$},\\ {\rm J},&\text{otherwise},\end{array}\right.\] \[\Lambda_{8} =\left\{\begin{array}{ll}{\rm Q}^{|{\bf w}|_{3}},&\text{if ${\sf sup}({\bf w})=\{3\}$},\\ {\rm J}{\rm Q}^{r_{2}},&\text{if $\{1,3\}\subseteq{\sf sup}({\bf w})$ and $(3 $-1,r_{2})\in{\sf rpi}({\bf w})$},\\ {\rm J}{\rm Q}^{r_{3}},&\text{if $\{2,3\}={\sf sup}({\bf w})$ and $(3 $-2,r_{3})\in{\sf rpi}({\bf w})$},\\ {\rm J}{\rm Q}^{r_{3}},&\text{if $\{1,2,3\}={\sf sup}({\bf w})$ and $(2 $-1,r_{1})\in{\sf rpi}({\bf w})$},\,(3\text{-}2,r_{3})\in{\sf rpi}({\bf w}),\\ {\rm J},&\text{otherwise},\end{array}\right.\] \[\Lambda_{9} =\left\{\begin{array}{ll}s^{|{\bf w}|_{3}},&\text{if $\{3\}\in{\sf sup}({\bf w})$},\\ {\bf 1},&\text{if $\{3\}\not\in{\sf sup}({\bf w})$}.\end{array}\right.\]
Since \({\sf ev}({\bf u})={\sf ev}({\bf v}),{\sf lpi}({\bf u})={\sf lpi}({\bf v}),{ \sf rpi}({\bf u})={\sf rpi}({\bf v})\) by Proposition 2.5, it is routine to show that \(\varphi_{3}({\bf u})=\varphi_{3}({\bf v})\).
Suppose that \({\bf u}\
definition of \(\sharp\), we have that \(j^{\sharp}<i^{\sharp}\) and there is at most one \(k\in\mathcal{A}_{n}\) satisfying \(k=k^{\sharp}\) and \(j-i=i^{\sharp}-j^{\sharp}\) when \(i\neq i^{\sharp},j\neq j^{\sharp}\). So there are seven cases about the order of \(i,j,i^{\sharp},j^{\sharp}\) in \(\mathcal{A}_{n}\): \(i^{\sharp}=j\), \(i<j=j^{\sharp}<i^{\sharp}\), \(j^{\sharp}<i=i^{\sharp}<j\), \(i<j<j^{\sharp}<i^{\sharp}\), \(j^{\sharp}<i^{\sharp}<i<j\), \(i<j^{\sharp}<j<i^{\sharp}\), \(j^{\sharp}<i<i^{\sharp}<j\). For any \(i<j\in\mathcal{A}_{n}\) with \(n\geq 4\), define a map \(\varphi_{ij}\) from \(\mathcal{A}_{n}^{*}\) to \(\mathsf{bax}_{3}\times\mathsf{bax}_{3}\) which can be determined by the following four cases according to the order of \(i,i^{\sharp},j,j^{\sharp}\) in \(\mathcal{A}_{n}\).
**Case 1.**\(i^{\sharp}=j\). Define a map \(\lambda:\mathcal{A}_{n}\to\mathsf{baxt}_{3}\) given by
\[k\mapsto\begin{cases}[1]_{\mathsf{baxt}_{3}}&\text{if }k=i,\\ _{\mathsf{baxt}_{3}}&\text{if }k=j,\\ _{\mathsf{baxt}_{3}}&\text{if }i<k<j,\\ _{\mathsf{baxt}_{3}}&\text{otherwise.}\end{cases}\]
Clearly, this map can be extended to a homomorphism \(\lambda:\mathcal{A}_{n}^{*}\to\mathsf{baxt}_{3}\). Define a map \(\lambda_{ij}:\mathcal{A}_{n}\to\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\) given by
\[k\mapsto(\lambda(k),\lambda(k)).\]
This map can be extended to a homomorphism \(\lambda_{ij}:\mathcal{A}_{n}^{*}\to\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\). Further, \(\lambda_{ij}\) is also a homomorphism from \((\mathcal{A}_{n}^{*},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\). This is because for any \(k\in\mathcal{A}_{n}\), \(\lambda(k^{\sharp})=(\lambda(k))^{\sharp}\) which follows from
\[\begin{cases}\lambda(k^{\sharp})=(\lambda(k))^{\sharp}=[3]_{\mathsf{baxt}_{3} }&\text{if }k=i,\\ \lambda(k^{\sharp})=(\lambda(k))^{\sharp}=[31]_{\mathsf{baxt}_{3}}&\text{if }i<k<j,\\ \lambda(k^{\sharp})=(\lambda(k))^{\sharp}=[1]_{\mathsf{baxt}_{3}}&\text{if }k=j,\\ \lambda(k^{\sharp})=(\lambda(k))^{\sharp}=[\varepsilon]_{\mathsf{baxt}_{3}}& \text{otherwise.}\end{cases}\]
Therefore for any \(\mathbf{w}=k_{1}k_{2}\cdots k_{n}\),
\[\lambda_{ij}(\mathbf{w^{\sharp}}) =(\lambda(\mathbf{w^{\sharp}}),\lambda(\mathbf{w^{\sharp}}))\] \[=(\lambda(k_{n}^{\sharp})\cdots\lambda(k_{1}^{\sharp}),\lambda(k _{n}^{\sharp})\cdots\lambda(k_{1}^{\sharp}))\] \[=((\lambda(k_{n}))^{\sharp}\cdots(\lambda(k_{1}))^{\sharp},(\lambda (k_{n}))^{\sharp}\cdots(\lambda(k_{1}))^{\sharp})\] \[=((\lambda(\mathbf{w}))^{\sharp},(\lambda(\mathbf{w}))^{\sharp})\] \[=(\lambda_{ij}(\mathbf{w}))^{\sharp}.\]
**Case 2.**\(i<j=j^{\sharp}<i^{\sharp}\) or \(j^{\sharp}<i=i^{\sharp}<j\). For convenience, let \(i_{1}=i,i_{2}=j\) and \(i_{3}=i^{\sharp}\) when \(i<j=j^{\sharp}<i^{\sharp}\) and \(i_{1}=j^{\sharp},i_{2}=i\) and \(i_{3}=j\) when \(j^{\sharp}<i=i^{\sharp}<j\). Define maps \(\theta_{1}:\mathcal{A}_{n}\to\mathsf{baxt}_{3}\) and \(\theta_{2}:\mathcal{A}_{n}\to\mathsf{baxt}_{3}\) by
\[k\mapsto\begin{cases}[1]_{\mathsf{baxt}_{3}}&\text{if }k=i_{1},\\ _{\mathsf{baxt}_{3}}&\text{if }k=i_{2},\\ _{\mathsf{baxt}_{3}}&\text{if }i_{1}<k<i_{2},\\ _{\mathsf{baxt}_{3}}&\text{otherwise,}\end{cases}\text{ and }\text{ }\text{ }k\mapsto\begin{cases}[2]_{\mathsf{baxt}_{3}}&\text{if }k=i_{2},\\ _{\mathsf{baxt}_{3}}&\text{if }k=i_{3},\\ _{\mathsf{baxt}_{3}}&\text{if }i_{2}<k<i_{3},\\ _{\mathsf{baxt}_{3}}&\text{otherwise,}\end{cases}\]
respectively. Clearly, \(\theta_{1},\theta_{2}\) can be extended to homomorphisms from \(\mathcal{A}_{n}^{*}\) to \(\mathsf{baxt}_{3}\) respectively. Define a map \(\theta_{ij}:\mathcal{A}_{n}\to\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\) by
\[k\mapsto(\theta_{1}(k),\theta_{2}(k)).\]
This map can be extended to a homomorphism \(\theta_{ij}:\mathcal{A}_{n}^{*}\to\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\). Further, \(\theta_{ij}\) is also a homomorphism from \((\mathcal{A}_{n}^{*},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\). This is because for
any \(k\in\mathcal{A}_{n}\), \(\theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp},(\theta_{1}(k))^{\sharp}=\theta_{ 2}(k^{\sharp})\) which follows from
\[\begin{cases}\theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp}=[\varepsilon]_{ \mathtt{bast}_{3}},&(\theta_{1}(k))^{\sharp}=\theta_{2}(k^{\sharp})=[3]_{ \mathtt{bast}_{3}}&\text{if $k=i_{1}$},\\ \theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp}=[\varepsilon]_{\mathtt{bast}_ {3}},&(\theta_{1}(k))^{\sharp}=\theta_{2}(k^{\sharp})=[32]_{\mathtt{bast}_{3}} &\text{if $i_{1}<k<i_{2}$},\\ \theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp}=[2]_{\mathtt{bast}_{3}},&( \theta_{1}(k))^{\sharp}=\theta_{2}(k^{\sharp})=[2]_{\mathtt{bast}_{3}}&\text{ if $k=i_{2}$},\\ \theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp}=[21]_{\mathtt{bast}_{3}},&( \theta_{1}(k))^{\sharp}=\theta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{bast}_{3 }}&\text{if $i_{2}<k<i_{3}$},\\ \theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp}=[1]_{\mathtt{bast}_{3}},&( \theta_{1}(k))^{\sharp}=\theta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{bast}_{3 }}&\text{if $k=i_{3}$},\\ \theta_{1}(k^{\sharp})=(\theta_{2}(k))^{\sharp}=[\varepsilon]_{\mathtt{bast}_ {3}},&(\theta_{1}(k))^{\sharp}=\theta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{ bast}_{3}}&\text{if $k<i_{1}$ or $k>i_{3}$}.\end{cases}\]
Therefore for any \(\mathbf{w}=k_{1}k_{2}\cdots k_{n}\),
\[\theta_{ij}(\mathbf{w}^{\sharp}) =(\theta_{1}(\mathbf{w}^{\sharp}),\theta_{2}(\mathbf{w}^{\sharp }))\] \[=(\theta_{1}(k_{n}^{\sharp})\cdots\theta_{1}(k_{1}^{\sharp}), \theta_{2}(k_{n}^{\sharp})\cdots\theta_{2}(k_{1}^{\sharp}))\] \[=((\theta_{2}(k_{n}))^{\sharp}\cdots(\theta_{2}(k_{1}))^{\sharp},(\theta_{1}(k_{n}))^{\sharp}\cdots(\theta_{1}(k_{1}))^{\sharp})\] \[=((\theta_{2}(\mathbf{w}))^{\sharp},(\theta_{1}(\mathbf{w}))^{ \sharp})\] \[=(\theta_{ij}(\mathbf{w}))^{\sharp}.\]
**Case 3.**\(i<j<j^{\sharp}<i^{\sharp}\) or \(j^{\sharp}<i^{\sharp}<i<j\). For convenience, let \(i_{1}=i,i_{2}=j,i_{3}=j^{\sharp}\) and \(i_{4}=i^{\sharp}\) when \(i<j<j^{\sharp}<i^{\sharp}\) and \(i_{1}=j^{\sharp},i_{2}=i^{\sharp},i_{3}=i\) and \(i_{4}=j\) when \(j^{\sharp}<i^{\sharp}<i<j\). Define maps \(\eta_{1}:\mathcal{A}_{n}\rightarrow\mathtt{bast}_{3}\) and \(\eta_{2}:\mathcal{A}_{n}\rightarrow\mathtt{bast}_{3}\) by
\[k\mapsto\begin{cases}[1]_{\mathtt{bast}_{3}}&\text{if $k=i_{1}$},\\ \parbox{142.367913pt}{$\text{if $k=i_{2}$},}\\ \parbox{142.367913pt}{$\text{if $i_{1}<k<i_{2}$},}\\ \parbox{142.367913pt}{$\text{otherwise},}\end{cases}\quad\text{and}\quad k \mapsto\begin{cases}[2]_{\mathtt{bast}_{3}}&\text{if $k=i_{3}$},\\ \parbox{142.367913pt}{$\text{if $k=i_{4}$},}\\ \parbox{142.367913pt}{$\text{if $i_{3}<k<i_{4}$},}\\ \parbox{142.367913pt}{$\text{otherwise},}\end{cases}\]
respectively. Clearly, \(\eta_{1},\eta_{2}\) can be extended to homomorphisms from \(\mathcal{A}_{n}^{*}\) to \(\mathtt{bast}_{3}\) respectively. Define a map \(\eta_{ij}:\mathcal{A}_{n}\rightarrow\mathtt{bast}_{3}\times\mathtt{bast}_{3}\) by
\[k\mapsto(\eta_{1}(k),\eta_{2}(k)).\]
This map can be extended to a homomorphism from \(\mathcal{A}_{n}^{*}\) to \(\mathtt{bast}_{3}\times\mathtt{bast}_{3}\). Further, \(\eta_{ij}\) is also a homomorphism from \((\mathcal{A}_{n}^{*},\ ^{\sharp})\) to \((\mathtt{bast}_{3}\times\mathtt{bast}_{3},\ ^{\sharp})\). This is because for any \(k\in\mathcal{A}_{n}\), \(\eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp},(\eta_{1}(k))^{\sharp}=\eta_{2}(k^{ \sharp})\) which follows from
\[\begin{cases}\eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[\varepsilon]_{ \mathtt{bast}_{3}},&(\eta_{1}(k))^{\sharp}=\eta_{2}(k^{\sharp})=[3]_{\mathtt{ bast}_{3}}&\text{if $k=i_{1}$},\\ \eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[\varepsilon]_{\mathtt{bast}_{3}},&( \eta_{1}(k))^{\sharp}=\eta_{2}(k^{\sharp})=[32]_{\mathtt{bast}_{3}}&\text{if $i_{1}<k<i_{2}$},\\ \eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[\varepsilon]_{\mathtt{bast}_{3}},&( \eta_{1}(k))^{\sharp}=\eta_{2}(k^{\sharp})=[2]_{\mathtt{bast}_{3}}&\text{if $k=i_{2}$},\\ \eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[2]_{\mathtt{bast}_{3}},&(\eta_{1} (k))^{\sharp}=\eta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{bast}_{3}}&\text{if $k=i_{3}$},\\ \eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[21]_{\mathtt{bast}_{3}},&(\eta_{1} (k))^{\sharp}=\eta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{bast}_{3}}&\text{if $i_{3}<k<i_{4}$},\\ \eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[1]_{\mathtt{bast}_{3}},&(\eta_{1} (k))^{\sharp}=\eta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{bast}_{3}}&\text{if $k=i_{4}$},\\ \eta_{1}(k^{\sharp})=(\eta_{2}(k))^{\sharp}=[\varepsilon]_{\mathtt{bast}_{3}},&( \eta_{1}(k))^{\sharp}=\eta_{2}(k^{\sharp})=[\varepsilon]_{\mathtt{bast}_{3}}& \text{otherwise}.\end{cases}\]
Therefore it is routine to verify that \(\eta_{ij}(\mathbf{w}^{\sharp})=(\eta_{ij}(\mathbf{w}))^{\sharp}\) for any \(\mathbf{w}\in\mathcal{A}_{n}^{*}\).
**Case 4.**\(i<j^{\sharp}<j<i^{\sharp}\) or \(j^{\sharp}<i<i^{\sharp}<j\). For convenience, let \(i_{1}=i,i_{2}=j^{\sharp},i_{3}=j\) and \(i_{4}=i^{\sharp}\) when \(i<j^{\sharp}<j<i^{\sharp}\) and \(i_{1}=j^{\sharp},i_{2}=i,i_{3}=i^{\sharp}\) and \(i_{4}=j\) when \(j^{\sharp}<i<i^{\sharp}<j\). Define maps \(\kappa_{1}:\mathcal{A}_{n}\rightarrow\mathtt{bast}_{3}\) and \(\kappa_{2}:\mathcal{A}_{n}\rightarrow\mathtt{bast}_{3}\) by
\[k\mapsto\begin{cases}[1]_{\mathtt{bast}_{3}}&\text{if $k=i_{1}$},\\ \parbox{142.367913pt}{$\text{if $k=i_{1}$},}\\ \parbox{142.367913pt}{$\text{if $k=i_{2}$},}\\ \parbox{142.367913pt}{$\text{if $k=i_{4}$},}\\ \parbox{142.367913pt}{$\text{if $i
respectively. Clearly, \(\kappa_{1},\kappa_{2}\) can be extended to homomorphisms from \(\mathcal{A}_{n}^{\star}\) to \(\mathsf{baxt}_{3}\) respectively. Define a map \(\kappa_{ij}:\mathcal{A}_{n}\to\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\),
\[k\mapsto(\kappa_{1}(k),\kappa_{2}(k)).\]
Clearly, this map can be extended to a homomorphism from \(\mathcal{A}_{n}^{\star}\) to \(\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\). Further, \(\kappa_{ij}\) is a homomorphism from \((\mathcal{A}_{n}^{\star},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\). This is because for any \(k\in\mathcal{A}_{n}\), \(\kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp},(\kappa_{1}(k))^{\sharp}= \kappa_{2}(k^{\sharp})\) which follows from
\[\begin{cases}\kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[\varepsilon]_{ \mathsf{baxt}_{3}},\ \ \ (\kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[3]_{\mathsf{baxt}_{3}}&\text{ if $k=i_{1}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[\varepsilon]_{\mathsf{baxt}_ {3}},\ \ \ (\kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[32]_{\mathsf{baxt}_{3}}&\text{ if $i_{1}<k<i_{2}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[2]_{\mathsf{baxt}_{3}},\ \ \ (\kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[32]_{\mathsf{baxt}_{3}}&\text{ if $k=i_{2}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[21]_{\mathsf{baxt}_{3}},\ ( \kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[2]_{\mathsf{baxt}_{3}}&\text{ if $i_{2}<k<i_{3}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[21]_{\mathsf{baxt}_{3}},\ \ ( \kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[2]_{\mathsf{baxt}_{3}}&\text{ if $k=i_{3}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[21]_{\mathsf{baxt}_{3}},\ \ ( \kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[\varepsilon]_{\mathsf{baxt}_{3}} &\text{ if $i_{3}<k<i_{4}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[1]_{\mathsf{baxt}_{3}},\ \ \ ( \kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[\varepsilon]_{\mathsf{baxt}_{3}} &\text{ if $k=i_{4}$},\\ \kappa_{1}(k^{\sharp})=(\kappa_{2}(k))^{\sharp}=[\varepsilon]_{\mathsf{baxt}_ {3}},\ \ \ (\kappa_{1}(k))^{\sharp}=\kappa_{2}(k^{\sharp})=[\varepsilon]_{\mathsf{baxt}_{3}} &\text{ otherwise}.\end{cases}\]
Therefore it is routine to verify that \(\kappa_{ij}(\mathbf{w}^{\sharp})=(\kappa_{ij}(\mathbf{w}))^{\sharp}\) for any \(\mathbf{w}\in\mathcal{A}_{n}^{\star}\).
Now we can define the map \(\varphi_{ij}:\mathcal{A}_{n}^{\star}\to\mathsf{baxt}_{3}\times\mathsf{baxt}_{3}\) by
\[\varphi_{ij}=\left\{\begin{array}{ll}\lambda_{ij}&\text{if $i^{\sharp}=j$},\\ \theta_{ij}&\text{if $i<j=j^{\sharp}<i^{\sharp}$ or $j^{\sharp}<i=i^{\sharp}<j$},\\ \eta_{ij}&\text{if $i<j<j^{\sharp}<i^{\sharp}$ or $j^{\sharp}<i^{\sharp}<i<j$},\\ \kappa_{ij}&\text{if $i<j^{\sharp}<j<i^{\sharp}$ or $j^{\sharp}<i<i^{\sharp}<j$} \end{array}\right.\]
where \(\lambda_{ij},\theta_{ij},\eta_{ij},\kappa_{ij}\) are defined as above. Since each of maps \(\lambda_{ij},\theta_{ij},\eta_{ij},\kappa_{ij}\) is a homomorphism from \((\mathcal{A}_{n}^{\star},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\). Therefore \(\varphi_{ij}\) is a homomorphism from \((\mathcal{A}_{n}^{\star},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\).
**Lemma 3.3**.: _The homomorphism \(\varphi_{ij}\) induces a homomorphism \(\varphi_{ij}:(\mathsf{baxt}_{n},\ ^{\sharp})\to(\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\) for any \(n\geq 4\)._
Proof.: Note that \(\varphi_{ij}\) is a homomorphism from \((\mathcal{A}_{n}^{\star},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\). Then to show that the homomorphism \(\varphi_{ij}\) induces a homomorphism from \((\mathsf{baxt}_{n},\ ^{\sharp})\) to \((\mathsf{baxt}_{3}\times\mathsf{baxt}_{3},\ ^{\sharp})\), we only need to show that for any \(\mathbf{u},\mathbf{v}\in\mathcal{A}_{n}^{\star}\), if \(\mathbf{u}\equiv_{\mathsf{baxt}_{\infty}}\mathbf{v}\), then \(\varphi_{ij}(\mathbf{u})=\varphi_{ij}(\mathbf{v})\). It follows from Proposition 2.5 and the definition of \(\varphi_{ij}\) that the evaluations of the first and the second components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) are the same respectively. In the following, we prove that the left and right precedences of the first and the second components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) are the same respectively.
**Case 1.**\(\varphi_{ij}=\lambda_{ij}\). If there exists \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), since the 1-3 left precedence in the first component of \(\lambda_{ij}(\mathbf{u})\) [resp. \(\lambda_{ij}(\mathbf{v})\)] corresponds to some \(i\)-\(h\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] with \(i<h\leq j\) and the 3-1 right precedence in the first component of \(\lambda_{ij}(\mathbf{u})\) [resp. \(\lambda_{ij}(\mathbf{v})\)] corresponds to some \(h\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] with \(i<h\leq j\), it follows from Proposition 2.5 that the first components of \(\lambda_{ij}(\mathbf{u})\) and \(\lambda_{ij}(\mathbf{v})\) have the same 1-3 left precedence and the same 3-1 right precedence; if there is no \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), then since the 1-3 left precedence in the first component of \(\lambda_{ij}(\mathbf{u})\) [resp. \(\lambda_{ij}(\mathbf{v})\)] corresponds to the \(i\)-\(j\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] and the 3-1 right precedence in the first component of \(\lambda_{ij}(\mathbf{u})\) [resp. \(\lambda_{ij}(\mathbf{v})\)] corresponds to the \(j\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)], it follows from Proposition 2.5 that the first component of \(\lambda_{ij}(\mathbf{u})\) and the first component of \(\lambda_{ij}(\mathbf{v})\) have the same 1-3 left precedence and the same 3-1 right precedence. A similar argument can show that the left and right precedences of the second components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) are the same.
**Case 2.**\(\varphi_{ij}=\theta_{ij},\eta_{ij}\) or \(\kappa_{ij}\).
**2.1.**\(i<j=j^{\sharp}<i^{\sharp}\), \(i<j<j^{\sharp}<i^{\sharp}\) or \(i<j^{\sharp}<j<i^{\sharp}\). If there exists \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), since the 1-2 left precedence in the first component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to some \(i\)-\(h\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] with \(i<h\leq j\) and the 2-1 right precedence in the first component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to some \(h\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] for \(i<h\leq j\), it follows from Proposition 2.5 that the first components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) have the same 1-2 left precedence and the same 2-1 right precedence; if there is no \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), then since the 1-2 left precedence in the first component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to the \(i\)-\(j\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] and the 2-1 right precedence in the first component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to the \(j\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)], it follows from Proposition 2.5 that the first component of \(\varphi_{ij}(\mathbf{u})\) and the first component of \(\varphi_{ij}(\mathbf{v})\) have the same 1-2 left precedence and the same 2-1 right precedence. A similar argument can show that the left and right precedences of the second components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) are the same.
**2.2.**\(j^{\sharp}<i=i^{\sharp}<j\), \(j^{\sharp}<i^{\sharp}<i<j\), or \(j^{\sharp}<i<i^{\sharp}<j\). If there exists \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), since the 2-3 left precedence in the second component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to some \(i\)-\(h\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] with \(i<h\leq j\) and the 3-2 right precedence in the second component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to some \(h\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] with \(i<h\leq j\), it follows from Proposition 2.5 that the second components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) have the same 2-3 left precedence and the same 3-2 right precedence; if there is no \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), then since the 2-3 left precedence in the second component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to some \(i\)-\(j\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] and the 3-2 right precedence in the second component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to the \(j\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)], it follows from Proposition 2.5 that the second component of \(\varphi_{ij}(\mathbf{u})\) and the second component of \(\varphi_{ij}(\mathbf{v})\) have the same 2-3 left precedence and the same 3-2 right precedence; if there is no \(k\in\mathsf{sup}(\mathbf{u})\) satisfying \(i<k<j\), then since the 2-3 left precedence in the second component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to some \(i\)-\(j\) left precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)] and the 3-2 right precedence in the second component of \(\varphi_{ij}(\mathbf{u})\) [resp. \(\varphi_{ij}(\mathbf{v})\)] corresponds to the \(j\)-\(i\) right precedence in \(\mathbf{u}\) [resp. \(\mathbf{v}\)], it follows from Proposition 2.5 that the second component of \(\varphi_{ij}(\mathbf{u})\) and the second component of \(\varphi_{ij}(\mathbf{v})\) have the same 2-3 left precedence and the same 3-2 right precedence. A similar argument can show that the left and right precedences of the first components of \(\varphi_{ij}(\mathbf{u})\) and \(\varphi_{ij}(\mathbf{v})\) are the same.
**Lemma 3.4**.: _Let \(\mathbf{u},\mathbf{v}\in\mathcal{A}_{n}^{\star}\) for any \(n\geq 4\). Then \(\mathbf{u}\equiv_{\mathtt{bact}_{\infty}}\mathbf{v}\) if and only if \(\varphi_{ij}(\mathbf{u})=\varphi_{ij}(\mathbf{v})\) for all \(1\leq i<j\leq n\)._
Proof.: The necessity follows from the proof of Lemma 3.3. Let \(\mathbf{w}\in\mathcal{A}_{n}^{\star}\) for some \(n\geq 4\). Suppose \(\mathsf{sup}(\mathbf{w})=\{a_{1}<\cdots<a_{\ell}\}\) for some \(\ell\in\mathbb{N}\). We can obtain the evaluation of \(\mathbf{w}\) from \(\varphi_{a_{i},a_{j}}(\mathbf{w})\). For any \(i\) with \(1\leq i<\ell\), if \(a_{i}^{\sharp}=a_{i+1}\), then since there is no \(k\in\mathsf{sup}(\mathbf{w})\) satisfying \(a_{i}<k<a_{i+1}\), it follows from the definition of \(\varphi_{ij}\) that the number of occurrences of 1 in the first [resp. second] component of \(\varphi_{ij}(\mathbf{w})\) equals to \(\left|\mathbf{w}\right|_{a_{i}}\) and the number of occurrences of 3 in the first [resp. second] component of \(\varphi_{ij}(\mathbf{w})\) equals to \(\left|\mathbf{w}\right|_{a_{i+1}}\); if \(a_{i+1}^{\sharp}<a_{i}=a_{i}^{\sharp}<a_{i+1}\), \(a_{i+1}^{\sharp}<a_{i}^{\sharp}<a_{i+1}\), or \(a_{i+1}^{\sharp}<a_{i}<a_{i}^{\sharp}<a_{i+1}\), then since there is no \(k\in\mathsf{sup}(\mathbf{w})\) satisfying \(a_{i}<k<a_{i+1}\), it follows from the definition of \(\varphi_{ij}\) that the number of occurrences of 2 in the second component of \(\varphi_{ij}(\mathbf{w})\) equals to \(\left|\mathbf{w}\right|_{a_{i+1}}\). Therefore both the number of occurrences of \(a_{i}\) and \(a_{i+1}\) in \(\mathbf{w}\) can be derived from the maps \(\varphi_{a_{i}a_{i+1}}\).
We can obtain the left and right precedences of \(\mathbf{w}\) from \(\varphi_{a_{i},a_{j}}(\mathbf{w})\). If \(a_{i},a_{j}\) satisfy \(a_{i}^{\sharp}=a_{j}\), \(a_{i}<a_{j}=a_{j}^{\sharp}<a_{i}^{\sharp}\), \(a_{i}<a_{j}<a_{j}^{\sharp}<a_{i}^{\sharp}\) or \(a_{i}<a_{j}^{\sharp}<a_{j}<a_{i}^{\sharp}\), then we check the the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\); if \(a_{i},a_{j}\) satisfy \(a_{j}^{\sharp}<a_{i}=a_{i}^{\sharp}<a_{j}\), \(a_{j}^{\sharp}<a_{i}^{\sharp}<a_{i}<a_{j}\), or \(a_{j}^{\sharp}<a_{i}<a_{i}^{\sharp}<a_{j}\), then we check the the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\).
For each \(a_{j}\) satisfying \(a_{j}^{\sharp}<a_{j}\), if \(a_{i}\) satisfying \(a_{j}^{\sharp}<a_{i}<a_{j}\) is the largest number such that the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) starts with \(2\), then, when reading the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) from left-to-right, the first occurrence of \(3\) corresponds to the first occurrence of \(a_{j}\), and all occurrences of \(2\) before the first occurrence of \(3\) correspond to all occurrences of \(a_{i}\) before the first occurrence of \(3\) correspond to all occurrences of \(a_{i}\) before the first occurrence of \(a_{j}\). Hence \(\mathbf{w}\) has a \(a_{i}\)-\(a_{j}\) left precedence. Otherwise, if \(a_{i}\) satisfying \(a_{i}^{\sharp}=a_{j}\) or \(a_{i}<a_{j}<a_{i}^{\sharp}\) is the largest number such that the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) starts with \(1\), then, when reading the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) from left-to-right, the first occurrence of \(2\) corresponds to the first occurrence of \(a_{j}\), and all occurrences of \(1\) before the first occurrence of \(2\) or \(3\) correspond to all occurrences of \(a_{i}\) before the first occurrence of \(a_{j}\), and so \(\mathbf{w}\) has a \(a_{i}\)-\(a_{j}\) left precedence; otherwise \(\mathbf{w}\) does not have a \(a_{i}\)-\(a_{j}\) left precedence. For each \(a_{j}\) satisfying \(a_{j}\leq a_{j}^{\sharp}\), if \(a_{i}<a_{j}\) is the largest number such that the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) starts with \(1\), then, when reading the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) from left-to-right, the first occurrence of \(2\) corresponds to the first occurrence of \(a_{j}\), and all occurrences of \(1\) before the first occurrence of \(2\) correspond to all occurrences of \(a_{i}\) before the first occurrence of \(a_{j}\), and so \(\mathbf{w}\) has a \(a_{i}\)-\(a_{j}\) left precedence; otherwise \(\mathbf{w}\) does not have a \(a_{i}\)-\(a_{j}\) left precedence.
For each \(a_{i}\) satisfying \(a_{i}<a_{i}^{\sharp}\), if \(a_{j}\) satisfying \(a_{i}<a_{j}\leq a_{i}^{\sharp}\) is the smallest number such that the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) ends with \(2\) or \(3\), then, when reading the first component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) from right-to-left, the first occurrence of \(2\) corresponds to the first occurrence of \(2\) corresponds to all occurrences of \(a_{j}\) before the first occurrence of \(a_{i}\), and so \(\mathbf{w}\) has a \(a_{j}\)-\(a_{i}\) right precedence; otherwise \(\mathbf{w}\) does not have a \(a_{j}\)-\(a_{i}\) right precedence. Otherwise, if \(a_{j}\) satisfying \(a_{i}^{\sharp}>a_{i}<a_{i}^{\sharp}<a_{j}\) is the smallest number such that the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) ends with \(3\), then, when reading the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) from right-to-left, the first occurrence of \(2\) corresponds to the first occurrence of \(a_{i}\), and all occurrences of \(3\) before the first occurrence of \(2\) correspond to all occurrences of \(a_{j}\) before the first occurrence of \(a_{i}\), and so \(\mathbf{w}\) has a \(a_{j}\)-\(a_{i}\) right precedence; otherwise \(\mathbf{w}\) does not have a \(a_{j}\)-\(a_{i}\) right precedence. For each \(a_{i}\) satisfying \(a_{i}\geq a_{i}^{\sharp}\), if \(a_{j}\) is the smallest number such that the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) ends with \(3\), then, when reading the second component of \(\varphi_{a_{i},a_{j}}(\mathbf{w})\) from right-to-left, the first occurrence of \(2\) corresponds to the first occurrence of \(a_{i}\), and all occurrences of \(3\) before the first occurrence of \(2\) correspond to all occurrences of \(a_{j}\) before the first occurrence of \(2\) correspond to all occurrences of \(a_{j}\) before the first occurrence of \(a_{i}\), and so \(\mathbf{w}\) has a \(a_{j}\)-\(a_{i}\) right precedence; otherwise \(\mathbf{w}\) does not have a \(a_{j}\)-\(a_{i}\) right precedence.
Therefore, if \(\varphi_{ij}(\mathbf{u})=\varphi_{ij}(\mathbf{v})\) for all \(1\leq i<j\leq n\), then by the above arguments, \(\mathbf{u}\) and \(\mathbf{v}\) have the same evaluations and left and right precedences. Consequently, \(\mathbf{u}\equiv_{\mathtt{baxd}_{\infty}}\mathbf{v}\).
For each \(n\in\mathbb{N}\), with \(n\geq 4\), let \(I_{n}\) be the index set
\[\{(i,j):1\leq i<j\leq n\}.\]
Now, consider the map
\[\phi_{n}:(\mathtt{baxt}_{n},\ ^{\sharp})\rightarrow\prod_{I_{n}}(\mathtt{baxt}_{3} \times\mathtt{baxt}_{3},\ ^{\sharp}),\]
whose \((i,j)\)-th component is given by \(\varphi_{ij}([\mathbf{w}]_{\mathtt{bast}_{n}})\) for \(\mathbf{w}\in\mathcal{A}_{n}^{\star}\) and \((i,j)\in I_{n}\).
**Proposition 3.5**.: _The map \(\phi_{n}\) is an embedding from \((\mathtt{bast}_{n},\ ^{\sharp})\) to \((\mathtt{bast}_{3}\times\mathtt{bast}_{3},\ ^{\sharp})\)._
Proof.: The map \(\phi_{n}\) is a homomorphism by Lemma 3.3. It follows from the definition of \(\phi_{n}\) and Lemma 3.4 that \([\mathbf{u}]_{\mathtt{bast}_{n}}=[\mathbf{v}]_{\mathtt{bast}_{n}}\) if and only if \(\varphi_{n}([\mathbf{u}]_{\mathtt{bast}_{n}})=\varphi_{n}([\mathbf{v}]_{ \mathtt{bast}_{n}})\) for any \(\mathbf{u},\mathbf{v}\in\mathcal{A}_{n}^{\star}\). Hence \(\phi_{n}\) is an embedding.
For any \(([\mathbf{u}_{1}]_{\mathtt{bast}_{3}},[\mathbf{u}_{2}]_{\mathtt{bast}_{3}}, \dots,[\mathbf{u}_{2|I_{n}|}]_{\mathtt{bast}_{3}})\in\prod\limits_{2I_{n}} \mathtt{bast}_{3}\), define an involution operation \({}^{\sharp}\) on \(\prod\limits_{2I_{n}}\mathtt{bast}_{3}\) by
\[([\mathbf{u}_{1}]_{\mathtt{bast}_{3}},[\mathbf{u}_{2}]_{\mathtt{bast}_{3}}, \dots,[\mathbf{u}_{2|I_{n}|}]_{\mathtt{bast}_{3}})^{\sharp}=([\mathbf{u}_{2|I_ {n}|}]_{\mathtt{bast}_{3}}^{\sharp},\dots,[\mathbf{u}_{2}]_{\mathtt{bast}_{3} }^{\sharp},[\mathbf{u}_{1}]_{\mathtt{bast}_{3}}^{\sharp}).\]
Define a map \(\psi:\prod\limits_{I_{n}}(\mathtt{bast}_{3}\times\mathtt{bast}_{3},\ ^{\sharp})\to(\prod\limits_{2I_{n}}\mathtt{bast}_{3},\ ^{\sharp})\) given by
\[(([\mathbf{u}_{1}]_{\mathtt{bast}_{3}},[\mathbf{u}_{2}]_{\mathtt{bast}_{3}}),([\mathbf{u}_{3}]_{\mathtt{bast}_{3}},[\mathbf{u}_{2}]_{\mathtt{bast}_{4}}),\dots,([\mathbf{u}_{2|I_{n}|-1}]_{\mathtt{bast}_{3}},[\mathbf{u}_{2|I_{n}|}]_ {\mathtt{bast}_{3}}))\] \[\mapsto([\mathbf{u}_{1}]_{\mathtt{bast}_{3}},[\mathbf{u}_{3}]_{ \mathtt{bast}_{3}},\dots,[\mathbf{u}_{2|I_{n}|-1}]_{\mathtt{bast}_{3}},[ \mathbf{u}_{2|I_{n}|}]_{\mathtt{bast}_{3}},\dots,[\mathbf{u}_{4}]_{\mathtt{bast }_{3}},[\mathbf{u}_{2}]_{\mathtt{bast}_{3}}).\]
It is routine to verify that the map \(\psi\) is an isomorphism. By Theorem 3.2, each element in \((\mathtt{bast}_{3},\ ^{\sharp})\) corresponds to a matrix in \(UT_{15}(\mathbb{S})\) and the involution \({}^{\sharp}\) on \((\mathtt{bast}_{3},\ ^{\sharp})\) corresponds to the skew transposition on \(UT_{15}(\mathbb{S})\). It follows that there is an embedding, denoted by \(\varphi\), from \((\prod\limits_{2I_{n}}\mathtt{bast}_{3},\ ^{\sharp})\) to \((UT_{30|I_{n}|}(\mathbb{S}),\ ^{D})\). Let
\[\varphi_{n}=\varphi\circ\psi\circ\phi_{n}.\]
Then the following result hold.
**Theorem 3.6**.: _For each \(n\geq 4\), the map \(\varphi_{n}:(\mathtt{bast}_{n},\ ^{\sharp})\to(UT_{30|I_{n}|}(\mathbb{S}),\ ^{D})\) is a faithful representation of \((\mathtt{bast}_{n},\ ^{\sharp})\)._
Let \(a<b<c<d\) be a \(4\)-element ordered alphabet and
\[B=\langle a,b,c,d\,|\,\mathcal{R}_{\mathtt{bast}_{\infty}},ac=ca,ad=da,bc=cb, bd=db\rangle\cup\{1\}\]
be a monoid. The involution operation \({}^{\sharp}\) on \(B\) can be defined by \(a\mapsto d,b\mapsto c\).
**Theorem 3.7**.: _For any \(m,n\geq 4\), the involution monoids \((\mathtt{bast}_{m},\ ^{\sharp})\) and \((\mathtt{bast}_{n},\ ^{\sharp})\) generate the same variety._
Proof.: It follows from Proposition 3.5 that
\[\mathtt{Var}(\mathtt{bast}_{4},\ ^{\sharp})\subseteq\mathtt{Var}(\mathtt{bast }_{5},\ ^{\sharp})\subseteq\dots\subseteq\mathtt{Var}(\mathtt{bast}_{3}\times \mathtt{bast}_{3},\ ^{\sharp}).\]
It suffices to show that \(\mathtt{Var}(\mathtt{bast}_{3}\times\mathtt{bast}_{3},\ ^{\sharp})\subseteq\mathtt{Var}( \mathtt{bast}_{4},\ ^{\sharp})\). Clearly \((B,\ ^{\sharp})\) is a homomorphism image of \((\mathtt{bast}_{4},\ ^{\sharp})\), and so \(\mathtt{Var}(B,\ ^{\sharp})\subseteq\mathtt{Var}(\mathtt{bast}_{4},\ ^{\sharp})\). In the following, we show that \(\mathtt{Var}(\mathtt{bast}_{3}\times\mathtt{bast}_{3},\ ^{\sharp})\subseteq\mathtt{Var}(B,\ ^{\sharp})\).
Let \(\varphi\) be a map from \((\mathtt{bast}_{3}\times\mathtt{bast}_{3},\ ^{\sharp})\) to \((B,\ ^{\sharp})\times(B,\ ^{\sharp})\times(B,\ ^{\sharp})\) given by
\[([\varepsilon]_{\mathtt{bast}_{3}},[\varepsilon]_{\mathtt{bast}_{3}}) \mapsto(a,a,1), ([2]_{\mathtt{bast}_{3}},[\varepsilon]_{\mathtt{bast}_{3}}) \mapsto(b,ba,a),\] \[([3]_{\mathtt{bast}_{3}},[\varepsilon]_{\mathtt{bast}_{3}}) \mapsto(1,b,b), ([\varepsilon]_{\mathtt{bast}_{3}},[1]_{\mathtt{bast}_{3}}) \mapsto(1,c,c),\] \[([\varepsilon]_{\mathtt{bast}_{3}},[2]_{\mathtt{bast}_{3}}) \mapsto(c,dc,d), ([\varepsilon]_{\mathtt{bast}_{3}},[3]_{\mathtt{bast}_{3}}) \mapsto(d,d,1).\]
First to show that \(\varphi\) is well-defined, that is, if \(\mathbf{u}_{1}\equiv_{\mathtt{bast}_{\infty}}\mathbf{v}_{1},\mathbf{u}_{2} \equiv_{\mathtt{bast}_{\infty}}\mathbf{v}_{2}\), then
\[(U_{1},U_{2},U_{3})=\varphi([\mathbf{u}_{1}]_{\mathtt{bast}_{3}},[\mathbf{u}_{2} ]_{\mathtt{bast}_{3}})=\varphi([\mathbf{v}_{1}]_{\mathtt{bast}_{3}},[\mathbf{v}_{2} ]_{\mathtt{bast}_{3}})=(V_{1},V_{2},V_{3}).\]
Since \(\mathbf{u}_{1}\equiv_{\mathtt{bast}_{\infty}}\mathbf{v}_{1},\mathbf{u}_{2} \equiv_{\mathtt{bast}_{\infty}}\mathbf{v}_{2}\), it is easy to see that \(\mathtt{ev}(U_{i})=\mathtt{ev}(V_{i})\) for \(i=1,2,3\). It follows from Proposition 2.5 and the definition of \((B,\ ^{\sharp})\) that, for any
\(([{\bf w}_{1}]_{{\tt backt}_{3}},[{\bf w}_{2}]_{{\tt backt}_{3}})\in{\tt backt}_{3} \times{\tt backt}_{3}\), the left and right precedences of each component of \(\varphi([{\bf w}_{1}]_{{\tt backt}_{3}},[{\bf w}_{2}]_{{\tt backt}_{3}})=(W_{1},W _{2},W_{3})\) can be characterized as follows:
\((a\)-\(b,\ell_{1})\in{\sf lpi}(W_{1})\) [resp. \((c\)-\(d,\ell_{1})\in{\sf lpi}(W_{3})\)] \(\Leftrightarrow\) (1-2, \(\ell_{1})\in{\sf lpi}({\bf w}_{1})\) [resp. \({\sf lpi}({\bf w}_{2})\)],
\((a\)-\(b,\ell_{2})\in{\sf lpi}(W_{3})\) [resp. \((c\)-\(d,\ell_{2})\in{\sf lpi}(W_{1})\)] \(\Leftrightarrow\) (2-3, \(\ell_{2})\in{\sf lpi}({\bf w}_{1})\) [resp. \({\sf lpi}({\bf w}_{2})\)],
\((a\)-\(b,\ell_{3})\in{\sf lpi}(W_{2})\) [resp. \((c\)-\(d,\ell_{3})\in{\sf lpi}(W_{2})\)] \(\Leftrightarrow\) (1-3, \(\ell_{3})\in{\sf lpi}({\bf w}_{1})\) [resp. \({\sf lpi}({\bf w}_{2})\)],
where \(\ell_{3}\neq\ell_{1}\) and
\((b\)-\(a,r_{1})\in{\sf rpi}(W_{1})\) [resp. \((d\)-\(c,r_{1})\in{\sf rpi}(W_{3})\)] \(\Leftrightarrow\) (2-1, \(r_{1})\in{\sf rpi}({\bf w}_{1})\) [resp. \({\sf rpi}({\bf w}_{2})\)],
\((b\)-\(a,r_{2})\in{\sf rpi}(W_{3})\) [resp. \((d\)-\(c,r_{2})\in{\sf rpi}(W_{1})\)] \(\Leftrightarrow\) (3-2, \(r_{2})\in{\sf rpi}({\bf w}_{1})\) [resp. \({\sf rpi}({\bf w}_{2})\)],
\((b\)-\(a,r_{3})\in{\sf rpi}(W_{2})\) [resp. \((d\)-\(c,r_{3})\in{\sf rpi}(W_{2})\)] \(\Leftrightarrow\) (3-1, \(r_{3})\in{\sf rpi}({\bf w}_{1})\) [resp. \({\sf rpi}({\bf w}_{2})\)].
where \(r_{3}\neq r_{1}\). Since \({\bf u}_{1}\equiv_{{\tt back}_{\infty}}{\bf v}_{1},{\bf u}_{2}\equiv_{{\tt back }_{\infty}}{\bf v}_{2}\), it is routine to show that \({\sf lpi}(U_{i})={\sf lpi}(V_{i})\) and \({\sf rpi}(U_{i})={\sf rpi}(V_{i})\) for \(i=1,2,3\). Thus \(\varphi\) is well-defined, and so \(\varphi\) is a homomorphism.
Next to show that the homomorphism \(\varphi\) is injective. Suppose that \(([{\bf u}_{1}]_{{\tt back}_{3}},\ [{\bf u}_{2}]_{{\tt back}_{3}})\neq([{\bf v}_{1}]_{{ \tt back}_{3}},[{\bf v}_{2}]_{{\tt back}_{3}})\). Then \({\sf ev}({\bf u}_{1})\neq{\sf ev}({\bf u}_{2})\), or \({\sf ev}({\bf v}_{1})\neq{\sf ev}({\bf v}_{2})\), or \({\sf lpi}({\bf u}_{1})\neq{\sf lpi}({\bf u}_{2})\), or \({\sf rpi}({\bf u}_{1})\neq{\sf rpi}({\bf u}_{2})\), or \({\sf lpi}({\bf u}_{1})\neq{\sf lpi}({\bf u}_{2})\) or \({\sf rpi}({\bf v}_{1})\neq{\sf rpi}({\bf v}_{2})\). Hence, by the definition of \(\varphi\), it is routine to show that \(\varphi([{\bf u}_{1}]_{{\tt back}_{3}},\ [{\bf u}_{2}]_{{\tt back}_{3}})\neq\varphi([{\bf v}_{1}]_{{ \tt back}_{3}},\ [{\bf v}_{2}]_{{\tt back}_{3}})\). Therefore the homomorphism \(\varphi\) is injective.
## 4. The identities satisfied by \(({\tt back}_{n},\ ^{\sharp})\)
In this section, a complete characterization of the word identities satisfied by \(({\tt back}_{n},\ ^{\sharp})\) for each finite \(n\) is given. Clearly, a word identity \({\bf u}\approx{\bf v}\) holds in \(({\tt back}_{1},\ ^{\sharp})\) if and only if \({\sf occ}(x,\overline{{\bf u}})={\sf occ}(x,\overline{{\bf v}})\) for any \(x\in{\sf con}(\overline{{\bf uv}})\).
Let
\[A=\langle\,a,b\,|\,ab=ba\,\rangle=\{a^{m}b^{n}\,|\,m,n\geq 0\}\]
be a monoid. The monoid \(A\) is an involution monoid \((A,\ ^{*})\) under the unary operation \({}^{*}:a^{m}b^{n}\mapsto a^{n}b^{m}\). A word identity \({\bf u}\approx{\bf v}\) is _balanced_ if \({\sf occ}(x,{\bf u})={\sf occ}(x,{\bf v})\) for any \(x\in{\sf con}({\bf uv})\).
**Lemma 4.1**.: _A word identity \({\bf u}\approx{\bf v}\) holds in \((A,\ ^{*})\) if and only if \({\bf u}\approx{\bf v}\) is balanced._
Proof.: Suppose that \({\bf u}\approx{\bf v}\) is a word identity satisfied by \((A,\ ^{*})\) such that either \({\sf occ}(x,{\bf u})\neq{\sf occ}(x,{\bf v})\) or \({\sf occ}(x^{*},{\bf u})\neq{\sf occ}(x^{*},{\bf v})\) for some \(x\in{\mathcal{X}}\). Let \(\varphi\) be a homomorphism from \(({\mathcal{X}}\cup{\mathcal{X}}^{*})^{+}\) to \((A,\ ^{*})\) that maps \(x\) to \(a\) and any other variable to \(1\). Then \(\varphi({\bf u})=a^{{\sf occ}(x,{\bf u})}b^{{\sf occ}(x^{*},{\bf u})}\neq a^{{ \sf occ}(x,{\bf v})}b^{{\sf occ}(x^{*},{\bf v})}=\varphi({\bf v})\), a contradiction.
Conversely, if \({\bf u}\approx{\bf v}\) is balanced, then \(\varphi({\bf u})=\varphi({\mathcal{X}}\cup{\mathcal{X}}^{*})^{+}\) to \((A,\ ^{*})\) since \((A,\ ^{*})\) is commutative. Therefore \({\bf u}\approx{\bf v}\) holds in \((A,\ ^{*})\).
If \({\bf u}\) and \({\bf v}\) are words such that \({\sf occ}(x,{\bf u})={\sf occ}(x,{\bf v})\) for each variable \(x\), then we say that \({\bf v}\) is a _permutation_ of \({\bf u}\).
**Theorem 4.2**.: _A word identity \({\bf u}\approx{\bf v}\) holds in \(({\tt back}_{2},\ ^{\sharp})\) if and only if \({\bf u}\approx{\bf v}\) is balanced and satisfies that for any \(x,y\in{\sf con}({\bf u})\) with \(x,x^{*}\neq y\),_
1. _if_ \({\bf u}[x,y]\in x^{\alpha}x^{*}\{x,x^{*},y,y^{*}\}^{\times}\) _for some_ \(\alpha\geq 1\)_, then_ \({\bf v}[x,y]\in x^{\alpha}x^{*}\{x,x^{*},y,y^{*}\}^{\times}\)_,_ 2. _if_ \({\bf u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x^{\alpha}\) _for some_ \(\alpha\geq 1\)_, then_ \({\bf v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x^{\alpha}\)_._
2. _if_ \({\bf u}[x,y]\in y^{\alpha}x\{x,x^{*},y,y,y^{*}\}^{\times}\) _for some_ \(\alpha\geq 1\)_, then_ \({\bf v}[x,y]\in y^{\alpha}x\{x,x^{*},y,y^{*}\}^{\times}\)_,_ 3. _if_ \({\bf u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}xy^{\alpha}\) _for some_ \(\alpha\geq 1\)_, then_ \({\bf v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}xy^{\alpha}\)
* _if_ \(\mathbf{u}[x,y]\in x\{x,x^{*},y,y^{*}\}^{\times}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x^{*},y^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}x\{x,x^{*},y,y^{*}\}^{\times}\) _or_ \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_,_
* _if_ \(\mathbf{u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x\mathbf{a}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x^{*},y^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x\mathbf{a}^{\prime}\) _or_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}y\mathbf{a}^{\prime}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_._
Proof.: Let \(\mathbf{u}\approx\mathbf{v}\) be any word identity satisfied by \((\mathsf{baxt}_{2},\ ^{\sharp})\). It is routine to verify that \((A,\ ^{*})\) is a homomorphic image of \((\mathsf{baxt}_{2},\ ^{\sharp})\) under the map given by \([1]_{\mathsf{baxt}_{2}}\mapsto a\) and \([2]_{\mathsf{baxt}_{2}}\mapsto b\). Then it follows from Lemma 4.1 that \(\mathbf{u}\approx\mathbf{v}\) is balanced. Now to show that (Ia)-(IIIa) hold, and then (Ib)-(IIIb) hold by symmetry. Suppose that \(\mathbf{u}\) starts with \(x\) but \(\mathbf{v}\) starts with \(y\neq x\). If \(y=x^{*}\), then letting \(\phi_{1}\) be the substitution such that \(x\mapsto[2]_{\mathsf{baxt}_{2}},z\mapsto[\varepsilon]_{\mathsf{baxt}_{2}}\) for any \(z\neq x,x^{*}\), we obtain
\[[2]_{\mathsf{baxt}_{2}}\cdot[\mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{1}(\mathbf{ u})\neq\phi_{1}(\mathbf{v})=[1]_{\mathsf{baxt}_{2}}\cdot[\mathbf{t}]_{\mathsf{baxt }_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction; if \(y\neq x^{*}\), then letting \(\phi_{2}\) be the substitution such that \(x\mapsto[2]_{\mathsf{baxt}_{2}},y\mapsto[1]_{\mathsf{baxt}_{2}},z\mapsto[ \varepsilon]_{\mathsf{baxt}_{2}}\) for any \(z\neq x,y\), we obtain
\[[2]_{\mathsf{baxt}_{2}}\cdot[\mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{2}(\mathbf{ u})\neq\phi_{2}(\mathbf{v})=[1]_{\mathsf{baxt}_{2}}\cdot[\mathbf{t}]_{\mathsf{baxt }_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction. Thus \(\mathbf{u}\) and \(\mathbf{v}\) start with the same variable.
If \(\mathbf{u}[x,y]\in x^{\alpha}x^{*}\{x,x^{*},y,y^{*}\}^{\times}\), then \(\mathbf{v}[x,y]\in x^{\beta}z\{x,x^{*},y,y^{*}\}^{\times}\) with \(z\neq x\). Suppose that \(z\neq x^{*}\). Let \(\phi_{3}\) be the substitution such that \(x\mapsto[2]_{\mathsf{baxt}_{2}},z\mapsto[1^{\alpha}2]_{\mathsf{baxt}_{2}}\). Then
\[[1^{\alpha}2]_{\mathsf{baxt}_{2}}\cdot[\mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{3} (\mathbf{u}[x,y])\neq\phi_{3}(\mathbf{v}[x,y])=[1^{\alpha+\beta}2]_{\mathsf{baxt }_{2}}\cdot[\mathbf{t}]_{\mathsf{baxt}_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction. Thus \(z=x^{*}\). Suppose that \(\alpha\neq\beta\). Let \(\phi_{4}\) be the substitution such that \(x\mapsto[2]_{\mathsf{baxt}_{2}}\). Then
\[[1^{\alpha}2]_{\mathsf{baxt}_{2}}\cdot[\mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{4 }(\mathbf{u}[x,y])\neq\phi_{4}(\mathbf{v}[x,y])=[1^{\beta}2]_{\mathsf{baxt}_{2} }\cdot[\mathbf{t}]_{\mathsf{baxt}_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction. Thus \(\alpha=\beta\), and so \(\mathbf{v}[x,y]\in x^{\alpha}x^{*}\{x,x^{*},y,y^{*}\}^{\times}\). Therefore (Ia) holds.
If \(\mathbf{u}[x,y]\in y^{\alpha}x\{x,x^{*},y,y^{*}\}^{\times}\), then \(\mathbf{v}[x,y]\in y^{\beta}z\{x,x^{*},y,y^{*}\}^{\times}\) with \(z\neq y\). Clearly \(z\neq y^{*}\) by condition (Ia). If \(z=x^{*}\), then \(\mathbf{u}[x]\) starts with \(x\) but \(\mathbf{v}[x]\) starts with \(x^{*}\), which is impossible. Therefore \(z=x\). Suppose that \(\alpha\neq\beta\). Let \(\phi_{5}\) be the substitution such that \(x\mapsto[2]_{\mathsf{baxt}_{2}},y\mapsto[1]_{\mathsf{baxt}_{2}}\). Then
\[[1^{\alpha}2]_{\mathsf{baxt}_{2}}\cdot[\mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{5} (\mathbf{u}[x,y])\neq\phi_{5}(\mathbf{v}[x,y])=[1^{\beta}2]_{\mathsf{baxt}_{2} }\cdot[\mathbf{t}]_{\mathsf{baxt}_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction. Thus \(\alpha=\beta\), and so \(\mathbf{v}[x,y]\in y^{\alpha}x\{x,x^{*},y,y^{*}\}^{\times}\). Therefore (IIa) holds.
If \(\mathbf{u}[x,y]\in\mathbf{a}x\{x,x^{*},y,y^{*}\}^{\times}\) with \(\mathsf{con}(\mathbf{a})=\{x^{*},y^{*}\}\), then by condition (IIa) \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\) or \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}x\{x,x^{*},y,y^{*}\}^{\times}\) with \(\mathsf{con}(\mathbf{a}^{\prime})=\{x^{*},y^{*}\}\). Suppose that \(|\mathbf{a}^{\prime}|\neq|\mathbf{a}|\). Let \(\phi_{6}\) be the substitution from \(\mathcal{X}\) to \(\mathsf{baxt}_{2}\) such that \(x\mapsto[2]_{\mathsf{baxt}_{2}},y\mapsto[2]_{\mathsf{baxt}_{2}}\). Then
\[[1^{|\mathbf{a}|}2]_{\mathsf{baxt}_{2}}\cdot[\mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{6 }(\mathbf{u}[x,y])\neq\phi_{6}(\mathbf{v}[x,y])=[1^{|\mathbf{a}^{\prime}|}2]_{ \mathsf{baxt}_{2}}\cdot[\mathbf{t}]_{\mathsf{baxt}_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction. Therefore \(|\mathbf{a}^{\prime}|=|\mathbf{a}|\). Suppose that \(\mathsf{occ}(x^{*},\mathbf{a}^{\prime})\neq\mathsf{occ}(x^{*},\mathbf{a})\). Let \(\phi_{7}\) be the substitution from \(\mathcal{X}\) to \(\mathsf{baxt}_{2}\) such that \(x\mapsto[22]_{\mathsf{baxt}_{2}},y\mapsto[2]_{\mathsf{baxt}_{2}}\). Then
\[[1^{|\mathbf{a}|+\mathsf{occ}(x^{*},\mathbf{u})}2]_{\mathsf{baxt}_{2}}\cdot[ \mathbf{s}]_{\mathsf{baxt}_{2}}=\phi_{7}(\mathbf{u}[x,y])\neq\phi_{7}(\mathbf{v} [x,y])=[1^{|\mathbf{a}^{\prime}|+\mathsf{occ}(x^{*},\mathbf{v})}2]_{\mathsf{baxt }_{2}}\cdot[\mathbf{t}]_{\mathsf{baxt}_{2}}\]
where \(\mathbf{s},\mathbf{t}\in\mathcal{A}_{2}^{\star}\), a contradiction. Therefore \(\mathsf{occ}(x^{*},\mathbf{a}^{\prime})=\mathsf{occ}(x^{*},\mathbf{a}),\mathsf{occ}(y^{*}, \mathbf{a}^{\prime})=
is balanced. If \(\mathsf{sup}(\phi(\mathbf{u}))=\{1\}\) or \(\{2\}\), then \(\phi(\mathbf{u})=\phi(\mathbf{v})\). If \(\mathsf{sup}(\phi(\mathbf{u}))=\{1,2\}\), to show \(\phi(\mathbf{u})=\phi(\mathbf{v})\), it suffices to show \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\) and \(\mathsf{lpi}(\phi(\mathbf{u}))=\mathsf{lpi}(\phi(\mathbf{v}))\). By symmetry, we only need to show that \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\). If \(\phi(\mathbf{u})\) does not have a 2-1 right precedence, then \(\phi(\mathbf{v})\) does not have a 2-1 right precedence by conditions (I) and (II). If \(\phi(\mathbf{u})\) has a 2-1 right precedence of index \(r\), then we may assume that \(\phi(x)\neq[\varepsilon]_{\mathsf{bast}_{2}}\) for any \(x\in\mathsf{con}(\mathbf{u})\) by Remark 2.4, whence \(\mathbf{u}\) can be written into the form \(\mathbf{u}=\mathbf{u}_{1}z\mathbf{u}_{2}\) for some possibly empty words \(\mathbf{u}_{1},\mathbf{u}_{2}\) satisfying \(\mathsf{sup}(\phi(\mathbf{u}_{2}))=\{2\}\) and \(\mathsf{sup}(\phi(z))=\{1,2\}\) or \(\{1\}\). Note that \(z\not\in\mathsf{con}(\mathbf{u}_{2})\) and \(z^{*}\in\mathsf{con}(\mathbf{u}_{2})\) if \(\mathsf{sup}(\phi(z))=\{1\}\) and that for any \(x\in\mathsf{con}(\mathbf{u}_{2}),x^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\). As such, \(\phi(z\mathbf{u}_{2})\) has the same 2-1 right precedence of index \(r\) as \(\phi(\mathbf{u})\). Now we consider the form of \(\mathbf{v}\). There are two cases.
**Case 1.**\(z^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\). It follows from (IIb) that \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) where \(\mathbf{v}_{2}\) is a permutation of \(\mathbf{u}_{2}\). This implies that \(\mathsf{sup}(\phi(\mathbf{v}_{2}))=\{2\}\), hence \(\phi(z\mathbf{v}_{2})\) also has a 2-1 right precedence of index \(r\). Since \(\phi(z\mathbf{v}_{2})\) has the same right precedence as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a 2-1 right precedence of index \(r\). Therefore \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\).
**Case 2.**\(z^{*}\in\mathsf{con}(\mathbf{u}_{2})\). Let \(\mathsf{con}(\mathbf{u}_{2})=\{z^{*},x_{1},x_{2},\ldots,x_{n}\}\). Without loss of generality, we may assume that \(\mathsf{fpi}(\mathbf{u}_{2})=z^{*}x_{1}x_{2}\cdots x_{n}\). Note that \(x_{1}^{*},x_{2}^{*},\ldots,x_{n}^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\). Then \(\mathbf{v}=\mathbf{v}_{1}yz^{*}\mathbf{x}_{1}x_{1}\mathbf{x}_{2}x_{2}\cdots \mathbf{x}_{n}x_{n}\) by (IIb) where \(\mathbf{y}\in\{x_{1},x_{1}^{*},\ldots,x_{n},x_{n}^{*},z,z^{*}\}^{+}\) and \(\mathbf{x}_{i}\in\{x_{i},x_{i+1},\ldots,x_{n}\}^{+}\) for \(i=1,2,\ldots,n\). Therefore either \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) or \(\mathbf{v}=\mathbf{v}_{1}x_{i}^{*}\mathbf{v}_{2}\) with \(\mathsf{con}(\mathbf{v}_{2})=\mathsf{con}(\mathbf{u}_{2})\).
**2.1.**\(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\). It follows from (IIb) that \(\mathbf{v}_{2}\) is a permutation of \(\mathbf{u}_{2}\). This implies that \(\mathsf{sup}(\phi(\mathbf{v}_{2}))=\{2\}\), hence \(\phi(z\mathbf{v}_{2})\) also has a 2-1 right precedence of index \(r\). Since \(\phi(z\mathbf{v}_{2})\) has the same right precedence as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a 2-1 right precedence of index \(r\). Therefore \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\).
**2.2.**\(\mathbf{v}=\mathbf{v}_{1}x_{i}^{*}\mathbf{v}_{2}\). It follows from (IIIb) that \(\mathsf{occ}(z^{*},\mathbf{u}_{2})=\mathsf{occ}(z^{*},\mathbf{v}_{2})\) and \(\mathsf{occ}(x_{i},\mathbf{u}_{2})=\mathsf{occ}(x_{i},\mathbf{v}_{2})\). Suppose that there exists some \(x_{j}\in\mathsf{con}(\mathbf{u}_{2})\) such that \(\mathsf{occ}(x_{j},\mathbf{u}_{2})\neq\mathsf{occ}(x_{j},\mathbf{v}_{2})\). Then \(x_{j}\neq x_{i},z^{*}\). If \(x_{j}^{*}\not\in\mathsf{con}(\mathbf{u}_{1})\), then it follows from (IIIb) that \(\overrightarrow{\mathsf{occ}}_{x_{i}^{*}}(x_{j},\mathbf{u})=\overrightarrow{ \mathsf{occ}}_{x_{i}^{*}}(x_{j},\mathbf{v})\) and \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{u})=\overrightarrow{\mathsf{occ}} _{z}(x_{j},\mathbf{v})\). It follows from the forms of \(\mathbf{u}\) and \(\mathbf{v}\) that \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{v})-\overrightarrow{\mathsf{occ}} _{x_{i}^{*}}(x_{j},\mathbf{v})\geq 0\) and \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{u})-\overrightarrow{\mathsf{occ}} _{x_{i}^{*}}(x_{j},\mathbf{u})\leq 0\). Suppose that \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{v})-\overrightarrow{\mathsf{occ}} _{x_{i}^{*}}(x_{j},\mathbf{v})>0\). Then \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{u})-\overrightarrow{\mathsf{occ}} _{x_{i}^{*}}(x_{j},\mathbf{u})>0\), which is impossible. If \(x_{j}^{*}\in\mathsf{con}(\mathbf{u}_{1})\), then there are four cases about the order of \(\llcorner_{\infty}x_{j}^{*},\llcorner_{\infty}x_{i}^{*},\llcorner_{\infty}z\) in \(\mathbf{u},\mathbf{v}\): \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}x_{i}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{i}^{*}\llcorner_{\infty}z\), \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}x_{i}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\), \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{i}^{*}\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\), and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\) and \(\llcorner_{\infty}x_{j}^{*}\llcorner_{\infty}z\), then it follows from (IIIb) that \(\overrightarrow{\mathsf{occ}}_{x_{i}^{*}}(x_{j},\mathbf{u})=\overrightarrow{ \mathsf{occ}}_{x_{i}^{*}}(x_{j},\mathbf{v})\) and \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{u})=\overrightarrow{\mathsf{occ}} _{z}(x_{j},\mathbf{v})\). By the forms of \(\mathbf{u}\) and \(\mathbf{v}\), \(\overrightarrow{\mathsf{occ}}_{z}(x_{j},\mathbf{v})-\overrightarrow{\mathsf
\(\mathbf{u}\) and \(\mathbf{v}\), \(\widehat{\mathsf{oc}}\hat{\mathfrak{c}}_{x_{j}^{*}}(x_{j},\mathbf{v})-\widehat{ \mathsf{oc}}\hat{\mathfrak{c}}_{x_{j}^{*}}(x_{j},\mathbf{v})\geq 0\), \(\widehat{\mathsf{oc}}\hat{\mathfrak{c}}_{z}(x_{j},\mathbf{u})-\widehat{ \mathsf{oc}}\hat{\mathfrak{c}}_{x_{j}^{*}}(x_{j},\mathbf{u})\leq 0\). Suppose \(\widehat{\mathsf{oc}}\hat{\mathfrak{c}}_{x_{j}^{*}}(x_{j},\mathbf{v})-\widehat {\mathsf{oc}}\hat{\mathfrak{c}}_{x_{j}^{*}}(x_{j},\mathbf{v})>0\). Then \(\widehat{\mathsf{oc}}\hat{\mathfrak{c}}_{z}(x_{j},\mathbf{u})-\widehat{ \mathsf{oc}}\hat{\mathfrak{c}}_{x_{j}^{*}}(x_{j},\mathbf{u})>0\), which is impossible.
Thus \(\mathbf{v}=\mathbf{v}_{1}x_{i}^{*}\mathbf{v}_{2}\) where \(\mathbf{v}_{2}\) is a permutation of \(\mathbf{u}_{2}\). This implies that \(\phi(\mathbf{v}_{2})\) has support \(\{2\}\), hence \(\phi(x_{i}^{*}\mathbf{v}_{2})\) also has a 2-1 right precedence of index \(r\). Since \(\phi(x_{i}^{*}\mathbf{v}_{2})\) has the same right precedence as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a 2-1 right precedence of index \(r\). Therefore \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\).
**Theorem 4.3**.: _A word identity \(\mathbf{u}\approx\mathbf{v}\) holds in \((\mathsf{bax}_{3},\ ^{\sharp})\) if and only if \(\mathbf{u}\approx\mathbf{v}\) is balanced and satisfies that for any \(x,y\in\mathsf{con}(\mathbf{u})\) with \(x,x^{*}\neq y\),_
1. _if_ \(\mathbf{u}[x,y]\in x^{\alpha}x^{*}\{x,x^{*},y,y^{*}\}^{\times}\) _for some_ \(\alpha\geq 1\)_, then_ \(\mathbf{v}[x,y]\in x^{\alpha}x^{*}\{x,x^{*},y,y^{*}\}^{\times}\)_,_ 2. _if_ \(\mathbf{u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x^{\alpha}\) _for some_ \(\alpha\geq 1\)_, then_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x^{\alpha}\)_._
2. _if_ \(\mathbf{u}[x,y]\in y^{\alpha}x\{x,x^{*},y,y^{*}\}^{\times}\) _for some_ \(\alpha\geq 1\)_, then_ \(\mathbf{v}[x,y]\in y^{\alpha}x\{x,x^{*},y,y^{*}\}^{\times}\)_,_ 2. _if_ \(\mathbf{u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}xy^{\alpha}\) _for some_ \(\alpha\geq 1\)_, then_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}xy^{\alpha}\)_._
3. _if_ \(\mathbf{u}[x,y]\in\mathbf{a}x\{x,x^{*},y,y^{*}\}^{\times}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x^{*},y^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}x\{x,x^{*},y,y^{*}\}^{\times}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_,_ 2. _if_ \(\mathbf{u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x\mathbf{a}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x^{*},y^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}x\mathbf{a}^{\prime}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_._
4. \(\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x,\mathbf{u})+\widehat{\mathsf{oc}} \mathfrak{c}_{y}(x^{*},\mathbf{u})=\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x, \mathbf{v})+\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x^{*},\mathbf{v})\)_,_ 5. _if_ \(\mathbf{u}[x,y]\in\mathbf{a}y\{x,x^{*},y,y^{*}\}^{\times}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x,x^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_,_ 6. _if_ \(\mathbf{u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}y\mathbf{a}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x,x^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}y\mathbf{a}^{\prime}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_._
Proof.: Suppose that \(\mathbf{u}\approx\mathbf{v}\) is a word identity satisfied by \((\mathsf{bax}_{3},\ ^{\sharp})\). It is routine to verify that the involution submonoid of \((\mathsf{bax}_{3},\ ^{\sharp})\) generated by \([1]_{\mathsf{bax}_{3}}\) and \([3]_{\mathsf{bax}_{3}}\) is isomorphic to \((\mathsf{bax}_{2},\ ^{\sharp})\). It follows from Theorem 4.2 that \(\mathbf{u}\approx\mathbf{v}\) is balanced and satisfies conditions (I)-(II). Now to show that (IIIa)--(Va) hold, and then (IIIb)--(Vb) hold by symmetry.
Suppose that \(\mathbf{u}[x,y]\in\mathbf{a}x\{x,x^{*},y,y^{*}\}^{\times}\) with \(\mathsf{con}(\mathbf{a})=\{x^{*},y^{*}\}\). Then it follows from condition (IIIa) of Theorem 4.2 that \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}x\{x,x^{*},y,y^{*}\}^{\times}\) or \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\) where \(\mathbf{a}^{\prime}\) is a permutation of \(\mathbf{a}\). Suppose that \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\). Let \(\phi_{1}\) be a substitution such that \(x\mapsto[3]_{\mathsf{bax}_{3}},y\mapsto[2]_{\mathsf{bax}_{3}}\) and any other variable to \([\varepsilon]_{\mathsf{bax}_{3}}\). Then \(\phi_{1}(\mathbf{u})\) have a 2-3 left precedence of index \(\mathsf{occ}(y^{*},\mathbf{a})\) but the index of 2-3 left precedence of \(\phi_{1}(\mathbf{v})\) is greater than \(\mathsf{occ}(y^{*},\mathbf{a})\), a contradiction. Therefore \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}x\{x,x^{*},y,y^{*}\}^{\times}\), and so (IIIa) holds.
Suppose that \(\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x,\mathbf{u})+\widehat{\mathsf{oc}} \mathfrak{c}_{y}(x^{*},\mathbf{u})\neq\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x, \mathbf{v})+\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x^{*},\mathbf{v})\) for some \(x,x^{*}\neq y\in\mathsf{con}(\mathbf{u})\). Let \(\phi_{2}\) be a substitution such that \(x\mapsto[2]_{\mathsf{bax}_{3}},y\mapsto[3]_{\mathsf{bax}_{3}}\) and any other variable to \([\varepsilon]_{\mathsf{bax}_{3}}\). Then \(\phi_{2}(\mathbf{u})\) has a 2-3 left precedence of index \(\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x,\mathbf{u})+\widehat{\mathsf{oc}}\mathfrak{c}_{y} (x^{*},\mathbf{u})\) but \(\phi_{2}(\mathbf{v})\) has a 2-3 left precedence of index \(\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x,\mathbf{v})+\widehat{\mathsf{oc}}\mathfrak{c}_{y} (x^{*},\mathbf{v})\), a contradiction. Therefore \(\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x,\mathbf{u})+\widehat{\mathsf{oc}} \mathfrak{c}_{y}(x^{*},\mathbf{u})=\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x, \mathbf{v})+\widehat{\mathsf{oc}}\mathfrak{c}_{y}(x^{*},\mathbf{v})\), and so (IVa) holds.
If \(\mathbf{u}[x,y]\in\mathbf{a}
Then \(\phi_{3}(\mathbf{u})\) has a 1-2 left precedence of index \(\mathsf{occ}(x,\mathbf{a})\) but \(\phi_{3}(\mathbf{v})\) has a 1-2 left precedence of index \(\mathsf{occ}(x,\mathbf{a}^{\prime})\), a contradiction. Therefore \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\) where \(\mathbf{a}^{\prime}\) is a permutation of \(\mathbf{a}\), and so (Va) holds.
Conversely, let \(\mathbf{u}\approx\mathbf{v}\) be any balanced word identity satisfying (I)-(V) and \(\phi\) be any substitution from \(\mathcal{X}\) to \(\mathsf{bax}_{3}\). Note that \(\mathsf{ev}(\phi(\mathbf{u}))=\mathsf{ev}(\phi(\mathbf{v}))\) since \(\mathbf{u}\approx\mathbf{v}\) is balanced. Now, to show \(\phi(\mathbf{u})=\phi(\mathbf{v})\), it suffices to show \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\) and \(\mathsf{lpi}(\phi(\mathbf{u}))=\mathsf{lpi}(\phi(\mathbf{v}))\). By symmetry, we only need to show that \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\). If \(\phi(\mathbf{u})\) does not have a 2-1 [resp. 3-2, 3-1] right precedence, then \(\phi(\mathbf{v})\) does not have a 2-1 [resp. 3-2, 3-1] right precedence by (I) and (II). In the following, we show that if \(\phi(\mathbf{u})\) has a 2-1 [resp. 3-2, 3-1] right precedence of index \(r\), then \(\phi(\mathbf{v})\) has a 2-1 [resp. 3-2, 3-1] right precedence of index \(r\).
If \(\phi(\mathbf{u})\) has a 3-1 right precedence of index \(r\), then we may assume that \(\phi(x)\neq[\varepsilon]_{\mathsf{bax}_{3}}\) for any \(x\in\mathsf{con}(\mathbf{u})\) by Remark 2.4, whence \(\mathbf{u}\) can be written into the form \(\mathbf{u}=\mathbf{u}_{1}z\mathbf{u}_{2}\) for some possibly empty words \(\mathbf{u}_{1},\mathbf{u}_{2}\) satisfying \(\mathsf{sup}(\phi(\mathbf{u}_{2}))=\{3\}\) and \(\mathsf{sup}(\phi(z))=\{1\}\) or \(\{1,3\}\). Note that \(z\not\in\mathsf{con}(\mathbf{u}_{2})\) and \(z^{*}\) may occur in \(\mathbf{u}_{2}\) if \(\mathsf{sup}(\phi(z))=\{1\}\) and that for any \(x\in\mathsf{con}(\mathbf{u}_{2}),x^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\). As such, \(\phi(z\mathbf{u}_{2})\) has the same 3-1 right precedence of index \(r\) as \(\phi(\mathbf{u})\). Now we consider the form of \(\mathbf{v}\). If \(z^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\), then it follows from (IIb) that \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) where \(\mathbf{v}_{2}\) is a permutation of \(\mathbf{u}_{2}\). This implies that \(\mathsf{sup}(\phi(\mathbf{v}_{2}))=\{3\}\), hence \(\phi(z\mathbf{v}_{2})\) also has a 3-1 right precedence of index \(r\). Since \(\phi(z\mathbf{v}_{2})\) has the same 3-1 right precedence of index \(r\) as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a 3-1 right precedence of index \(r\). If \(z^{*}\in\mathsf{con}(\mathbf{u}_{2})\). Let \(\mathsf{con}(\mathbf{u}_{2})=\{z^{*},x_{1},x_{2},\ldots,x_{n}\}\). Without loss of generality, we may assume that \(\mathsf{fp}(\mathbf{u}_{2})=z^{*}x_{1}x_{2}\cdots x_{n}\). Note that \(x_{1}^{*},x_{2}^{*},\ldots,x_{n}^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\). Then it follows from (IIb) that \(\mathbf{v}=\mathbf{v}_{1}yz^{*}\mathbf{x}_{1}x_{1}\)\(\mathbf{x}_{2}x_{2}\cdots\mathbf{x}_{n}x_{n}\) where \(\mathbf{y}\in\{x_{1},x_{1}^{*},\ldots,x_{n},x_{n}^{*},z,z^{*}\}^{+}\) and \(\mathbf{x}_{i}\in\{x_{i},x_{i+1},\ldots,x_{n}\}^{+}\) for \(i=1,2,\ldots,n\). Therefore \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) where \(\mathbf{v}_{2}\) is a permutation of \(\mathbf{u}_{2}\) by (IIIb). This implies that \(\phi(\mathbf{v}_{2})\) has support \(\{3\}\), hence \(\phi(z\mathbf{v}_{2})\) also has a 3-1 right precedence of index \(r\). Since \(\phi(z\mathbf{v}_{2})\) has the 3-1 right precedence of index \(r\) as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a 3-1 right precedence of index \(r\).
If \(\phi(\mathbf{u})\) has a 2-1 right precedence of index \(r\), then we may assume that \(\phi(x)\neq[\varepsilon]_{\mathsf{bax}_{3}}\) for any \(x\in\mathsf{con}(\mathbf{u})\) by Remark 2.4, whence \(\mathbf{u}\) can be written into the form \(\mathbf{u}=\mathbf{u}_{1}z\mathbf{u}_{2}\) for some possibly empty words \(\mathbf{u}_{1},\mathbf{u}_{2}\) satisfying \(\mathsf{sup}(\phi(\mathbf{u}_{2}))\subseteq\{2,3\}\) and \(\mathsf{sup}(\phi(z))=\{1\}\), \(\{1,2\}\), \(\{1,3\}\) or \(\{1,2,3\}\). Note that \(z\not\in\mathsf{con}(\mathbf{u}_{2})\) and \(z^{*}\) may occur in \(\mathbf{u}_{2}\) if \(\mathsf{sup}(\phi(z))=\{1\}\) or \(\{1,2\}\) and that for any \(x\in\mathsf{con}(\mathbf{u}_{2})\), \(x^{*}\) may occur in \(\mathsf{con}(\mathbf{u}_{2})\) if \(\mathsf{sup}(\phi(x))=\{2\}\). As such, \(\phi(z\mathbf{u}_{2})\) has the same 2-1 right precedence of index \(r\) as \(\phi(\mathbf{u})\). It follows from (IIb)-(Vb) that \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) satisfying that \(z\not\in\mathsf{con}(\mathbf{v}_{2})\), \(\mathsf{con}(\mathbf{u}_{2})=\mathsf{con}(\mathbf{v}_{2})\), and \(\mathsf{occ}(z^{*},\mathbf{u}_{2})=\mathsf{occ}(z^{*},\mathbf{v}_{2})\), and \(\mathsf{occ}(x,\mathbf{u}_{2})=\mathsf{occ}(x,\mathbf{v}_{2})\) when \(x\in\mathsf{con}(\mathbf{u}_{2}),x^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\), and \(\mathsf{occ}(x,\mathbf{u}_{2})+\mathsf{occ}(x^{*},\mathbf{u}_{2})=\mathsf{occ}(x, \mathbf{v}_{2})+\mathsf{occ}(x^{*},\mathbf{v}_{2})\) when \(x,x^{*}\in\mathsf{con}(\mathbf{u}_{2})\). This implies that \(\phi(z\mathbf{v}_{2})\) also has a 2-1 right precedence of index \(r\). Since \(\phi(z\mathbf{v}_{2})\) has the same 2-1 right precedence of index \(r\) as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a 2-1 right precedence of index \(r\).
If \(\phi(\mathbf{u})\) has a 3-2 right precedence of index \(r\), then we may assume that \(\phi(x)\neq[\varepsilon]_{\mathsf{bax}_{3}}\) for any \(x\in\mathsf{con}(\mathbf{u})\) by Remark 2.4, whence \(\mathbf{u}\) can be written into the form \(\mathbf{u}=\mathbf{u}_{1}z\mathbf{u}_{2}\) satisfying \(\mathsf{sup}(\phi(\mathbf{u}_{2}))\subseteq\{1,3\}\) and \(\mathsf{sup}(\phi(z))=\{2\}\), \(\{1,2\}\), \(\{2,3\}\) or \(\{1,2,3\}\). Note that \(z,z^{*}\) cannot occur in \(\mathbf{u}_{2}\) and that for any \(x\in\mathsf{con}(\mathbf{u}_{2})\), \(x^{*}\) may occur in \(\mathsf{con}(\mathbf{u}_{2})\). As such, \(\phi(z\mathbf{u}_{2})\) has the same 3-2 right precedence of index \(r\) as \(\phi(\mathbf{u})\). It follows from (IIb)-(Vb) that \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) such that \(z\not\in\mathsf{con}(\mathbf{v}_{2})\), \(\mathsf{con}(\mathbf{u}_{2})=\mathsf{con}(\mathbf{v}_{2})\), \(\mathsf{occ}(x,\mathbf{u}_{2})=\mathsf
**Theorem 4.4**.: _A word identity \(\mathbf{u}\approx\mathbf{v}\) holds in \((\underline{\mathsf{bact}}_{4},\ ^{\sharp})\) if and only if \(\mathbf{u}\approx\mathbf{v}\) is balanced and \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v}),\overline{\mathsf{oc}} \underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{\mathsf{oc}}\underline{ \mathsf{c}}_{x}(y,\mathbf{v})\) for any \(x,y\in\mathsf{con}(\mathbf{u})\)._
Proof.: Suppose that \(\mathbf{u}\approx\mathbf{v}\) is a word identity satisfied by \((\underline{\mathsf{bact}}_{4},\ ^{\sharp})\). It is routine to verify that the involution submonoid of \((\underline{\mathsf{bact}}_{4},\ ^{\sharp})\) generated by \([1]_{\underline{\mathsf{bact}}_{4}}\) and \([4]_{\underline{\mathsf{bact}}_{3}}\) is isomorphic to \((\underline{\mathsf{bact}}_{2},\ ^{\sharp})\). Then it follows from Theorem 4.2 that \(\mathbf{u}\approx\mathbf{v}\) is balanced. Suppose that \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})\neq\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\). Let \(\phi\) be a substitution such that \(x\mapsto[1]_{\underline{\mathsf{bact}}_{4}},y\mapsto[2]_{\underline{\mathsf{ bact}}_{4}}\) and any other variable to \([\varepsilon]_{\underline{\mathsf{bact}}_{4}}\). Then \(\phi(\mathbf{u})\) has a 2-1 right precedence of index \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})\) but \(\phi(\mathbf{v})\) has a 2-1 left precedence of index \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\), a contradiction. Therefore \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\) for any \(x,y\in\mathsf{con}(\mathbf{u})\). Symmetrically, \(\overleftarrow{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})= \overleftarrow{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\) for any \(x,y\in\mathsf{con}(\mathbf{u})\).
Conversely, let \(\mathbf{u}\approx\mathbf{v}\) be any balanced word identity satisfying \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v}),\overleftarrow{\mathsf{ cc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{\mathsf{oc}} \underline{\mathsf{c}}_{x}(y,\mathbf{v})\) for any \(x,y\in\mathsf{con}(\mathbf{u})\) and \(\phi\) be any substitution from \(\mathcal{X}\) to \(\underline{\mathsf{bact}}_{4}\). Note that \(\mathsf{ev}(\phi(\mathbf{u}))=\mathsf{ev}(\phi(\mathbf{v}))\) since \(\mathbf{u}\approx\mathbf{v}\) is balanced. Now, to show \(\phi(\mathbf{u})=\phi(\mathbf{v})\), it suffices to show that \(\mathsf{rpi}(\phi(\mathbf{u}))=\mathsf{rpi}(\phi(\mathbf{v}))\), and then \(\mathsf{lpi}(\phi(\mathbf{u}))=\mathsf{lpi}(\phi(\mathbf{v}))\) by symmetry. If \(\phi(\mathbf{u})\) does not have a \(j\)-\(i\) right precedence with \(i<j\), then \(\phi(\mathbf{v})\) does not have a \(j\)-\(i\) right precedence by \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\). If \(\phi(\mathbf{u})\) has a \(j\)-\(i\) right precedence of index \(r\), then \(\mathbf{u}\) can be written into the form \(\mathbf{u}=\mathbf{u}_{1}z\mathbf{u}_{2}\) satisfying \(i\not\in\mathsf{sup}(\phi(\mathbf{u}_{2}))\) and \(i\in\mathsf{sup}(\phi(z))\). Note that \(z\) cannot occur in \(\mathbf{u}_{2}\). As such, \(\phi(z\mathbf{u}_{2})\) has the same \(j\)-\(i\) right precedence of index \(r\) as \(\phi(\mathbf{u})\). It follows from \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\) that \(\mathbf{v}=\mathbf{v}_{1}z\mathbf{v}_{2}\) such that \(z\not\in\mathsf{con}(\mathbf{v}_{2})\) and \(\mathbf{v}_{2}\) is a permutation of \(\mathbf{u}_{2}\). This implies that \(\phi(z\mathbf{v}_{2})\) also has a \(j\)-\(i\) right precedence of index \(r\). Since \(\phi(z\mathbf{v}_{2})\) has the same \(j\)-\(i\) right precedence of index \(r\) as \(\phi(\mathbf{v})\), \(\phi(\mathbf{v})\) has a \(j\)-\(i\) right precedence of index \(r\). Therefore \(\mathsf{rpi}(\phi(\mathbf{u}_{2}))=\mathsf{rpi}(\phi(\mathbf{v}))\).
**Remark 4.5**.: It follows from Theorems 4.2-4.4 that the involution monoids \((\underline{\mathsf{bact}}_{2},\ ^{\sharp})\), \((\underline{\mathsf{bact}}_{3},\ ^{\sharp})\) and \((\underline{\mathsf{bact}}_{n},\ ^{\sharp})\) with \(n\geq 4\) generate different varieties. However, the monoids \(\underline{\mathsf{bact}}_{n}\) with \(n\geq 2\) generate the same variety by [14, Theorem 3.12] or [11, Proposition 6.10]. And by Theorems 4.2-4.4, an identity \(\mathbf{u}\approx\mathbf{v}\) holds in \(\underline{\mathsf{bact}}_{n}\) with \(n\geq 2\) if and only if
1. \(\mathbf{u}\approx\mathbf{v}\) is balanced;
2. \(\overline{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overline{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\) and \(\overleftarrow{\mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{u})=\overleftarrow{ \mathsf{oc}}\underline{\mathsf{c}}_{x}(y,\mathbf{v})\) for any \(x,y\in\mathsf{con}(\mathbf{u})\).
This fact also has been shown in [14, Theorem 4.3].
## 5. Finite basis problem for \((\underline{\mathsf{bact}}_{n},\ ^{\sharp})\)
In this section, the finite basis problem for involution monoid \((\underline{\mathsf{bact}}_{n},\ ^{\sharp})\) for each finite \(n\) is solved. Clearly, \((\underline{\mathsf{bact}}_{1},\ ^{\sharp})\) is finitely based since its involution is trivial and it is commutative.
For any word \(\mathbf{u}\in(\mathcal{A}\cup\mathcal{A}^{*})^{+}\), denote by \(\overleftarrow{\mathbf{u}}\) the word obtained from \(\mathbf{u}\) by writing \(\mathbf{u}\) in reverse. For example, if \(\mathbf{u}=x^{5}zy^{*}(z^{*})^{3}x^{*}\), then \(\overleftarrow{\mathbf{u}}=x^{*}(z^{*})^{3}y^{*}zx^{5}\). The identity \(\overleftarrow{\mathbf{u}}\approx\overleftarrow{\mathbf{v}}\) is called the _reverse_ of \(\mathbf{u}\approx\mathbf{v}\). It follows from the proof of [37, Lemma 7(i)] that if an involution semigroup \((S,\ ^{*})\) satisfies the word identity \(\mathbf{u}\approx\mathbf{v}\), then \((S,\ ^{*})\) also satisfies \(\overleftarrow{\mathbf{u}}\approx\overleftarrow{\mathbf{v}}\).
First, we show that \((\underline{\mathsf{bact}}_{2},\ ^{\sharp})\) is finitely based.
**Theorem 5.1**.: _The identities (1.1) and_
\[x^{*}hxkxysx^{*}tx\approx x^{*}hxkyxsx^{*}tx, x^{*}hxkxysxtx^{*}\approx x^{*}hxkyxsxtx^{*}, \tag{5.1}\] \[xhx^{*}kxysx^{*}tx\approx xhx^{*}kysx^{*}tx, xhx^{*}kxysxtx^{*}\approx xhx^{*}kyxsxtx^{*},\] \[x^{*}hxkxys^{*}ty\approx x^{*}hxkxys*tyty, x^{*}hxkxysyty^{*}\approx x^{*}hxkyxsyty^{*},\] (5.2) \[xhx^{*}kxysyty\approx xhx^{*}kyxsyty^{*}\approx xhx^{*}kyxsyty^{*},\] \[xhykxysxty\approx xhykyxsxty, xhykxysytx\approx xhykxysytx,\] (5.3) \[xhykxysx^{*}tyy^{*}\approx xhykyxsxty^{*}, xhykxysytx^{*}\approx xhykxysytx^{*}x^{*}, \tag{5.4}\]
\[x^{*}hy^{*}kxyysx^{*}ty^{*}\approx x^{*}hy^{*}kyxsx^{*}ty^{*},\ x^{*}hy^{*} kxysy^{*}tx^{*}\approx x^{*}hy^{*}kyxsy^{*}tx^{*}, \tag{5.5}\] \[x^{*}hxxkysxty\approx x^{*}hxkyxsxty,\ \ x^{*}hxxkysytx\approx x^{*}h xkysytx,\] \[xhx^{*}kxysxty\approx xhx^{*}kyxsxty,\ \ xhx^{*}kxysytx\approx xhx^{*} kyxsyytx,\] \[x^{*}hxxxysx^{*}ty^{*}\approx x^{*}hxkyxsx^{*}ty^{*},\ \ x^{*}hxxxysy^{*}tx^{*} \approx x^{*}hxkyxsy^{*}tx^{*},\] \[xhx^{*}kxysx^{*}ty^{*}\approx xhx^{*}kyxsx^{*}ty^{*},\ \ xhx^{*} kxysy^{*}tx^{*}\approx xhx^{*}kyxsy^{*}tx^{*}, \tag{5.7}\]
_and their reverses constitute an identity basis for \((\mathsf{baxt}_{2},\ ^{\sharp})\)._
Proof.: Clearly, \((\mathsf{baxt}_{2},\ ^{\sharp})\) satisfies (1.1), and the identities (5.1)-(5.7) and their reverses by Theorem 4.2.
Note that the identities (1.1) can be used to convert any non-empty term into some unique word. It suffices to show that each non-trivial word identity satisfied by \((\mathsf{baxt}_{2},\ ^{\sharp})\) can be deduced from (5.1)-(5.7) and their reverses. Note that any identity satisfied by \((\mathsf{baxt}_{2},\ ^{\sharp})\) is balanced. Let \(\Sigma\) be the set of all non-trivial balanced word identities satisfied by \((\mathsf{baxt}_{2},\ ^{\sharp})\) but can not be deducible from (5.1)-(5.7) and their reverses. Suppose that \(\Sigma\neq\emptyset\). Note that any balanced identity \(\mathbf{u}\approx\mathbf{v}\) in \(\Sigma\) can be written uniquely into the form \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) where \(a,b\) are distinct variables and \(|\mathbf{u}^{\prime}|=|\mathbf{v}^{\prime}|\). Choose an identity, say \(\mathbf{u}\approx\mathbf{v}\), from \(\Sigma\) such that when it is written as \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\), the lengths of the words \(\mathbf{u}^{\prime}\) and \(\mathbf{v}^{\prime}\) are as short as possible. Since \(\mathbf{u}\approx\mathbf{v}\) is balanced, we have \(\mathsf{con}(\mathbf{u}^{\prime}a)=\mathsf{con}(\mathbf{v}^{\prime}b)\) and \(\mathsf{occ}(x,\mathbf{u}^{\prime}a)=\mathsf{occ}(x,\mathbf{v}^{\prime}b)\) for any \(x\in\mathsf{con}(\mathbf{u}^{\prime}a)\).
Since \(\mathsf{con}(\mathbf{u}^{\prime}a)=\mathsf{con}(\mathbf{v}^{\prime}b)\), we have \(b\in\mathsf{con}(\mathbf{u}^{\prime})\), and so \(\mathbf{u}^{\prime}a=\mathbf{u}_{1}bc\mathbf{u}_{2}\) with \(b\neq c\) and \(b\not\in\mathsf{con}(\mathbf{u}_{2})\). Since \(\mathsf{con}(\mathbf{u}^{\prime}a)=\mathsf{con}(\mathbf{v}^{\prime}b)\), we have \(c\in\mathsf{con}(\mathbf{v}^{\prime})\), and so \(\mathbf{v}^{\prime}=\mathbf{v}_{1}c\mathbf{v}_{2}\) with \(c\not\in\mathsf{con}(\mathbf{u}_{2})\). Thus \(\mathbf{u}=\mathbf{u}_{1}bc\mathbf{u}_{2}\mathbf{w}\) and \(\mathbf{v}=\mathbf{v}_{1}c\mathbf{v}_{2}b\mathbf{w}\). On the one hand, we claim that at least one of four cases \(b,b^{*}\in\mathsf{con}(\mathbf{u}_{1})\), \(c,c^{*}\in\mathsf{con}(\mathbf{u}_{1})\), \(b^{*},c^{*}\in\mathsf{con}(\mathbf{u}_{1})\) or \(b,c\in\mathsf{con}(\mathbf{u}_{1})\) is true. If \(b,b^{*},c,c^{*}\not\in\mathsf{con}(\mathbf{u}_{1})\), then \(\mathsf{occ}(b,\mathbf{u}^{\prime}a)=1\), and so \(\mathbf{u}[b,c]\) starts with \(b\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (I) of Theorem 4.2. If only \(b\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \(b^{\mathsf{occ}(b,\mathbf{u}^{\prime}a)}c\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIa) of Theorem 4.2. If only \(b^{*}\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \((b^{*})^{\mathsf{occ}(b^{*},\mathbf{u}_{1})}b\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (Ia) of Theorem 4.2. If only \(c\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \(c^{\mathsf{occ}(c,\mathbf{u}_{1})}b\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIa) of Theorem 4.2. If only \(c^{*}\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \((c^{*})^{\mathsf{occ}(c^{*},\mathbf{u}_{1})}b\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIa) of Theorem 4.2. If only \(c,b^{*}\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \((\mathsf{c}^{*})^{\mathsf{occ}(c^{*},\mathbf{u}_{1})}b\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIa) of Theorem 4.2. If only \(c,b^{*}\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \(\mathsf{con}(\mathbf{u}_{1}^{\prime})=\{c,b^{*}\}\) and \(\mathsf{occ}(c,\mathbf{v}_{1}^{\prime})=\mathsf{occ}(c,\mathbf{u}_{1}^{\prime})\), which contradicts with (IIIa) of Theorem 4.2. If only \(c^{*},b\) occurs in \(\mathbf{u}_{1}\), then \(\mathbf{u}[b,c]\) starts with \(\mathsf{u}_{1}^{\prime}c\) with \(\mathsf{con}(\mathbf{u}_{1}^{\prime})=\{c^{*},b\}\) but \(\mathbf{v}[b,c]\) does not starts with \(\mathsf{v}_{1}^{\prime}b^{*}\) or \(\mathbf{v}_{1}^{\prime}c\) with \(\mathsf{con}(\mathbf{v}_{1}^{\prime})=\{c^{*},b\}\) and \(\mathsf{occ}(b,\mathbf{v}_{1}^{\prime})=\mathsf{occ}(b,\mathbf{u}_{1}^{\prime})\), which contradicts with (IIIa) of Theorem 4.2.
On the other hand, we claim that at least one of four cases \(b,b^{*}\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\), \(c,c^{*}\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\), \(b^{*},c^{*}\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\) or \(b,c\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\) is true. If \(b,b^{*},c,c^{*}\not\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\), then \(\mathbf{u}[b,c]\) ends with \(c\) but \(\mathbf{v}[b,c]\) ends with \(b\), which contradicts with (I) of Theorem 4.2. If only \(b\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\mathbf{u}[b,c]\) ends with \(c^{\mathsf{occ}(b,\mathbf{w})}\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIb) of Theorem 4.2. If only \(b^{*}\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\mathbf{u}[b,c]\) ends with \(c^{\mathsf{occ}(b^{*},\mathbf{u}_{2}\mathbf{w})}\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIb) of Theorem 4.2. If only \(c\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\mathbf{u}[b,c]\) ends with \(bc^{1+\mathsf{occ}(c,\mathbf{u}_{2}\mathbf{w})}\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (IIb) of Theorem 4.2. If only \(b\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\mathbf{u}[b,c]\) ends with \(c(c^{*},\mathbf{u}_{2}\mathbf{w})\) but \(\mathbf{v}[b,c]\) does not, which contradicts with (Ib) of Theorem 4.2. If only \(c,b^{*}\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\mathbf{u}[b,c]\) ends with \(b\mathbf{v}_{2}^{\prime}\) or \(c^{*}\mathbf{v}_{2}^{\prime}\) with \(\mathsf{con}(\mathbf{v}_
which contradicts with (IIIb) of Theorem 4.2. Therefore, the identities (5.1)-(5.7) and their reverses can be used to convert \(\mathbf{u}_{1}b\mathbf{c}\mathbf{u}_{2}\mathbf{w}\) into \(\mathbf{u}_{1}cb\mathbf{u}_{2}\mathbf{w}\). By repeating this process, the word \(\mathbf{u}=\mathbf{u}^{\prime}a\mathbf{w}=\mathbf{u}_{1}bc\mathbf{u}_{2} \mathbf{w}\) can be converted into the word \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\) by the identities (5.1)-(5.7) and their reverses.
Clearly the identities \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{u}^{\prime}a\mathbf{w} \approx\mathbf{v}^{\prime}b\mathbf{w}\) holds in \((\mathsf{baxt}_{2},\ ^{\sharp})\). Note that \(|\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}|=|\mathbf{u}^{\prime}a\mathbf{w}|=| \mathbf{v}^{\prime}b\mathbf{w}|\) and words in the identity \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) have a longer common suffix than words in the identity \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\). Hence \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\notin\Sigma\) by the minimality assumption on the lengths \(\mathbf{u}^{\prime}\) and \(\mathbf{v}^{\prime}\), that is, \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) can be deducible from (5.1)-(5.7) and their reverses. We have shown that \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\) can be deducible from (5.1)-(5.7) and their reverses. Hence \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) can be deducible from (5.1)-(5.7) and their reverses, which contradicts with \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\in\Sigma\). Therefore \(\Sigma=\emptyset\).
Next, we show that \((\mathsf{baxt}_{3},\ ^{\sharp})\) is non-finitely based.
**Theorem 5.2**.: _Suppose that \((M,\ ^{*}\,)\) is an involution monoid satisfying the following conditions:_
1. _for each_ \(k\geq 2\)_,_ \((M,\ ^{*}\,)\) _satisfies the identity_ \(\mathbf{p}_{k}\approx\mathbf{q}_{k}\) _where_ \[\mathbf{p}_{k} =x_{1}^{*}x_{2}^{*}\cdots x_{2k}^{*}\cdot xx^{*}\cdot x^{*}x_{1} \cdots x_{2k}x\cdot x^{*}x_{1}^{*}x_{3}^{*}\cdots x_{2k-1}^{*}x_{2}^{*}x_{4} ^{*}\cdots x_{2k}^{*},\] \[\mathbf{q}_{k} =x_{1}^{*}x_{2}^{*}\cdots x_{2k}^{*}\cdot xx^{*}\cdot xx_{1} \cdots x_{2k}x^{*}\cdot x^{*}x\cdot x_{1}^{*}x_{3}^{*}\cdots x_{2k-1}^{*}x_{2 }^{*}x_{4}^{*}\cdots x_{2k}^{*};\]
2. \(\mathsf{ip}(\mathbf{u})=\mathsf{ip}(\mathbf{v}),\mathsf{fp}(\mathbf{u})= \mathsf{fp}(\mathbf{v})\) _and_ \(\mathbf{u}\approx\mathbf{v}\) _is balanced for any_ \(\mathbf{u}\approx\mathbf{v}\) _satisfied by_ \((M,\ ^{*}\,)\)_;_
3. _if_ \((M,\ ^{*}\,)\) _satisfies a non-trival identity_ \(y^{*}xx^{*}\cdot x^{*}yx\cdot x^{*}xy^{*}\approx\mathbf{w}\)_, then_ \(\mathbf{w}=y^{*}xx^{*}\cdot xyx^{*}\cdot x^{*}xy^{*}\)_;_
4. _if_ \(\mathbf{u}[x,y]\in\mathbf{a}y\{x,x^{*},y,y^{*}\}^{\times}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x,x^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\mathbf{a}^{\prime}y\{x,x^{*},y,y^{*}\}^{\times}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_,_ 2. _if_ \(\mathbf{u}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}y\mathbf{a}\) _with_ \(\mathsf{con}(\mathbf{a})=\{x,x^{*}\}\)_, then_ \(\mathbf{v}[x,y]\in\{x,x^{*},y,y^{*}\}^{\times}y\mathbf{a}^{\prime}\) _where_ \(\mathbf{a}^{\prime}\) _is a permutation of_ \(\mathbf{a}\)_._
_Then \((M,\ ^{*}\,)\) is non-finitely based._
The proof of Theorem 5.2 is given at the end of Lemma 5.4.
For each \(k\geq 2\), define
\[\mathsf{P}_{k} =x_{1}^{*}x_{2}^{*}\cdots x_{2k}^{*}\cdot xx^{*}\cdot x^{*}x_{1 \pi}\cdots x_{2k\pi}x\cdot x^{*}x\cdot x_{1}^{*}x_{3}^{*}\cdots x_{2k-1}^{*}x_{2 }^{*}x_{4}^{*}\cdots x_{2k}^{*},\] \[\mathsf{Q}_{k} =x_{1}^{*}x_{2}^{*}\cdots x_{2k}^{*}\cdot xx^{*}\cdot xx_{1\sigma }\cdots x_{2k\sigma}x^{*}\cdot x^{*}x\cdot x_{1}^{*}x_{3}^{*}\cdots x_{2k-1}^{*}x _{2}^{*}x_{4}^{*}\cdots x_{2k}^{*},\]
where \(\pi,\sigma\) are any permutations on \(\{1,2,\dots,2k\}\).
**Lemma 5.3**.: _Let \((M,\,^{*}\,)\) be an involution monoid satisfying conditions (II) and (III) in Theorem 5.2. Suppose that \(\mathbf{p}_{k}\approx\mathbf{w}\) is any word identity satisfied by \((M,\ ^{*}\,)\) such that \(\mathbf{p}_{k}\in\mathsf{P}_{k}\). Then \(\mathbf{w}\in\mathsf{P}_{k}\cup\mathsf{Q}_{k}\)._
Proof.: Note that \(\mathbf{p}_{k}[x_{i},x]=x_{i}^{*}xx^{*}\cdot x^{*}x_{i}x\cdot x^{*}xx_{i}^{*}\) for \(i=1,2,\cdots,2k\). It follows from the condition (II) of Theorem 5.2 that \(\mathbf{w}[x_{i},x]=x_{i}^{*}xx^{*}\cdot x^{*}x_{i}x\cdot x^{*}xx_{i}^{*}\) or \(\mathbf{w}[x_{i},x]=x_{i}^{*}xx^{*}\cdot xx_{i}x^{*}\cdot x^{*}xx_{i}^{*}\). Then it follows from (III) Theorem 5.2 that \(\mathbf{w}\in\mathsf{P}_{k}\cup\mathsf{Q}_{k}\).
A word identity \(\mathbf{u}\approx\mathbf{v}\) is _k-limited_ if \(\mathsf{con}(\overline{\mathbf{u}\mathbf{v}})\leq k\). For any involution semigroup \((S,\ ^{*})\), let \(\mathsf{id}_{k}(S,\ ^{*})\) denote the set of all \(k\)-limited word identities of \((S,\ ^{*})\).
**Lemma 5.4**.: _Suppose that \((M,\ ^{*}\,)\) is an involution monoid satisfying conditions (II)-(IV) in Theorem 5.2. Let \(\mathbf{s}\approx\mathbf{t}\) be an identity which is directly deducible from some identity in \(\mathsf{id}_{2k}(M,\ ^{*}\,)\) with \(\lfloor\,\mathbf{s}\,\rfloor\in\mathsf{P}_{k}\). Then \(\lfloor\,\mathbf{t}\,\rfloor\in\mathsf{Q}_{k}\)._
Proof.: Let \(\mathbf{u}\approx\mathbf{v}\) be a word identity in \(\mathsf{id}_{2k}(S,\;^{*}\,)\) from which the identity \(\mathbf{s}\approx\mathbf{t}\) is directly deducible. There is a substitution \(\varphi:\mathcal{X}\to\mathsf{T}(\mathcal{X})\) such that \(\varphi(\mathbf{u})\) is a subterm of \(\mathbf{s}\), and replacing this particular subterm \(\varphi(\mathbf{u})\) of \(\mathbf{s}\) with \(\varphi(\mathbf{v})\) results in \(\mathbf{t}\). By Remark 2.2, either \(\lfloor\varphi(\mathbf{u})\rfloor\) or \(\lfloor(\varphi(\mathbf{u}))^{*}\rfloor\) is a factor of \(\lfloor\mathbf{s}\rfloor\). It suffices to consider the former case since the latter is similar. Hence there exist words \(\mathbf{a},\mathbf{b}\in(\mathcal{X}\cup\mathcal{X}^{*})^{+}\) such that \(\lfloor\mathbf{s}\rfloor=\mathbf{a}\lfloor\varphi(\mathbf{u})\rfloor\mathbf{b}\). Since \(\mathbf{t}\) is obtained by replacing \(\varphi(\mathbf{u})\) in \(\mathbf{s}\) with \(\varphi(\mathbf{v})\), it follows that \(\lfloor\mathbf{t}\rfloor=\mathbf{a}\lfloor\varphi(\mathbf{v})\rfloor\mathbf{b}\). Since \(\lfloor\mathbf{s}\rfloor\in\mathsf{P}_{k}\), it follows from Lemma 5.3 that \(\lfloor\mathbf{t}\rfloor\in\mathsf{P}_{k}\cup\mathsf{Q}_{k}\). Working toward a contradiction, suppose that \(\lfloor\mathbf{t}\rfloor\in\mathsf{Q}_{k}\). Since \(\lfloor\mathbf{s}\rfloor\) and \(\lfloor\mathbf{t}\rfloor\) share the same prefix \(\mathbf{a}\) and the same suffix \(\mathbf{b}\) and \(\mathsf{ip}(\lfloor\varphi(\mathbf{u})\rfloor)=\mathsf{ip}(\lfloor\varphi( \mathbf{v})\rfloor)\) and \(\mathsf{fp}(\lfloor\varphi(\mathbf{u})\rfloor)=\mathsf{fp}(\lfloor\varphi( \mathbf{v})\rfloor),\;\lfloor\varphi(\mathbf{u})\rfloor\) contains the factor
\[x^{*}\cdot x^{*}x_{1\pi}\cdots x_{2k\pi}x\cdot x^{*}\]
and \(\lfloor\mathbf{v}\varphi\rfloor\) contains the factor
\[x^{*}\cdot xx_{1\pi}\cdots x_{2k\pi}x^{*}\cdot x^{*}.\]
It follows from (IV) of Theorem 5.2 that \(\widehat{\mathsf{occ}}_{x_{i\pi}}(x_{i\pi}^{*},\mathbf{u}\varphi)\neq 0\) and \(\widehat{\mathsf{occ}}_{x_{i\pi}}(x_{i\pi}^{*},\varphi(\mathbf{u}))\neq 0\) for each \(i\in\{1,2,\ldots,2k\}\), and so \(\mathbf{a},\mathbf{b}=\emptyset\). Since \(\mathbf{u}\approx\mathbf{v}\) is \(2k\)-limited and \(\lfloor\varphi(\mathbf{u})\rfloor\approx\lfloor\varphi(\mathbf{v})\rfloor\) is \(2k+1\)-limited, there exists a variable \(s\in\mathsf{con}(\mathbf{u})\) such that \(\lfloor\varphi(s)\rfloor\) contains one of the following factors:
\[x^{*}x_{1\pi},\ x_{1\pi}x_{2\pi},\ x_{2\pi}x_{3\pi},\ \cdots,\ x_{2k-1\pi}x_{2k\pi},\ x _{2k\pi}x.\]
Suppose that there exists a variable \(s\in\mathsf{con}(\mathbf{u})\) such that \(\lfloor\varphi(s)\rfloor\) contains \(x_{2k\pi}x\) as a factor. Then \(\lfloor\mathbf{t}\rfloor\) also contains \(x_{2k\pi}x\) as a factor, which contradicts with \(\lfloor\mathbf{t}\rfloor\in\mathsf{Q}_{k}\). Similarly, there does not exist \(s\in\mathsf{con}(\mathbf{u})\) such that \(\lfloor\varphi(s)\rfloor\) contains \(x^{*}x_{1\pi}\) as a factor. Suppose that there exists a variable \(s\in\mathsf{con}(\mathbf{u})\) such that \(\lfloor\varphi(s)\rfloor\) contains \(x_{i\pi}x_{(i+1)\pi}\) as a factor for some \(i=1,2,\ldots,2k-1\). Clearly, \(\mathsf{occ}(s,\mathbf{u})=1\). Then \(x_{i\pi}x_{(i+1)\pi}\) is a factor of \(\lfloor\varphi(\mathbf{v})\rfloor\) since \(\mathbf{u}\approx\mathbf{v}\) is balanced. It follows from the definition of \(\mathsf{P}_{k}\) that \(\mathbf{u}=\mathbf{u}_{1}\mathbf{s}\mathbf{u}_{2}\), and \(s^{*}\) can not occur in \(\mathsf{con}(\mathbf{u}_{1})\) and \(\mathsf{con}(\mathbf{u}_{2})\) simultaneously. Suppose that \(s^{*}\not\in\mathsf{con}(\mathbf{u}_{2})\). Note that \(xx^{*}x\) is a factor of \(\lfloor\mathbf{s}\rfloor\) occurring after \(\lfloor\varphi(s)\rfloor\). Then there exist at most three variables in \(\mathsf{con}(\mathbf{u}_{2})\), say \(s_{1},s_{2},s_{3}\), such that \(\lfloor\varphi(s_{1}s_{2}s_{3})\rfloor\) contains \(xx^{*}x\) as a factor in \(\lfloor\mathbf{s}\rfloor\). It follows from conditions (II) and (IV) of Theorem 5.2 that \(\mathbf{v}=\mathbf{v}_{1}s\mathbf{v}_{2}\) satisfying \(s,s^{*}\not\in\mathsf{con}(\mathbf{v}_{2}),\mathsf{occ}(s_{1},\mathbf{v}_{2} )=\mathsf{occ}(s_{1},\mathbf{u}_{2}),\mathsf{occ}(s_{2},\mathbf{v}_{2})= \mathsf{occ}(s_{1},\mathbf{u}_{2})\) and \(\mathsf{occ}(s_{3},\mathbf{v}_{2})=\mathsf{occ}(s_{3},\mathbf{u}_{2})\), and so \(xx^{*}x\) occurs after \(x_{i\pi}x_{i+1\pi}\) in \(\lfloor\mathbf{t}\rfloor\), but this contradicts with \(\mathbf{a}\lfloor\varphi(\mathbf{v})\rfloor\mathbf{b}\in\mathsf{Q}_{k}\). Consequently, \(\mathbf{a}\lfloor\varphi(\mathbf{v})\rfloor\mathbf{b}\in\mathsf{P}_{k}\).
**Proof of Theorem 5.2.** Let \((M,\;^{*})\) be any involution monoid that satisfies conditions (I)-(IV) in Theorem 5.2. Then there exists some set \(\Sigma\) of word identities such that \((1.1)\cup\Sigma\) is an identity basis for \((M,\;^{*})\). Working toward a contradiction, suppose that \((M,\;^{*})\) is finitely based. Then there exists a finite subset \(\Sigma_{\mathsf{fin}}\) of \(\Sigma\) such that all identities of \((M,\;^{*})\) are deducible from \((1.1)\cup\Sigma_{\mathsf{fin}}\). Hence there exists some fixed integer \(k\) such that \(\Sigma_{\mathsf{fin}}\subseteq\Sigma\cap\mathsf{id}_{2k}(M,\;^{*})\). By (I), the involution monoid \((M,\;^{*})\) satisfies some word identity \(\mathbf{p}_{k}\approx\mathbf{q}_{k}\) with \(\mathbf{p}_{k}\in\mathsf{P}_{k}\) and \(\mathbf{q}_{k}\in\mathsf{Q}_{k}\). Therefore there exists some sequence
\[\mathbf{p}_{k}=\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{m}=\mathbf{q}_{k}\]
of terms such that each identity \(\mathbf{s}_{i}\approx\mathbf{s}_{i+1}\) is directly deducible from some identity \(\mathbf{u}_{i}\approx\mathbf{v}_{i}\in(1.1)\cup\Sigma_{\mathsf{fin}}\). The equality \(\mathbf{s}_{1}=\mathbf{p}_{k}\in\mathsf{P}_{k}\) holds. If \(\mathbf{s}_{i}\in\mathsf{P}_{k}\) for some \(i\geq 1\), then there are two cases depending on whether the identity \(\mathbf{u}_{i}\approx\mathbf{v}_{i}\) is from \((1.1)\) or \(\Sigma_{\mathsf{fin}}\). If \(\mathbf{u}_{i}\approx\mathbf{v}_{i}\) is from \((1.1)\), then \(\mathbf{s}_{i}=\mathbf{s}_{i+1}\) by Remark 2.3, whence \(\mathbf{s}_{i+1}\in\mathsf{P}_{k}\). If \(\mathbf{u}_{i}\approx\mathbf{v}_{i}\) is from \(\Sigma_{\mathsf{fin}}\), then \(\mathbf{s}_{i+1}\in\mathsf{P}_{k}\) by Lemma 5.4. Therefore \(\mathbf{s}_{i+1}\in\mathsf{P}_{k}\) in any case, whence by induction, \(\mathbf{s}_{i}\in\mathsf{P}_{k}\) for all \(i\). But this implies the contradiction \(\mathbf{q}_{k}=\mathbf{s}_{m}\in\mathsf{P}_{k}\). Consequently, the involution monoid \((M,\;^{*})\) is non-finitely based.
**Theorem 5.5**.: _The involution monoid \((\mathsf{baxt}_{3},\ ^{\sharp})\) is non-finitely based._
Proof.: It follows from Theorem 4.3 that \((\mathsf{baxt}_{3},\ ^{\sharp})\) satisfies all of conditions of Theorem 5.2. Therefore \((\mathsf{baxt}_{3},\ ^{\sharp})\) is non-finitely based.
Now, we consider the finite basis problem for \((\mathsf{baxt}_{n},\ ^{\sharp})\) with \(n\geq 4\).
**Theorem 5.6**.: _The identities (1.1) and_
\[xhyk\,xy\,sxty\approx xhyk\,yx\,sxty,\ \ \ xhyk\,xy\,sytx\approx xhyk\,yx\, sytx, \tag{5.3}\]
_constitute an identity basis for \((\mathsf{baxt}_{n},\ ^{\sharp})\) for \(n\geq 4\)._
Proof.: By Theorem 3.7, we only need to show that the result holds for \((\mathsf{baxt}_{4},\ ^{\sharp})\). Clearly, \((\mathsf{baxt}_{4},\ ^{\sharp})\) satisfies the identities (1.1), and (5.3) by Theorem 4.4.
Note that the identities (1.1) can be used to convert any non-empty term into some unique word. It suffices to show that each non-trivial word identity satisfied by \((\mathsf{baxt}_{4},\ ^{\sharp})\) can be derived from (5.3). Note that any identity satisfied by \((\mathsf{baxt}_{4},\ ^{\sharp})\) is balanced. Let \(\Sigma\) be the set of all non-trivial balanced word identities satisfied by \((\mathsf{baxt}_{4},\ ^{\sharp})\) but can not be deducible from (5.3). Suppose that \(\Sigma\neq\emptyset\). Note that any balanced identity \(\mathbf{u}\approx\mathbf{v}\) in \(\Sigma\) can be written uniquely into the form \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) where \(a,b\) are distinct variables and \(|\mathbf{u}^{\prime}|=|\mathbf{v}^{\prime}|\). Choose an identity, say \(\mathbf{u}\approx\mathbf{v}\), from \(\Sigma\) such that when it is written as \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\), the lengths of the words \(\mathbf{u}^{\prime}\) and \(\mathbf{v}^{\prime}\) are as short as possible. Since \(\mathbf{u}\approx\mathbf{v}\) is balanced, we have \(\mathsf{con}(\mathbf{u}^{\prime}a)=\mathsf{con}(\mathbf{v}^{\prime}b)\) and \(\mathsf{occ}(x,\mathbf{u}^{\prime}a)=\mathsf{occ}(x,\mathbf{v}^{\prime}b)\) for any \(x\in\mathsf{con}(\mathbf{u}^{\prime}a)\).
Since \(\mathsf{con}(\mathbf{u}^{\prime}a)=\mathsf{con}(\mathbf{v}^{\prime}b)\), we have \(b\in\mathsf{con}(\mathbf{u}^{\prime})\), and so \(\mathbf{u}^{\prime}a=\mathbf{u}_{1}bc\mathbf{u}_{2}\) with \(b\neq c,b\not\in\mathsf{con}(\mathbf{u}_{2})\). Since \(\mathsf{con}(\mathbf{u}^{\prime}a)=\mathsf{con}(\mathbf{v}^{\prime}b)\), we have \(c\in\mathsf{con}(\mathbf{v}^{\prime})\), and so \(\mathbf{v}^{\prime}=\mathbf{v}_{1}\mathbf{cv}_{2}\) with \(c\not\in\mathsf{con}(\mathbf{u}_{2})\). Thus \(\mathbf{u}=\mathbf{u}_{1}bc\mathbf{u}_{2}\mathbf{w}\) and \(\mathbf{v}=\mathbf{v}_{1}c\mathbf{v}_{2}b\mathbf{w}\). We claim that \(b,c\in\mathsf{con}(\mathbf{u}_{1})\). If \(b,c\not\in\mathsf{con}(\mathbf{u}_{1})\), then \(\mathsf{occ}(b,\mathbf{u}^{\prime}a)=1\), and so \(\widehat{\mathsf{occ}}c_{c}(b,\mathbf{u})=1\) but \(\widehat{\mathsf{occ}}c_{c}(b,\mathbf{v})=0\), which contradicts with Theorem 4.4. If only \(b\) occurs in \(\mathbf{u}_{1}\), then \(\widehat{\mathsf{occ}}c_{c}(b,\mathbf{u})>\widehat{\mathsf{occ}}c_{c}(b, \mathbf{v})\), which contradicts with Theorem 4.4. If only \(c\) occurs in \(\mathbf{u}_{1}\), then \(\widehat{\mathsf{occ}}c_{b}(c,\mathbf{u})<\widehat{\mathsf{occ}}b_{(}c, \mathbf{v})\), which contradicts with Theorem 4.4. If only \(c\) occurs in \(\mathbf{u}_{1}\), then \(\widehat{\mathsf{occ}}b_{(}c,\mathbf{u})<\widehat{\mathsf{occ}}b_{(}c,\mathbf{v})\), which contradicts with Theorem 4.4. We also claim that \(b,c\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\). If \(b,c\not\in\mathsf{con}(\mathbf{u}_{2}\mathbf{w})\), then \(\widehat{\mathsf{occ}}b_{(}c,\mathbf{u})=1\) but \(\widehat{\mathsf{occ}}b_{(}c,\mathbf{v})=0\), which contradicts with Theorem 4.4. If only \(b\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\widehat{\mathsf{occ}}c_{c}(b,\mathbf{v})<\widehat{\mathsf{occ}}c_{c}(b, \mathbf{v})\), which contradicts with Theorem 4.4. If only \(c\) occurs in \(\mathbf{u}_{2}\mathbf{w}\), then \(\widehat{\mathsf{occ}}b_{(}c,\mathbf{u})>\widehat{\mathsf{occ}}b_{(}c, \mathbf{v})\), which contradicts with Theorem 4.4. As such, we can deduce the word \(\mathbf{u}_{1}cb\mathbf{u}_{2}\mathbf{w}\), by applying the identities (5.3). By repeating this process, the word \(\mathbf{u}_{1}bc\mathbf{u}_{2}\mathbf{w}\) can be converted into the word \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\).
Clearly the identities \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{u}^{\prime}a\mathbf{w} \approx\mathbf{v}^{\prime}b\mathbf{w}\) holds in \((\mathsf{baxt}_{4},\ ^{\sharp})\). Note that \(|\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}|=|\mathbf{u}^{\prime}a\mathbf{w}|=| \mathbf{v}^{\prime}b\mathbf{w}|\) and words in the identity \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) have a longer common suffix than words in the identity \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\). Hence \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\not\in\Sigma\) by the minimality assumption on the lengths of \(\mathbf{u}^{\prime}\) and \(\mathbf{v}^{\prime}\), that is, \(\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) can be deducible from (5.3). We have shown that \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{u}_{1}c\mathbf{u}_{2}b\mathbf{w}\) can be deducible from (5.3). Hence \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\) can be deducible from (5.3), which contradicts with \(\mathbf{u}^{\prime}a\mathbf{w}\approx\mathbf{v}^{\prime}b\mathbf{w}\in\Sigma\). Therefore \(\Sigma=\emptyset\).
In the end, we consider the number of subvarieties of \(\mathsf{Var}(\mathsf{baxt}_{n},\ ^{\sharp})\) for each \(n\geq 2\). Recall that a word \(\mathbf{u}\) is an _isoterm_ for an involution monoid if it does not satisfy any non-trivial word identity of the form \(\mathbf{u}\approx\mathbf{v}\).
**Lemma 5.7** ([21, Theorem 3.6]).: _Let \((M,\ ^{*})\) be any involution monoid with isoterms \(xx^{*}yy^{*}\) and \(xyy^{*}x^{*}\). Then the variety \(\mathsf{Var}(M,\ ^{*})\) contains continuum many subvarieties._
**Theorem 5.8**.: _The variety \(\mathsf{Var}(\mathsf{baxt}_{n},\ ^{\sharp})\) for each finite \(n\geq 2\) contains continuum many subvarieties._
Proof.: By Theorems 4.2-4.4, it is routine to show that both the words \(xx^{*}yy^{*}\) and \(xyy^{*}x^{*}\) are isoterms for \((\mathsf{baxt}_{n},\ ^{\sharp})\) for each finite \(n\geq 2\). Now the results follows from Lemma 5.7.
## 6. Recognizing identities of \((\mathsf{baxt}_{n},\ ^{\sharp})\) in polynomial time
In this section, it is shown that the identity checking problem of \((\mathsf{baxt}_{n},\ ^{\sharp})\) for each finite \(n\) belongs to the complexity class \(\mathsf{P}\).
Let \(\mathbf{u}\in(\mathcal{X}\cup\mathcal{X}^{*})^{+}\). If \(x,x^{*}\in\mathsf{con}(\mathbf{u})\), then \(\{x,x^{*}\}\) is called a _mixed pair_ of \(\mathbf{u}\). Denote by \(\mathsf{pre}(\mathbf{u})\) the longest prefix of \(\mathbf{u}\) containing only one variable, that is, \(|\,\mathsf{con}(\mathsf{pre}(\mathbf{u}))|=1\), and \(\mathsf{suf}(\mathbf{u})\) the longest suffix of \(\mathbf{u}\) containing only one variable, that is, \(|\,\mathsf{con}(\mathsf{suf}(\mathbf{u}))|=1\); and denote by \(\mathsf{pre}(\mathbf{u})\) the longest prefix of \(\mathbf{u}\) which does not contain any mixed pair, and \(\mathsf{sufn}(\mathbf{u})\) the longest suffix of \(\mathbf{u}\) which does not contain any mixed pair.
**Theorem 6.1**.: _The decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{n},\ ^{\sharp})\) for each finite \(n\) belongs to the complexity class \(\mathsf{P}\)._
Proof.: For decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{1},\ ^{\sharp})\), given any word identity \(\mathbf{u}\approx\mathbf{v}\), it suffices to show that whether \(\mathsf{con}(\overline{\mathbf{u}})=\mathsf{con}(\overline{\mathbf{v}})\) and \(\mathsf{occ}(x,\mathbf{u})+\mathsf{occ}(x^{*},\mathbf{u})=\mathsf{occ}(x, \mathbf{v})+\mathsf{occ}(x^{*},\mathbf{v})\) for any \(x\in\mathsf{con}(\overline{\mathbf{u}})\). Clearly, these can be completed in polynomial time.
For decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{2},\ ^{\sharp})\), given any word identity \(\mathbf{u}\approx\mathbf{v}\), it suffices to show that one can check whether the identity \(\mathbf{u}\approx\mathbf{v}\) is balanced and the words \(\mathbf{u}\) and \(\mathbf{v}\) satisfy conditions of Theorem 4.2 in polynomial time.
To check whether \(\mathbf{u}\approx\mathbf{v}\) is balanced, it suffices to check whether \(\mathsf{con}(\mathbf{u})=\mathsf{con}(\mathbf{v})\) and \(\mathsf{occ}(x,\mathbf{u})=\mathsf{occ}(x,\mathbf{v})\) for any \(x\in\mathsf{con}(\mathbf{u})\). Clearly, these can be completed in polynomial time.
To check conditions (I)-(II) of Theorem 4.2, it suffices to check that, for any \(x,y\in\mathsf{con}(\mathbf{u})\) with \(x,x^{*}\neq y\), whether \(\mathsf{pre}(\mathbf{u}[x,y])=\mathsf{pre}(\mathbf{v}[x,y])\), and the variable following immediately \(\mathsf{pre}(\mathbf{u}[x,y])\) is the same as the variable following immediately \(\mathsf{pre}(\mathbf{v}[x,y])\); and to check whether \(\mathsf{suf}(\mathbf{u}[x,y])=\mathsf{suf}(\mathbf{v}[x,y])\), and the variable preceding immediately \(\mathsf{suf}(\mathbf{u}[x,y])\) is the same as the variable preceding immediately \(\mathsf{suf}(\mathbf{v}[x,y])\). There are at most \(\binom{|\,\mathsf{con}(\mathbf{u})|}{2}\) pairs of such \(x,y\). Clearly, these can be completed in polynomial time.
To check condition (III) of Theorem 4.2, it suffices to check that, for any \(x,y\in\mathsf{con}(\mathbf{u})\) with \(x,x^{*}\neq y\), whether \(\mathsf{con}(\mathsf{pre}(\mathbf{u}[x,y]))=\mathsf{con}(\mathsf{pre}(\mathbf{ v}[x,y]))\) and \(\mathsf{occ}(z,\mathsf{pre}(\mathbf{u}[x,y]))=\mathsf{occ}(z,\mathsf{pre}( \mathbf{v}[x,y]))\) for any \(z\in\mathsf{con}(\mathsf{pre}(\mathbf{u}[x,y]))\); and whether \(\mathsf{con}(\mathsf{sufn}(\mathbf{u}[x,y]))=\mathsf{con}(\mathsf{sufn}( \mathbf{v}[x,y]))\) and \(\mathsf{occ}(z,\mathsf{sufn}(\mathbf{u}[x,y]))=\mathsf{occ}(z,\mathsf{sufn}( \mathbf{v}[x,y]))\) for any \(z\in\mathsf{con}(\mathsf{sufn}(\mathbf{u}[x,y]))\). There are at most \(\binom{|\,\mathsf{con}(\mathbf{u})|}{2}\) pairs of such \(x,y\). Clearly, these can be completed in polynomial time.
Therefore the decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{2},\ ^{\sharp})\) belongs to the complexity class \(\mathsf{P}\).
For decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{3},\ ^{\sharp})\), given any word identity \(\mathbf{u}\approx\mathbf{v}\), by Theorem 4.2 and the above arguments, we only need to show that one can check whether the words \(\mathbf{u}\) and \(\mathbf{v}\) satisfy conditions (III)-(V) of Theorem 4.3 in polynomial time.
To check condition (III) of Theorem 4.3, it suffices to check that, for any \(x,y\in\mathsf{con}(\mathbf{u})\) with \(x,x^{*}\neq y\), whether \(\mathsf{con}(\mathsf{pre}(\mathbf{u}[x,y]))=\mathsf{con}(\mathsf{pre}(\mathbf{ v}[x,y]))\), \(\mathsf{occ}(z,\mathsf{pre}(\mathbf{u}[x,y]))=\mathsf{occ}(z,\mathsf{pre}( \mathbf{v}[x,y]))\) for any \(z\in\mathsf{con}(\mathsf{pre}(\mathbf{u}[x,y]))\) and the variable following immediately \(\mathsf{pre}(\mathbf{u}[x,y])\) is the same as the variable following immediately \(\mathsf{pre}(\mathbf{v}[x,y])\); and to check whether \(\mathsf{con}(\mathsf{sufn}(\mathbf{u}[x,y]))=\mathsf{con}(\mathsf{sufn}( \mathbf{v}[x,y]))\), \(\mathsf{occ}(z,\mathsf{sufn}(\mathbf{u}[x,y]))=\mathsf{occ}(z,\mathsf{sufn}( \mathbf{v}[x,y]))\) for any \(z\in\mathsf{con}(\mathsf{sufn}(\mathbf{u}[x,y]))\) and the variable preceding immediately \(\mathsf{pre}(\mathbf{u}[x,y])\) is the same
as the variable preceding immediately \(\mathsf{pren}(\mathbf{v}[x,y])\). There are at most \(\binom{|\mathsf{con}(\mathbf{u})|}{2}\) pairs of such \(x,y\). Clearly, these can be completed in polynomial time.
To check condition (IV) of Theorem 4.3, it suffices to check that, for any \(x,y\in\mathsf{con}(\mathbf{u})\) with \(x,x^{*}\neq y\), whether \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})+\widehat{\mathsf{occ}}_{y}(x^{*}, \mathbf{u})=\widehat{\mathsf{occ}}_{y}(x,\mathbf{v})+\widehat{\mathsf{occ}}_ {y}(x^{*},\mathbf{v})\) and \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})+\widehat{\mathsf{occ}}_{y}(x^{*}, \mathbf{u})=\widehat{\mathsf{occ}}_{y}(x,\mathbf{v})+\widehat{\mathsf{occ}}_ {y}(x^{*},\mathbf{v})\). There are at most \(\binom{|\mathsf{con}(\mathbf{u})|}{2}\) pairs of such \(x,y\). Clearly, these can be completed in polynomial time.
To check condition (V) of Theorem 4.3, it suffices to check that, for any \(x,y\in\mathsf{con}(\mathbf{u})\) with \(x,x^{*}\neq y\) satisfying \(\widehat{\mathsf{occ}}_{y}(y^{*},\mathbf{u})=0\) whether \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})=\widehat{\mathsf{occ}}_{y}(x, \mathbf{v})\) and \(\widehat{\mathsf{occ}}_{y}(x^{*},\mathbf{v})=\widehat{\mathsf{occ}}_{y}(x^{ *},\mathbf{v})\), \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})=0\), whether \(\widehat{\mathsf{occ}}_{y}(x,\mathbf{u})=\widehat{\mathsf{occ}}_{y}(x, \mathbf{v})\) and \(\widehat{\mathsf{occ}}_{y}(x^{*},\mathbf{v})=\widehat{\mathsf{occ}}_{y}(x^{ *},\mathbf{v})\). There are at most \(\binom{|\mathsf{con}(\mathbf{u})|}{2}\) pairs of such \(x,y\). Clearly, these can be completed in polynomial time.
Therefore the decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{n},\ ^{\sharp})\) belongs to the complexity class \(\mathsf{P}\).
For decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{n},\ ^{\sharp})\) with \(n\geq 4\), it suffices to check that, for any \(x,y\in\mathsf{con}(\mathbf{u})\), whether \(\widehat{\mathsf{occ}}_{x}(y,\mathbf{u})=\widehat{\mathsf{occ}}_{x}(y, \mathbf{v})\) and \(\widehat{\mathsf{occ}}_{x}(y,\mathbf{u})=\widehat{\mathsf{occ}}_{x}(y, \mathbf{v})\). There are at most \(\binom{|\mathsf{con}(\mathbf{u})|}{2}\) pairs of such \(x,y\). Clearly, these can be completed in polynomial time. Therefore the decision problem \(\textsc{Check-Id}(\mathsf{baxt}_{n},\ ^{\sharp})\) with \(n\geq 4\) belongs to the complexity class \(\mathsf{P}\).
|
2310.13544 | A Diachronic Perspective on User Trust in AI under Uncertainty | In a human-AI collaboration, users build a mental model of the AI system
based on its reliability and how it presents its decision, e.g. its
presentation of system confidence and an explanation of the output. Modern NLP
systems are often uncalibrated, resulting in confidently incorrect predictions
that undermine user trust. In order to build trustworthy AI, we must understand
how user trust is developed and how it can be regained after potential
trust-eroding events. We study the evolution of user trust in response to these
trust-eroding events using a betting game. We find that even a few incorrect
instances with inaccurate confidence estimates damage user trust and
performance, with very slow recovery. We also show that this degradation in
trust reduces the success of human-AI collaboration and that different types of
miscalibration -- unconfidently correct and confidently incorrect -- have
different negative effects on user trust. Our findings highlight the importance
of calibration in user-facing AI applications and shed light on what aspects
help users decide whether to trust the AI system. | Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, Mrinmaya Sachan | 2023-10-20T14:41:46Z | http://arxiv.org/abs/2310.13544v1 | # A Diachronic Perspective on User Trust in AI under Uncertainty
###### Abstract
In a human-AI collaboration, users build a mental model of the AI system based on its reliability and how it presents its decision, e.g. its presentation of system confidence and an explanation of the output. Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust. In order to build trustworthy AI, we must understand how user trust is developed and how it can be regained after potential trust-eroding events. We study the evolution of user trust in response to these trust-eroding events using a betting game. We find that even a few incorrect instances with inaccurate confidence estimates damage user trust and performance, with very slow recovery. We also show that this degradation in trust reduces the success of human-AI collaboration and that different types of miscalibration--unconfidently correct and confidently incorrect--have different negative effects on user trust. Our findings highlight the importance of calibration in user-facing AI applications and shed light on what aspects help users decide whether to trust the AI system.
## 1 Introduction
AI systems are increasingly being touted for use in high-stakes decision-making. For example, a doctor might use an AI system for cancer detection from lymph node images (Bejnordi et al., 2017), a teacher may be assisted by an AI system when teaching students (Cardona et al., 2023), or individuals may rely on AI systems to fulfill their information requirements (Mitra et al., 2018). AI systems are integrated across diverse domains, with an expanding presence in user-centric applications. Despite their growing performance, today's AI systems are still sometimes inaccurate, reinforcing the need for human involvement and oversight.
An effective approach for facilitating decision-making in collaborative settings is for the AI system to offer its confidence alongside its predictions. This is shown in Figure 1, where the AI system provides an additional message that enables the user to either accept or reject the system's answer based on the additional message, such as the confidence score. This makes a strong case for the AI's confidence being calibrated (Guo et al., 2017) - when the confidence score aligns with the probability of the prediction being correct.
When a user interacts with an AI system, they develop a mental model (Hartson and Pyla, 2012) of how the system's confidence relates to the integrity of its prediction. The issue of trust has been extensively studied in psychology and cognitive science with Mayo (2015); Stanton et al. (2021) finding that incongruence (mismatch between mental model and user experience) creates distrust. Given the ever-increasing reliance on AI systems, it is crucial that users possess a well-defined mental model that guides their trust in these systems. Nevertheless, our current understanding regarding the evolution of user trust over time, its vulnerability to trust-depleting incidents, and the methods to re
Figure 1: Diachronic view of a typical human-AI collaborative setting. At each timestep \(t\), the user uses their prior mental model \(\psi_{t}\) to accept or reject the AI system’s answer \(y_{t}\), supported by an additional message \(m_{t}\) (AI’s confidence), and updates their mental model of the AI system to \(\psi_{t+1}\). If the message is rejected, the user invokes a fallback process to get a different answer.
store trust following such events remain unclear. Addressing these inquiries holds great significance in the advancement of reliable AI systems.
In this paper, our objective is to investigate user interactions with an AI system, with a specific focus on how the system's confidence impacts these interactions. Through a series of carefully designed user studies, we explore the implications of miscalibrated confidences on user's perception of the system and how this, in turn, influences their trust in the system. Our experiments shed light on how users respond to various types of miscalibrations. We find that users are especially sensitive to confidently incorrect miscalibration (Section 4.1) that the trust does not recover even after a long sequence of calibrated examples. Subsequently, we delve into an analysis of how trust degradation corresponds to the extent of miscalibration in the examples provided (Section 4.2). Then, we assess whether diminished trust in an AI system for a specific task can extend to affect a user's trust in other tasks (Section 4.3). We also explore different methodologies for modeling a user's trust in an AI system (Section 5). Our results show how reduced trust can lower the performance of the human-AI team thus highlighting the importance of holistic and user-centric calibration of AI systems when they are deployed in high-stakes settings.
## 2 Related Work
Human-AI Collaboration.Optimizing for cooperation with humans is more productive than focusing solely on model performance (Bansal et al., 2021). Human-AI collaboration research has focused on AI systems explaining their predictions (Ribeiro et al., 2016) or examining the relationship between trust and AI system's accuracy (Rechnemer and Yin, 2022; Ma et al., 2023). Related to our work, Papenmeier et al. (2019); Bansal et al. (2021); Wang and Yin (2022); Papenmeier et al. (2022) examined the influence of explanations and found that inaccurate ones act as deceptive experiences which erode trust.
Nourani et al. (2021); Mozannar et al. (2022) study the development of mental models which create further collaboration expectations. This mental model, or the associated expectations, can be violated, which results in degraded trust in the system and hindered collaboration (Grimes et al., 2021). The field of NLP offers several applications where trust plays a vital role, such as chatbots for various tasks or multi-domain question answering (Law et al., 2021; Vikander, 2023; Chiesurin et al., 2023) and transparency and controllability are one of the key components that increase users' trust Bansal et al. (2019); Guo et al. (2022).
Trust and Confidence Calibration.A common method AI systems use to convey their uncertainty to the user is by its confidence (Benz and Rodriguez, 2023; Liu et al., 2023). For the system's confidence to reflect the probability of the system being correct, the confidence needs to be calibrated, which is a long-standing task (Guo et al., 2017; Dhuliawala et al., 2022). This can be any metric, such as quality estimation (Specia et al., 2010; Zouhar et al., 2021) that makes it easier for the user to decide on the AI system's correctness. Related to calibration is selective prediction where the model can abstain from predicting. The latter has been studied in the context of machine learning (Chow, 1957; El-Yaniv et al., 2010) and its various applications (Rodriguez et al., 2019; Kamath et al., 2020; Zouhar et al., 2023).
Trust calibration is the relation between the user's trust in the system and the system's abilities (Lee and Moray, 1994; Turner et al., 2022; Zhang et al., 2020; Yin et al., 2019; Rechkemmer and Yin, 2022; Gonzalez et al., 2020; Vodrahalli et al., 2022). Specifically, Vodrahalli et al. (2022) explore jointly optimization of calibration (transformation of the AI system reported confidence) with human feedback. They conclude that uncalibrated models improve human-AI collaboration. However, apart from their experimental design being different from ours, they also admit to not studying the temporal effect of miscalibrations. Because of this, our results are not in contradiction.
Modeling User Trust.Ajenaghughrure et al. (2019); Zhou et al. (2019) predictively model the user trust in the AI system. While successful, they use psychological signals, such as EEG or GSR, for their predictions, which is usually inaccessible in the traditional desktop interface setting. Li et al. (2023) use combination of demographic information together with interaction history to predict whether the user is going to accept or reject AI system's suggestion. The field has otherwise focused on theoretical frameworks to explain factors that affect trust in mostly human-robot interaction scenarios (Nordheim et al., 2019; Khavas et al., 2020; Ajenaghughrure et al., 2021; Gebru et al., 2022).
Human AI Interaction over Time
We begin by providing a preliminary formalism for a human-AI interaction over time. It comprises of two interlocutors, an **AI system** and a **user**. At time \(t\), the user provides the AI system with an input or a question \(q_{t}\) and the AI system responds with an answer \(y_{t}\) along with a message comprising of its confidence in the answer \(m_{t}\). The user has two options, either they accept the AI's answer or reject it and try to find an answer themselves. The AI is either **correct** (\(a_{t}=1\)) or **incorrect** (\(a_{t}=0\)). The combination of correctness \(a_{t}\) and confidence \(m_{t}\) results in four different possibilities each with a different reward, or risk, shown in Figure 2. For example, confidently incorrect may lead to the user disastrously accepting a false answer while unconfidently correct will make the user spend more time finding the answer themselves.
During the interaction, the user learns a **mental model** (\(\Psi_{t}\)) of the AI system that they can use to reject and accept the AI's prediction. This mental model encapsulates something commonly referred to as **user trust**, which is, however, abstract and can not be measured directly. Instead, in our study, we rely on a proxy that describes a manifestation of this trust. We ask the user to make an estimate of their trust by tying it to a monetary reward. We assume that both depend on the given question \(q_{t}\), message \(m_{t}\), and history. The users place a bet between \(0\epsilon\) and \(10\epsilon\), i.e. \(u_{t}^{B}=U^{B}(q_{t},m_{t},\Psi_{t})\in[0\epsilon,10\epsilon]\). We formally define the user's decision to accept or reject the AI's answer as \(u_{t}^{D}=U^{D}(q_{t},m_{t},\Psi_{t})\)\(\in\)\(\{1,0\}\), given question \(q_{t}\), message \(m_{t}\), and history. In this work, by the user's mental model, we refer to it in the context of the features the user might use to decide how much they are willing to bet on the AI's prediction and how likely they are to agree with the AI and how \(\Psi_{t}\) changes over time.
### Study Setup
To study how user trust changes temporally we design a set of experiments with a sequence of interactions between a user and a simulated AI question-answering (QA) system. We recruit participants who are told that they will evaluate a QA system's performance on a sequence of question-answer pairs. The participants are shown the AI's produced confidence in its answer and then are instructed to use this confidence to assess its veracity. We term an instance of the AI's question, prediction, and confidence as a stimulus to the user. This method of using user interactions with a system to study user trust is similar to the study performed by Gonzalez et al. (2020). After the participant decides if the system is correct or incorrect, they bet from \(0\epsilon\) to \(10\epsilon\) on their decision about the system's correctness. We then reveal if the AI was correct or incorrect and show the user the gains or losses. The monetary risk is chosen intentionally in order for the participants to think deeply about the task. An alternative, used by Vodrahalli et al. (2022), is to simply ask for participants' confidence in the answer. While straightforward, we consider this to be inadequate in the crowdfunding setting. This decision is further supported by the fact that there is a difference between what participants report and what they do (Papenmeier et al., 2019). The average duration of the experiment was 6.7 minutes (Figure 9) and we collected 18k stimuli interactions (Table 3). See Figure 3 for an overview of the experiment design and Figure 13 for the annotation interface.1
Footnote 1: Online demo: zouharvi.github.io/trust-intervention
### Simulating AI
To investigate users' interactions, we simulate an AI system that outputs predictions and confidences. The prediction and confidence are produced using a pre-defined generative process.
Our simulated AI encompasses four modes for the generation of AI 'correctness' and confidence values. For miscalibrated questions, we have two modes: confidently incorrect (CI) and unconfidently correct (UC) modes, while for calibrated questions we use the accurate mode (control) to generate questions.
We define a conditional variable \(c_{t}\) which denotes the aforementioned conditions. Then, based on the condition \(c_{t}\), we have the following data generation process at timestep \(t\). In our data generation process, we first decide the AI correctness \(a_{t}\in[0,1]\) and then decide the confidence \(m_{t}\in[0\%,100\%]\) as below:
Figure 2: Possible correctness and confidence combinations of an AI system. Confidently incorrect and unconfidently correct are _miscalibrated_ while the rest is _calibrated_ (i.e. confidence corresponds to correctness.
\[a_{t}\sim\begin{cases}\text{Bernoulli}(0.7)&\text{if }c_{t}=\text{calibrated}\\ \text{Bernoulli}(0.0)&\text{if }c_{t}=\text{CI}\\ \text{Bernoulli}(1.0)&\text{if }c_{t}=\text{UC}\end{cases}\]
\[m_{t}\sim\begin{cases}\text{Uniform}(0.45,0.85)&\text{if }c_{t}=\text{cal.} \wedge a_{t}=1\\ \text{Uniform}(0.2,0.55)&\text{if }c_{t}=\text{cal.}\wedge a_{t}=0\\ \text{Uniform}(0.7,1.0)&\text{if }c_{t}=\text{CI }\wedge a_{t}=0\\ \text{Uniform}(0.1,0.4)&\text{if }c_{t}=\text{UC}\wedge a_{t}=1\end{cases}\]
To control for participants prior knowledge of the answers to the provided questions, we use randomly generated questions with fictional premises. We also experimented with questions sourced from a combination of Natural Questions Kwiatkowski et al. (2019) and TriviaQA Joshi et al. (2017). Unfortunately, this approach resulted in a lot of noise and instances of misconduct as participants would look up the answers to increase their monetary reward. See Appendix A for a description of stimuli generation. We note that the set of questions that the participants see have similar ECE (Expected Calibration Error) scores and we compare this to a real NLP model in Appendix B.
## 4 Experiments
We perform three types of experiments. In Section 4.1, we establish the different effects of confidently incorrect and unconfidently correct stimuli. Then, in Section 4.2 we see how the size of confidently incorrect intervention affects the users interaction with the AI system and in Section 4.3 explore if miscalibration is transferable between question types. Lastly, we predict the user interaction in Section 5.
### Effect of Miscalibration
We categorize AI behavior into four categories (Figure 2) and design an experiment to answer:
**RQ1:** Do miscalibrated examples affect user trust and alter how they interact with the AI system?
We posit that miscalibrated stimuli decrease user trust and subsequently verify the hypotheses:
**H1:** Confidently incorrect examples lower participants' trust in the system
**H2:** Unconfidently correct examples lower participants' trust in the system, but less so
**H3:** Miscalibrated examples reduce the human-AI collaboration performance
We assign each user to a particular condition. For the control group, we show 60 calibrated stimuli. For confidently incorrect and unconfidently correct groups, we show 10 calibrated, then 5 miscalibrated (according to the particular mode), and then 45 calibrated stimuli. We then observe, in particular, the user bet value and accuracy (Figure 4).
**Confidently incorrect intervention.** The control group, which was shown only calibrated stimuli, quickly learns to bet higher than at the beginning and becomes progressively better at it. The confidently incorrect intervention group has the same start but then is faced with the intervention, where they bet incorrectly because of the inaccurate confidence estimation. Even after the intervention, their bet values remain significantly lower and they are worse at judging when the AI is correct. The difference in bet values before and after intervention across confidence levels is also observable in Figure 11. We use the user bet value as a proxy for trust (\(\bar{u}^{B}_{\text{control}}=7\mathfrak{e},\bar{u}^{B}_{\text{CI}}=5\mathfrak{e}\)) and the user correctness of the bet (\(\bar{u}^{B}_{\text{control}}=89\%,\bar{u}^{B}_{\text{CI}}=78\%\)). The significances are \(p{<}10^{-4}\) and \(p{=}0.03\), respec
Figure 4: Average user bet values (y-axis) and bet correctness (point & histogram color) with no intervention (control, top) and confidently incorrect intervention (bottom). The spline shows a 3rd degree polynomial fitted with MSE. Transparent features are overlayed from the other graph. See Figure 14 for an annotated version.
Figure 3: Pipeline for a single stimulus out of 60. The maximum payout for a bet is 10%. UI Elements show possible user actions. See Figure 13 for screenshots.
tively, with two-sided t-test.
Owing to possible errors due to user randomization, we also performed a quasi-experimental analysis of our data to better quantify the effect of our intervention. Interrupted Time Series (Ferron and Rendina-Gobioff, 2014, ITS) analysis is a quasi-experimental method that allows us to assess and quantify the causal effect of our intervention on a per-user basis. ITS models the user's behavior before and after the intervention and quantifies the effect of the intervention. As the comparison is intra-user, it helps mitigate randomness arising from the inter-user comparison between treatment and control. We use ITS with ARIMA modeling, which is expressed as
\[u_{t}^{B}=\beta_{0}+\beta_{1}t+\beta_{2}\mathbb{1}_{t>15}+\epsilon_{t}+\ldots\]
where \(\mathbb{1}_{t>15}\) is the indicator variable indicating whether \(t\) is after the intervention.2 We are interested in the \(\beta_{2}\) values that indicate the coefficient of deviation from the user bet values before the intervention. Using ITS we find a \(\beta_{2}=-1.4\) (\(p{<}0.05\) with two-sided t-test), showing a significant drop in user bet value after the confidently incorrect intervention. We thus reject the null hypothesis and empirically verify **H1**.
Footnote 2: We ignore the moving average and error terms for brevity. See Appendix C for the full formula.
Unconfidently correct intervention.We now turn to the unconfidently correct intervention. From Figure 2, this type of intervention is symmetric to confidently incorrect apart from the fact that the baseline model accuracy is 70%. Figure 5 shows that users are much less affected by this type of miscalibration. A one-sided t-test shows a statistically significant difference between the average bet values across control and unconfidently correct groups (\(p{<}10^{-3}\) with two-sided t-test), which provides evidence for **H2**. Prior work in understanding psychology has found similar results where humans tend to be more sympathetic to underconfident subjects (Thoma, 2016). While applying findings from human-human interaction to human-AI interactions, we exercise caution and acknowledge the need for further research.
Consequences of lower trust.We now examine how user's fallen trust in the system affects their task performance. We assert that when the human's trust is calibrated, i.e., the human can effectively decide when the AI is likely to be right or wrong, signifies a strong collaboration. The overall monetary gain, which the user accumulates, acts as a good proxy for the collaboration. To analyze this difference we fit a linear model after the intervention to predict the rate of score increase. We model the cumulative gain at timestep \(t\) as \(t\cdot\alpha+c\) where \(\alpha\) is perceived as the expected gain in \(\mathfrak{e}\) per one interaction. We report \(\alpha\) for all three interventions. The results in Figure 6 show that without intervention, \(\alpha=5.2\), which is much higher than with unconfidently correct intervention (\(\alpha=4.2\)) and confidently incorrect intervention (\(\alpha=4.0\)). Notably, the confidently incorrect intervention has a more negative effect than the unconfidently correct intervention. We thus empirically validate **H3**, miscalibrated examples significantly reduce the performance of the human-AI team in the long run.
**RQ1 Takeaways:**
* User trust in the AI system is affected by miscalibrated examples.
* Confidently incorrect stimuli reduce trust more than unconfidently correct stimuli.
Figure 5: Average user bet values (y-axis) and bet correctness (point & histogram color) with unconfidently correct intervention. The spline shows 3\({}^{\text{rd}}\) degree polynomial fitted with MSE. Transparent features are overlaid from control group (Figure 4, top).
Figure 6: Average accumulated reward. The \(\alpha\) is the primary coefficient of linear fits after the 15\({}^{\text{th}}\) stimulus (after intervention). Lines in black are fit using ordinary least squares (\(p{<}10^{-4}\) with two-sided t-test).
### Intervention Size
Seeing a noticeable drop in user trust when faced with model confidence errors, we ask:
**RQ2:** How many miscalibrated examples does it take to break the user's trust in the system?
We do so by changing the number of confidently incorrect stimuli from original 5 to 1, 3, 7, and 9 and measure how much are users able to earn _after_ the intervention, how much they are betting immediately after the intervention and later one. We now discuss the average results in Table 1.
Upon observing an increase in intervention size, we note an initial decreasing trend followed by a plateau in \(\beta_{2}\) (4th column), implying a decrease in trust and user bet values, albeit only up to a certain level. Shifting our focus to accuracy, which measures the users' ability to determine the AI's correctness, we observe an initial decline as well (6th column). This decline suggests that users adapt to the presence of miscalibrated examples. However, after 40 examples (25 after intervention), the accuracy begins to rise (8th column) once again, indicating that users adapt once more. Next, we analyze \(\epsilon\) and \(\alpha\), which represent the total reward and the rate of reward increase. As the intervention size increases, both \(\epsilon\) and \(\alpha\) (2nd and 3rd columns) continue to decline. This means that the performance is what is primarily negatively affected. Based on these findings, we conclude that users possess the ability to adapt their mental models as they encounter more calibrated stimuli. However, the decreased trust still leads them to place fewer bets on the system's predictions, resulting in a diminished performance of the human-AI team.
**RQ2 Takeaways:**
* even 5 inaccurate confidence estimation examples are enough to long-term affect users' trust
* with more inaccurate confidence estimation examples, users are more cautious
### Mistrust Transferability
Increasingly, single machine learning models are used on a bevy of different topics and tasks (Kaiser et al., 2017; OpenAI, 2023). Owing to the distribution of the training data, the AI's performance will vary over input types. Although users are generally not privy to training data input types, Mozannar et al. (2022) show that users use this variance in model behavior to learn when the model is likely to be wrong. Inspired by this we ask:
**RQ3:** Do miscalibrated questions of one type of question affect user trust in the model's output for a different type of question?
In the next experiment, we simulate this by having two types of questions - either related to trivia or math. Then, we introduce a confidently incorrect intervention only for one of the types and observe the change in trust on the other one. For example, we introduce a confidently incorrect math questions and then observe how it affects trust on trivia stimuli. We refer to the type of questions we provide intervention for as "affected" questions while the other as "unaffected" questions. We run two sets of experiments where we mix trivia and math as affected questions.
The results in Figure 7 show that there is a gap between trust in the unaffected and affected stimuli type. The gap (\(\bar{u}_{\text{unaffected}}^{B}=5.4\epsilon,\bar{u}_{\text{affected}}^{B}=5.0\epsilon\)) is smaller than in the control settings (Figure 4) but still statistically significant (\(p\)\(<\)\(10^{-3}\) with two-sided t-test). This is supported by the analysis using ITS where we look for the relative change to compare user bet values before and after the intervention. We find a significant decrease in bet values for both affected and unaffected questions (\(\beta_{\text{affected}}=-0.94\), \(\beta_{\text{unaffected}}=-0.53\), \(p\)\(<\)\(0.05\) with two-sided t-test).
**RQ3 Takeaways:**
* Miscalibrated responses of one type affect the user's overall trust in the system
* Miscalibrated responses of one type further reduce user trust in examples of the same type
* Thus users also take into consideration question types as they create mental models of the AI system correctness
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{\(\leq\) 40} & \multicolumn{3}{c}{\(>\) 40} \\
**Int.** & \(\epsilon\) & \(\alpha\) & \(\beta_{2}\) & Bet & Acc. & Bet & Acc. \\ \hline
0 & 207 & 5.3 & - & 6.6 & 92\% & 6.8 & 92\% \\
1 & 188 & 4.8 & -0.5\({}^{\dagger}\) & 6.2 & 87\% & 6.4 & 88\% \\
3 & 193 & 5.0 & -0.8 & 5.9 & 84\% & 5.9 & 82\% \\
5 & 158 & 4.0 & -1.4 & 5.4 & 86\% & 5.3 & 90\% \\
7 & 147 & 3.7 & -1.2 & 5.5 & 80\% & 5.5 & 86\% \\
9 & 118 & 2.9 & -0.9 & 5.6 & 72\% & 5.8 & 84\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experiments with varying numbers of confidently incorrect stimuli. The \(\alpha\) and the gain \(\epsilon\) are shown from 19th sample (after intervention for all). The columns \(\leq 40\) and \(>40\) signify which stimuli in the sequence are considered. All \(\beta\) are with \(p<10^{-3}\) with two-sided t-test apart from \(\dagger\) which is \(p=0.24\).
## 5 Modeling User Trust
In human-AI collaboration systems, it is the collaboration performance that is more important than the accuracy of the AI system itself (Bansal et al., 2021). In such cases, an AI system that can understand and adapt to how its output is used is more better. An important challenge in understanding the user's behavior is estimating how likely the user is to trust the system. This would also allow the system to adapt when user trust in the system is low by perhaps performing a positive intervention that increases user trust. We apply our learnings from the previous section and show that systems that explicitly model the user's past interactions with the system are able to better predict and estimate the user's trust in the system. We now develop increasingly complex predictive statistical models of user behavior, which will reveal what contributes to the user process and affects trust. For evaluation, we use \(F_{1}\) and accuracy (agreement) and mean _absolute_ error (bet value) for interpretability.
* \(u_{t}^{D}\in\{T,F\}\) Will the user agree? (\(F_{1}\))
* \(u_{t}^{D}\in[0,10]\) How much will the user bet? (MAE)
### Local Decision Modeling
We start by modeling the user decision at a particular timestep without explicit access to the history and based only on the pre-selected features that represent the current stimuli and the aggregated user history. These are:
* Average previous bet value
* Average previous TP/FP/TN/FN decision. For example, FP means that the user decided the AI system was correct which was not the case.
* AI system confidence
* Stimulus number in user queue
Each sample (input) is turned into a vector,3 and we treat this as a supervised machine learning task for which we employ linear/logistic regression, decision trees, and multilayer perceptron (see code for details). We evaluate the model on a dev set which is composed of 20% of users4 which do not appear in the training data and present the results in Table 2. It is important to consider the uninformed baseline because of the class imbalance. The results show, that non-linear and autoregressive models predict the user decisions better although not flawlessly.
Footnote 3: For example, \(\langle\)avg. bet: 6.7, TP: 50%, FP: 10%, TN: 30%, FN: 10%, conf: 81%, i: 13\(\rangle\)
Footnote 4: \((30+30+30)\cdot 20\%\cdot 60=1080\) samples
Decision trees provide both the importance of each feature and also an explainable decision procedure for predicting the user bet (see Figure 15). They also offer insights into feature importance via Gini index (Gini, 1912). For our task of predicting bet value, it is: previous average user bet (63%), AI system confidence (31%), stimulus number (1%), and then the rest. The \(R^{2}\) feature values of linear regression reveal similar importance: previous average user bet (0.84), AI system confidence (0.78), previous average TP (0.70) and then the rest. The mean absolute error for bet value prediction of random forest models based only on the current confidence (stateless, i.e. no history information) is 2.9\(\epsilon\). This is in contrast to a mean absolute error of 2.0\(\epsilon\) for a full random forest model. This shows that the interaction history is key in predicting user trust.
### Diachronic Modeling
Recurrent networks can selectively choose to remember instances of the context that are crucial to making a prediction. Unlike alternate approaches
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **Will agree?** & **Bet value** \\ \hline Constant Baseline & 81.8\% (69.2\%) & 3.2\(\epsilon\) \\ \hline Random Forest (stateless) & 86.8\% (81.8\%) & 2.9\(\epsilon\) \\ \hline Logistic/Lin. Regression & 87.8\% (82.0\%) & 2.1\(\epsilon\) \\ Random Forest & 87.9\% (82.8\%) & 2.0\(\epsilon\) \\ Multi-Layer Perceptron & 87.7\% (82.9\%) & 1.9\(\epsilon\) \\ GRU & 89.7\% (85.0\%) & 1.8\(\epsilon\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of modeling various aspects of user decisions. ‘MAE Bet value’ column shows mean absolute error ‘Will agree?’ is formatted as ‘F1 (ACC)’. See Section 5 for a description of target variables. ‘Stateless’ uses only confidence as an input feature.
Figure 7: Average user bet values (y-axis) and bet correctness (point & histogram color). The spline shows a 3\({}^{\text{rd}}\) degree polynomial fitted with MSE. ‘Affected’ is the question type that undergoes confidently incorrect intervention.
that use an average of mean interactions a user had with a system, a GRU can effectively track where user trust in the system underwent a large change. To test this, we look at the information in the hidden state of the GRU we train on the user interactions (see Figures 8 and 12). The GRU's internal state is able to identify areas that caused shifts in the user's trust and changed their future interactions. This peak is much higher for the confidently incorrect than for the unconfidently correct interventions which is in line with our conclusion that confidently incorrect examples deteriorate trust more than unconfidently correct examples.
## 6 Discussion
We now contextualize our findings to real-world applications and discuss the differences and their implications.
Miscalibration impacts user trust.Even a small (5) number of miscalibrated examples affects how users trust the system in the future. In our controlled setting we consider a symmetric risk-reward setup. However, past work has shown that trust is linked to risk. In real applications, the reward and cost of trusting the system might not be the same. For example in an AI system detecting cancer, having a doctor manualy do the screening has lower cost than a misdiagnosis.
Confidently incorrect examples lower trust more than confidently correct examples.Standard methods of evaluating model calibration, such as the Expected Calibration Error (ECE), do not take this into account. A holistic calibration metric should take these user-centric aspects into account, particularly, how users interpret these confidence scores and how it affects their trust in the system.
Miscalibration effects persist and affect user behavior over long time spans.In our setup, users interact with the system continuously over a session. After the intervention, their trust decreases over several interactions. Real-life user interactions with AI systems might not always follow this pattern. For example, a user might use a search engine in bursts when they have an information need. The larger time intervals between interactions might dampen the strong feelings of trust or mistrust.
Mistrust transfers between input types.Our experiments reveal that the model's miscalibration on a certain type of input also reduces the user's trust in the model on other types of inputs. In real-world applications, AI systems are generally presented to users as an abstraction and the user may or may not be aware of the underlying workings of the system. For example, recent user-facing LLMs often employ techniques such as a mixture-of-experts or smaller specialized models that perform different tasks. In such cases, the transfer of miscalibration can be erroneous.
RNN outperforms linear models in modeling user trust.This is indicative that modeling user trust is complex and requires more sophisticated non-linear models. Like most deep learning models, a recurrent network requires more data for accurate prediction. However, user-facing applications can collect several features and with more data deep learning models might generalize better and help us dynamically track and predict user trust.
## 7 Conclusion
When interacting with AI systems, users create mental models of the AI's prediction and identify regions of the system's output they can trust. Our research highlights the impact of miscalibrations, especially in confidently incorrect predictions, which leads to a notable decline in user trust in the AI system. This loss of trust persists over multiple interactions, even with just a small number of miscalibrations (as few as five), affecting how users trust the system in the future. The lower trust in the system then hinders the effectiveness of human-AI collaboration. Our experiments also show that user mental models adapt to consider different input types. When the system is miscalibrated for a specific input type, user trust is reduced for that type of input. Finally, our examination of various trust modeling approaches reveals that models capable of effectively capturing past interactions, like recurrent networks, provide better predictions of user trust over multiple interactions.
Figure 8: Vector similarity (inner product) between subsequent hidden states of the recurrent GRU model. See Figure 12 for comparison across queues.
Future work
Regaining trust.We examined how miscalibrated examples shatter user trust and we show that this effect persists. We also show that this lack of trust adversely affects human-AI collaboration. Understanding how to build user trust in systems could greatly aid system designers.
Complex reward structures.In our experiments, the user is rewarded and penalized equally when they are correct and incorrect. This reward/penalty is also instantly provided to the user. This might not hold for other tasks, for example, in a radiology setting, a false negative (i.e. missing a tumor) has a very large penalty. Past work in psychology has shown that humans suffer from loss-aversion [20] and are prone to making irrational decisions under risk. [19]. We leave experimentation involving task-specific reward frameworks to future work.
Ethics Statement
The participants were informed that their data (anonymized apart from interactions) would be published for research purposes and had an option to raise concerns after the experiment via online chat. The participants were paid together with bonuses, on average, \(\simeq\)$24 per hour, which is above the Prolific's minimum of $12 per hour. The total cost of the experiment was \(\simeq\)$1500.
Broader impact.As AI systems get more ubiquitous, user trust calibration is increasingly crucial. In human-AI collaboration, it is important that the user's trust in the system remains faithful to the system's capabilities. Over-reliance on faulty AI can be harmful and caution should be exercised during deployment of critical systems.
Limitations
Simulated setup.Our experiments were conducted on users who were aware that their actions were being observed, which in turn affects their behavior [18]. We hope our work inspires large-scale experiments that study how users interact directly with a live system.
Domain separation.In the Type-Sensitivity Experiment (Section 4.3) we consider only two question types, trivia and math, and provide the participant with an indicator for the question type. In real-world usage, the user might provide inputs that may not be clearly distinct from each other.
Monetary reward.A user interacting with an information system to seek information. In our experiments, we replace this goals with a monetary reward. This misalignment in the motivation also affects the participant behavior [10].
## Acknowledgments
We thank Hussein Mozannar and Danish Pruthi for their feedback at various stages of the project. We also thank Shreya Sharma, Abhinav Lalwani, and Niharika Singh for being our initial test subjects for data collection. MS acknowledges support from the Swiss National Science Foundation (Project No. 197155), a Responsible AI grant by the Hasler-stifung; and an ETH Grant (ETH-19 21-1).
|
2305.17594 | Fully Automatic Gym Exercises Recording: An IoT Solution | In recent years, working out in the gym has gotten increasingly more
data-focused and many gym enthusiasts are recording their exercises to have a
better overview of their historical gym activities and to make a better
exercise plan for the future. As a side effect, this recording process has led
to a lot of time spent painstakingly operating these apps by plugging in used
types of equipment and repetitions. This project aims to automate this process
using an Internet of Things (IoT) approach. Specifically, beacons with embedded
ultra-low-power inertial measurement units (IMUs) are attached to the types of
equipment to recognize the usage and transmit the information to gym-goers and
managers. We have created a small ecosystem composed of beacons, a gateway,
smartwatches, android/iPhone applications, a firebase cloud server, and a
dashboard, all communicating over a mixture of Bluetooth and Wifi to distribute
collected data from machines to users and gym managers in a compact and
meaningful way. The system we have implemented is a working prototype of a
bigger end goal and is supposed to initialize progress toward a smarter, more
efficient, and still privacy-respect gym environment in the future. A
small-scale real-life test shows 94.6\% accuracy in user gym session recording,
which can reach up to 100\% easily with a more suitable assembling of the
beacons. This promising result shows the potential of a fully automatic
exercise recording system, which enables comprehensive monitoring and analysis
of the exercise sessions and frees the user from manual recording. The
estimated battery life of the beacon is 400 days with a 210 mAh coin battery.
We also discussed the shortcoming of the current demonstration system and the
future work for a reliable and ready-to-deploy automatic gym workout recording
system. | Sizhen Bian, Alexander Rupp, Michele Magno | 2023-05-27T23:12:25Z | http://arxiv.org/abs/2305.17594v1 | # Fully Automatic Gym Exercises Recording: An IoT Solution
###### Abstract
In recent years, working out in the gym has gotten increasingly more data-focused and many gym enthusiasts are recording their exercises to have a better overview of their historical gym activities and to make a better exercise plan for the future. As a side effect, this recording process has led to a lot of time spent painstakingly operating these apps by plugging in used types of equipment and repetitions. This project aims to automate this process using an Internet of Things (IoT) approach. Specifically, beacons with embedded ultra-low-power inertial measurement units (IMUs) are attached to the types of equipment to recognize the usage and transmit the information to gym-goers and managers. We have created a small ecosystem composed of beacons, a gateway, smartwatches, android/H Phone applications, a firebase cloud server, and a dashboard, all communicating over a mixture of Bluetooth and With to distribute collected data from machines to users and gym managers in a compact and meaningful way. The system we have implemented is a working prototype of a bigger end goal and is supposed to initialize progress toward a smarter, more efficient, and still privacy-respect gym environment in the future. A small-scale real-life test shows 94.6% accuracy in user gym session recording, which can reach up to 100% easily with a more suitable assembling of the beacons. This promising result shows the potential of a fully automatic exercise recording system, which enables comprehensive monitoring and analysis of the exercise sessions and frees the user from manual recording. The estimated battery life of the beacon is 400 days with a 210 mAh coin battery. We also discussed the shortcoming of the current demonstration system and the future work for a reliable and ready-to-deploy automatic gym workout recording system.
Workouts recording, Exercise recording, Internet of Things +
Footnote †: 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## I Introduction
Regularly visiting the gym has been an important part of a healthy lifestyle for many people worldwide [1]. Thus, gym activity digitalization has become a popular topic in industry and academia. Progress in the "smartification" of gym exercising is currently moving on many different paths, and no solution has been widely and properly adopted. One can find many pieces of workout equipment, but those are usually still seen as gadgets used for more casual workout programs rather than for a fully functioning gym ecosystem. A fully automatic gym exercise tracking enables gym enthusiasts thoroughly focus on their workouts and frees them from swiping and tapping a commercial App like Bodyspace to record their workout history [2].
Current solutions for automatic gym activity recording rely either on the applications installed on smart wearable devices, which require a lot of interaction from the user, or the embedded algorithms for automatic gym recognition based on the data collected from different sensors. Table I listed the recent research work on gym exercise recognition sorted by the applied sensing modalities. Wearable solutions are inherently popular as no extra off-body devices are needed. Thus, the inertial measurement unit was widely explored as it is embedded in every portable smart device and enjoys the advantages of cost, power consumption, and privacy. In [3], the authors used a single chest-mounted tri-axial accelerometer, followed by an LSTM neural network, to recognize a wide range of gym-based free-weight exercises. To be noticed, although the recognition accuracy is impressive (82%) for 42 exercises, the data acquisition process involves only four athletes and the testing data set includes data from all four athletes. And the need for a rather large body harness for information gathering also limits the possibility of widespread use. Also, the chest-mounted sensor resulted in the misclassification of exercises that do not involve chest movement. Thus, to get better accuracy, extra inertial measurement units are needed to cover all the moving body parts during the exercises. Tian et al. [4] analyzed the positions of the sensor on the body on the recognition performance, a stratification fusion method using two sensors was proved to efficiently identify eight kinds of gym exercises with an accuracy of 91.26%. Besides the inertial sensors, the EMG [5] and passive capacitive sensors [6, 7] were also explored in a wearable way for exercise recognition. Capacitive sensing was proved to be a competitive motion sensing approach for wearables enjoying the same advantage of low cost and low power consumption [15]. With non-wearable sensors, including the image-based approaches, sensors are normally deployed near the user, either targeting the user for signal perceiving [9, 10, 12] or the equipment [11]. To be noticed, all the above-mentioned sensors for exercise recognition were explored on a very limited number of exercises. Large-scale exercise recognition is still challenging the efficiency of those sensing modalities. The image-based solution backed by the computer version technique could solve this problem as it perceives the full body movement through continuous frames. Thus, a
near-perfect recognition accuracy [13, 14] could be achieved by using advanced machine learning skills. However, a few factors that the vision solution suffers from makes it hard for broad deployment, like privacy issues, as not only the data from user could be perceived and processed, and the high computing load caused by the large data volumes, etc.
This work addresses the above-described problem, aiming to provide gym enthusiasts with a fully automatic exercise recording system. More specifically, the paper proposes and presents an IoT system, including hardware, firmware, and cloud platform designed for equipment activity data sensing and distribution. On the hardware side, the inertial sensor-embedded, ultra-low-power beacons will be developed in advertising mode and deployed to the gym equipment, recognizing machine activity (the machine type and the repetition number) and transmitting the information via wireless packages. Nearby users could use their smartwatch or smartphone to scan the advertisements around and record the one with the strongest Bluetooth Received Signal Strength Indication (RSSI), which means this advertisement was generated by the moving equipment the user is working on. Meanwhile, a gateway will be deployed to listen to all the packages and send the processed data to a Firebase cloud server. Thus a dashboard-like interface can be presented to the gym manager for statistical data analysis, like everyday equipment usage situation, studio crowd analysis, etc.
Compared with other solutions, this IoT approach enjoys the following advantages:
1. Low cost and low power consumption. Only the inertial sensor-embedded beacons are needed for equipment status data perceiving, and a very light computing load is needed for data processing. A coin battery could support over one year of life for the beacon.
2. Easy of deployment. Beacons are deployed onto the existing machines in a plug-and-play manner, making the IoT system economically deployable and easy to maintain.
3. High accuracy. The broadcasting package includes both the equipment type (pre-configured) and the repetition number that can be reliably sensed by the inertial sensor in the beacon.
4. Privacy-respect. Only the equipment status data is broadcasted. No personal user data is involved in the system.
5. Large number of exercise types. Almost all types of exercises assisted by the equipment in a gym studio can be covered.
6. Extra benefits for gym managers. The equipment status advertisements broadcasted by the beacons are collected by the gateway and presented to the gym manager in the form of a cloud-based dashboard for potential services like crowd density monitoring, equipment maintenance, etc.
The proposed IoT solution for fully automatic gym exercise recording aims to provide considerable improvements in the area as we're using considerably smaller and energy-efficient devices. Unlike current commercially available intelligent gym equipment with real-time data visualization and feedback, the proposed system is economical for integration into existing gym equipment. The digitalization of the gym studio with the IoT approach provides benefits for both users and managers,
with which they could make more informed decisions.
In summary, we have the following two contributions:
1. the design and implementation of an IoT system for digitalizing the gym studio with minimum cost, providing fully automatic gym exercise recording for users and equipment usage status monitoring for managers.
2. With the real-life experiment, we showed the system's feasibility in digitalizing gym activities with near-perfect accuracy in exercise counting and information broadcasting. As a result, both the user and the manager get reliable real-time data of their interest.
## II System architecture
To reach the target of interest, three module components are needed for prototyping the ecosystem, as Figure 1 depicts. At first, a beacon embedded with a low-power inertial sensor working in advertisement mode is deployed on the moving part of the equipment, aiming to sense the working status of it. Second, a smartwatch or smartphone that can perceive the advertisement package on the user side is needed. Without manually recording the gym session, the user's smart device automatically records the gym activities by checking the received advertisement package that has the RSSI and equipment type and repetition information included. Third, a gateway is used to listen to all the beacon packages and send them to the cloud database, thus, a dashboard can be presented to the gym manager with real-time gym studio usage situation monitoring. The following subsections give a detailed description of each component used in the demonstrated ecosystem.
### _Beacon and Gateway_
The heart of the beacon is an NRF52840 from Nordic Semiconductor, an advanced multi-protocol system-on-chip ideally suited for ultra-low power wireless applications. The embedded 2.4 GHz transceiver supports BLE-related functions like advertising and communication. The sensing component is the LIS3DH from STMicroelectronics, an ultra-low-power, high-performance three-axis linear accelerometer. The inertial sensor features ultra-low-power operational modes that allow advanced power saving and smart embedded functions. Specifically, the inertial sensor can be configured to generate interrupt signals using two independent programmable inertial events like wake-up and free-fall. By configuring the thresholds and timing of interrupt, the inertial sensor could work mostly in low power mode and generate an interrupt signal to NRF52840 only when a threshold is reached in a particular direction as defined. The similar working status is also located in NRF52840, as it works mainly in sleeping mode and changes to advertising mode only when an interrupt signal from the inertial unit is received. Figure 2 presents how the power is consumed on the beacon measured by the Keysight N6705C DC power analyzer, showing an average current of the beacon with 0.03 mA in ideal mode and 0.04mA in advertising mode (four repetitions caused eight packages advertised in twenty seconds, we set double advertisements to decrease the loss rate of package perceiving), which indicates a battery life of around 400 days when powered by the CR2032 coin battery (210mAh, 80% of the charge is regarded as efficient powering source), assuming that the beacon will work in advertising mode for six hours per day and idle mode in the left time. The iBeacon-form advertised package's universally unique identifier (UUID) chunk is pre-programmed to represent the gym equipment type. The minor data is updated for broadcasting the repetition number, which corresponds to the number of interrupts received from the inertial sensor. The repetition count is accumulated until no new interrupt signal is received in five seconds and reset to zero.
The gateway is an ESP32C3 SoC based on the open-source RISC-V architecture. We used it for its simplicity of programming and extensive WiFi and Bluetooth support. The availability of WiFi and Bluetooth 5.0 connectivity facilitates a variety of use cases based on dual connectivity. Especially the Bluetooth long-range support enables networking with great coverage and improved usability. To filter out any other Bluetooth advertisements, we stored each beacon identifier in the gateway beforehand so that it only focuses on the specific devices and ignores all other packages that are not advertised from the beacons. Using a permanently loaded repetition vector, the gateway keeps track of the current status of each machine. Each configured beacon and its correlating identifier is bound to a specific index in the array that gets updated with all received packages every couple of seconds. Every update also triggers the HTTP refreshing protocol. The gateway sends an updated HTTP PATCH packet to the Google database using an HTTP protocol. These packets consist of a JSON-form body with the repetition vector, the identifier of the respective machine, and an HTTP header that describes
Fig. 1: System overview
Fig. 2: System power consumption: advertising mode (left) and idle mode (right)
the sent information. The advertisement listening time is set to 3 seconds, and each HTTP packet transmission is measured with 0.9 seconds. For the gateway, the power consumption is not considered as it has no limitation of installing position and can be powered from a wire source.
### _Smarrwatch and Dashboard_
In the demonstration system, a programmable smartwatch is used by the gym user as the wearable device for gym session recording, and a dashboard is used for gym equipment status presentation.
The smartwatch (LilyGo T-Wristband-NRF52 [16]) is commercially available and programmable. The watch features an NRF52832 SoC, an inertial measurement unit, a PCF8563 for Real-time clock/calendar, a capacitive touch button, a 160x80 resolution, general 0.96 inch LCD display Module (as Figure 1 right depicts), and a rechargeable lithium battery. A tiny programming board is delivered together for flashing. We programmed the firmware for the function switch by touching the capacitive button. As long as a "long-touch" occurs, the watch steps into Bluetooth scanning mode and perceives the nearby broadcasted packages. The RSSI value of each package is used to pick up the wearer's exercise, as a stronger RSSI means a closer distance from the beacon to the watch.
For the dashboard, we used an online service called Freeboard, developed by Bugs Labs, Inc., for real-time and interactive IoT visualization. The platform uses a simple GUI to stitch together panels in a grid layout. We used it to create vertical panels corresponding to each piece of equipment and displayed all available information (real-time and historical) related, as Figure 3 displays.
Besides the smartwatch and dashboard, we also developed two smartphone apps for the user and the manager, as Figure 4 shows. The user one works similarly with the smartwatch. By activating the App, the smartphone's Bluetooth starts scanning the beacon packages and making the record. The manager one is connected to the firebase database, allowing internal logic and communication with the server. The two Apps aim to provide an alternative approach to data perception and presentation.
## III Real-life experiment
To preliminarily test the system's feasibility, we conducted the experiment in a university gym studio. Figure 5 is a screenshot of the recorded video. The gateway was connected to the in-house WiFi network and attached to our laptop to see the live debug feed. The beacon was attached to the part of the machines that undergoes the most movement during use(here, we first tested the equipment of leg-curl, leg-extension, and lat-pull). The readout from the dashboard and the smartwatch are used to verify the feasibility of the gym recording system. For the battery reason, the smartwatch is connected to the laptop but still close to the wrist. The exercise logos shown on the dashboard and smartwatch are not synchronized as they are not crucial to the test result during a preliminary experiment. We did three sets on each piece of equipment. Table II lists the counting result. As can be seen, both the dashboard and smartwatch can present reliable repetition values most of the time. However, we noticed two imperfections in the preliminary test. First, the dashboard missed the last package quite often, such as set three on Leg-Curl and Leg-Extension, and set one on Lat-Pull. The reason behind locates in the advertisement scanning time of the gateway, as the gateway has to swing between WIFI uploading mode and Bluetooth scanning mode, which holds for 0.9 seconds and 3 seconds separately. Thus, the gateway was losing the advertisement package occasionally. As the repetition value is broadcasted directly from the beacon, the counting error caused by package loss will not be accumulated. Thus, we see it as a negligible flaw (or using more than one gateway to guarantee a full detection rate of the advertisement package). Second, during set three of Lat-Pull, the first six repetitions in the ten were successfully captured by the smartwatch and the gateway
Fig. 4: Apps for user (left) and manager (right)
Fig. 5: Experiment with the crucial components in an university gym studio
Fig. 3: Dashboard
and showed in the dashboard. Still, as the beacon was not tightly attached to the moving frame of the equipment, a slight displacement caused the beacon to lose the orientation used for interrupt generation. This revealed a critical fact that the deployment site and direction of the beacon play an important role in reliable equipment status detection.
## IV Future work
The preliminary experiment demonstrates the feasibility of the proposed system for automatic gym exercise recording. However, a systematic and large-scale experiment still needs to be included to check the practical performance (we have only a few beacons and one smartwatch currently at hand). Thus, for future work, we will first make more critical hardware components for the following long-term and extensive experiment. Secondly, the smartwatch and dashboard are currently only for real-time data presentation. However, data storage space is needed for historical data playing. This function module on both the user and manager sides will be implemented in the following system versions. Thirdly, to cover more exercises other than the equipment-assistant ones, like the free weight exercises, this straightforward interrupt-triggered advertisement will be invalid because of the moving pattern complexity of the weights. Therefore, a precise data processing model will be explored to abstract the repetition information. Lastly, users may be interested not only in the repetition of an exercise but also in the weight loaded, which is out of the current system's ability. To address this, a separate sensing module is needed to provide extra exercise information.
## V Conclusion
This paper proposed an economical IoT solution for gym studio digitalization and automatic gym exercise recording. With low-power motion sensor embedded beacons, the usage status of each specific piece of gym equipment will be advertised through the beacon that features a battery life of around 400 days. Users' smart devices working in the advertisement listening mode will record the packages without interaction from the users. Thus the target of automatic gym exercise recording can be achieved. Meanwhile, a gateway collects all advertisements to monitor the gym equipment usage status. A dashboard enables the gym manager to have quick and visual access to all equipment and to make potential equipment upgrades or necessary machine additions. Depending on the daily usage, gym managers can make informed choices about their gym facility's future. A preliminary real-life test with the critical system components shows the feasibility of the proposed gym exercise assistive system with 94.6% accuracy, which can easily reach up to 100% with a more suitable assembling. A long-term comprehensive experiment will be carried out in the future to demonstrate the practicability of the proposed solution.
|
2302.04229 | Weighted Edit Distance Computation: Strings, Trees and Dyck | Given two strings of length $n$ over alphabet $\Sigma$, and an upper bound
$k$ on their edit distance, the algorithm of Myers (Algorithmica'86) and Landau
and Vishkin (JCSS'88) computes the unweighted string edit distance in
$\mathcal{O}(n+k^2)$ time. Till date, it remains the fastest algorithm for
exact edit distance computation, and it is optimal under the Strong Exponential
Hypothesis (STOC'15). Over the years, this result has inspired many
developments, including fast approximation algorithms for string edit distance
as well as similar $\tilde{\mathcal{O}}(n+$poly$(k))$-time algorithms for
generalizations to tree and Dyck edit distances. Surprisingly, all these
results hold only for unweighted instances.
While unweighted edit distance is theoretically fundamental, almost all
real-world applications require weighted edit distance, where different weights
are assigned to different edit operations and may vary with the characters
being edited. Given a weight function $w: \Sigma \cup \{\varepsilon \}\times
\Sigma \cup \{\varepsilon \} \rightarrow \mathbb{R}_{\ge 0}$ (such that
$w(a,a)=0$ and $w(a,b)\ge 1$ for all $a,b\in \Sigma \cup \{\varepsilon\}$ with
$a\ne b$), the goal is to find an alignment that minimizes the total weight of
edits. Except for the vanilla $\mathcal{O}(n^2)$-time dynamic-programming
algorithm and its almost trivial $\mathcal{O}(nk)$-time implementation, none of
the aforementioned developments on the unweighted edit distance apply to the
weighted variant. In this paper, we propose the first
$\mathcal{O}(n+$poly$(k))$-time algorithm that computes weighted string edit
distance exactly, thus bridging a fundamental gap between our understanding of
unweighted and weighted edit distance. We then generalize this result to
weighted tree and Dyck edit distances, which lead to a deterministic algorithm
that improves upon the previous work for unweighted tree edit distance. | Debarati Das, Jacob Gilbert, MohammadTaghi Hajiaghayi, Tomasz Kociumaka, Barna Saha | 2023-02-08T17:59:03Z | http://arxiv.org/abs/2302.04229v1 | # Weighted Edit Distance Computation: Strings, Trees and Dyck
###### Abstract
Given two strings of length \(n\) over alphabet \(\Sigma\), and an upper bound \(k\) on their edit distance, the algorithm of Myers (Algorithmica'86) and Landau and Vishkin (JCSS'88) from almost forty years back computes the unweighted string edit distance in \(\mathcal{O}(n+k^{2})\) time. Till date, it remains the fastest algorithm for exact edit distance computation, and it is optimal under the Strong Exponential Hypothesis (STOC'15). Over the years, this result has inspired many developments, including fast approximation algorithms for string edit distance as well as similar \(\tilde{\mathcal{O}}(n+\mathrm{poly}(k))\)-time algorithms for generalizations to tree and Dyck edit distances. Surprisingly, all these results hold only for unweighted instances.
While unweighted edit distance is theoretically fundamental, almost all real-world applications require weighted edit distance, where different weights are assigned to different edit operations (insertions, deletions, and substitutions), and the weights may vary with the characters being edited. Given a weight function \(w:\Sigma\cup\{\varepsilon\}\times\Sigma\cup\{\varepsilon\}\to\mathbb{R}_{\geq 0}\) (such that \(w(a,a)=0\) and \(w(a,b)\geq 1\) for all \(a,b\in\Sigma\cup\{\varepsilon\}\) with \(a\neq b\)), the goal is to find an alignment that minimizes the total weight of edits. Except for the vanilla \(\mathcal{O}(n^{2})\)-time dynamic-programming algorithm and its almost trivial \(\mathcal{O}(nk)\)-time implementation (\(k\) being an upper bound on the sought total weight), none of the aforementioned developments on the unweighted edit distance applies to the weighted variant. In this paper, we propose the first \(\mathcal{O}(n+\mathrm{poly}(k))\)-time algorithm that computes weighted string edit distance exactly, thus bridging a fundamental decades-old gap between our understanding of unweighted and weighted edit distance. We then generalize this result to weighted tree and Dyck edit distances, bringing in several new techniques, which lead to a deterministic algorithm that improves upon the previous work even for unweighted tree edit distance. Given how fundamental weighted edit distance is, we believe our \(\mathcal{O}(n+\mathrm{poly}(k))\) algorithm for weighted edit distance will be instrumental for further significant developments in the area.
## 1 Introduction
String edit distance and its several variants have been studied for decades since the 1960s [14, 15, 16]. Historically, most work on these problems assumed that the edit operations have unit weights in order to simplify the problem and streamline theoretical results. Till date, the
fastest exact algorithm for unweighted edit distance is due to Myers [14] and Landau and Vishkin [11], who obtained an \(\mathcal{O}(n+k^{2})\)-time solution for two strings of length \(n\) with an upper bound \(k\) on their edit distance. This bound is now known to be optimal (up to subpolynomial factors) under the Strong Exponential Hypothesis [1]. Over the years, the Holy-Grail result of [14, 11] has inspired many developments on fast approximation algorithms for (unweighted) string edit distance [1, 1, 15, 16] and similar \(\tilde{\mathcal{O}}(n+\mathrm{poly}(k))\)-time1 algorithms for generalizations such as the (unweighted) Dyck and tree edit distances [1, 13, 15, 17]. However, almost all real-world applications require weighted edit distance, where different weights are assigned to different edit operations (insertions, deletions, and substitutions), and the weights may vary with the characters being edited [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. As a result, there is a major gap between the theoretical results of prior research and real-world utility of these results. In this paper, we bridge this fundamental gap between the understanding of unweighted and weighted edit distance: We provide the first non-trivial algorithm computing the weighted edit distance and its generalizations to weighted tree and Dyck tree edit distance.
Footnote 1: The \(\tilde{\mathcal{O}}(\cdot)\) notation suppresses factors polylogarithmic in the input size \(n\).
More specifically, in this paper we propose the first \(\mathcal{O}(n+\mathrm{poly}(k))\)-time algorithm for exact weighted edit distance computation in which, given a weight function \(w:\Sigma\cup\{\varepsilon\}\times\Sigma\cup\{\varepsilon\}\to\mathbb{R}_{\geq 0}\) (normalized so that \(w(a,b)\geq 1\) for \(a\neq b\)), the goal is to find an alignment that minimizes the total weight of edit operations (insertions, deletions, and substitutions) assuming that it does not exceed a provided threshold \(k\). Strikingly, except for the vanilla \(\mathcal{O}(n^{2})\)-time dynamic-programming algorithm and its almost trivial \(\mathcal{O}(nk)\)-time implementation, none of the aforementioned developments on unweighted edit distance apply to this weighted variant. We then generalize our result to weighted tree and Dyck edit distances, bringing in several new techniques that lead to improvements even for the unweighted tree edit distance problem: As a byproduct of our results, we present a deterministic \(\mathcal{O}(n+k^{7}\log k)\)-time solution, which is much faster than the randomized \(\mathcal{O}(n\log n+k^{15}\log^{2}k)\)-time algorithm of Das, Gilbert, Hajiaghayi, Kociumaka, Saha, and Saleh [1].
Can this apparent lack of progress in weighted edit distance computation be explained? As we observe later, even basic properties like monotonicity, which was fundamental for efficient computation of unweighted edit distance [14, 11], break down when considering weighted operations. This precludes any local matching approach, which seemed necessary for a linear-time algorithm for bounded (unweighted) edit distance [14, 11, 15, 17, 18]; instead, a global view of the sequences is needed to find matching substrings and yet maintain the linear runtime. Faced with such barriers, our biggest contribution is a kernelization method for weighted edit distance, not just for strings, but also for tree and Dyck edit distance instances. Interestingly, our kernels are weight-agnostic, that is, the kernelization algorithms do not need to know the weight function \(w\). Given how fundamental weighted edit distance is, we believe our \(\mathcal{O}(n+\mathrm{poly}(k))\) algorithm for weighted edit distance will be instrumental for further significant developments in the area.
### Related Work
String Edit Distance:_Edit distance_ is one of the most fundamental problems in computer science studied since the 1960s [13, 12, 14]. In the unweighted edit distance problem, given two strings of length at most \(n\), the goal is to find the minimum number of edit operations (insertions, deletions, and substitutions) required to transform one string into the other. Given a parameter \(k\) as an upper bound on the edit distance, an algorithm proposed in the 1980s by
Myers [14] and Landau and Vishkin [21] achieves this task in \(\mathcal{O}(n+k^{2})\) time by combining suffix trees with an elegant greedy approach. As long as \(k=\mathcal{O}(\sqrt{n})\), the running time of the above algorithm in linear in \(n\). For larger values of \(k\), approximation algorithms for edit distance have been studied extensively [13, 15, 1, 16, 17], especially recently [19, 18, 19, 20, 21, 22]. This culminated with the currently best bound by Andoni and Nosatzki [1], who obtained a constant-factor approximation algorithm with running time \(\mathcal{O}(n^{1+\epsilon})\) time for any constant \(\epsilon>0\). All of these works require monotonicity and assume that an optimal solution can be extended easily if matching suffixes are added to both strings, none of which may hold in weighted edit distance instances. As a result, the state-of-the-art approximation algorithm for weighted edit distance, by Kuszmaul [16], offers much worse trade-off, with an \(\mathcal{O}(n^{\tau})\)-factor approximation in \(\tilde{\mathcal{O}}(n^{2-\tau})\) time for any \(0\leq\tau\leq 1\).
Tree Edit Distance:The _tree edit distance_ problem, first introduced by Selkow [13], is a generalization of edit distance in which the task is to compute a measure of dissimilarity between two rooted ordered trees with node labels. In the unweighted version of tree edit distance, every node insertion, deletion, or relabeling operation has unit cost. The problem has numerous applications in compiler optimization [10], structured data analysis [1, 17, 18], image analysis [20], and computational biology [14, 21, 22, 23, 24]. The current best bound on running time of an algorithm for finding exact tree edit distance is due to Durr [19] who obtained an \(\mathcal{O}(n^{2.9149})\)-time algorithm for the problem, after a long series of improvements from \(\mathcal{O}(n^{6})\)[20] to \(\mathcal{O}(n^{4})\)[20], to \(\mathcal{O}(n^{3}\log n)\)[20], to \(\mathcal{O}(n^{3})\)[10], and to \(\mathcal{O}(n^{2.9546})\)[14]. Moreover, there is a \((1+\epsilon)\)-approximation algorithm for tree edit distance with running time \(\tilde{\mathcal{O}}(n^{2})\) time due to Boroujeni, Ghodsi, Hajiaghayi, and Seddighin [11]. Recently, Seddighin and Seddighin [20] gave an \(\mathcal{O}(n^{1.99})\)-time \((3+\epsilon)\)-approximation algorithm for tree edit distance (building on a previous \(\tilde{\mathcal{O}}(n)\)-time \(\mathcal{O}(\sqrt{n})\)-factor approximation algorithm of [11]). Furthermore, Das, Gilbert, Hajiaghayi, Kociumaka, Saha, and Saleh [12] obtained an \(\tilde{\mathcal{O}}(n+k^{15})\)-time algorithm for exact tree edit distance with an upper bound \(k\) on the distance (see also an \(\tilde{\mathcal{O}}(nk^{2})\)-time algorithm of Akmal and Jin [1], which improves upon a previous algorithm with running time \(\mathcal{O}(nk^{3})\) for the bounded tree edit distance problem [23]).
As far as the weighted tree edit distance is concerned, the fastest algorithm, by Demaine, Mozes, Rossman, and Weimann [10], takes \(\mathcal{O}(n^{3})\) time, which matches the conditional lower-bound of Bringmann, Gawrychowski, Mozes, and Weimann [1] (earlier conjectured by Abboud [1]). Specifically, there is no truly subcubic-time algorithm for weighted tree edit distance unless APSP has a truly subcubic-time solution. The lower bound still holds for trees over a constant-size alphabet unless the weighted \(k\)-clique problem admits an \(\mathcal{O}(n^{k-\epsilon})\)-time algorithm.
Dyck Edit Distance:The _Dyck edit distance_ problem is another variation of edit distance which falls under the umbrella of general language edit distance [1, 14, 20, 21] and has numerous practical applications, e.g., for fixing hierarchical data files, in particular XML and JSON files [13, 15]. In the unweighted version of this problem, given a string of \(n\) parentheses, the goal is to find the minimum number of edits (character insertions, deletions, and substitutions) to make the string well-balanced. Several algorithms for both exact [21, 19, 20] and approximation [22, 23] versions of the problem have been obtained. Finding exact Dyck edit distance is at least as hard as Boolean matrix multiplication [1]. The bounded Dyck edit problem was subject to several recent studies as well: Backurs and Onak [1] obtained the first algorithm with running time \(\mathcal{O}(n+k^{16})\), which was further improved to \(\mathcal{O}(n+k^{5})\)[22], and finally to \(\mathcal{O}(n+k^{4.5442})\) using fast matrix multiplication [24, 19]. Except for the
\(\mathcal{O}(n^{3})\)-time exact algorithm for language edit distance [14], these results are not applicable to the weighted setting.
### Our Contribution
The main contributions of our paper are new algorithms for weighted string, tree, and Dyck edit distance. We define a _weight function_ as a function \(w:\Sigma\cup\{\varepsilon\}\times\Sigma\cup\{\varepsilon\}\to\mathbb{R}_{\geq 0}\) such that \(w(a,a)=0\) and \(w(a,b)\geq 1\) for \(a\neq b\). If \(a,b\in\Sigma\), then \(w(a,\varepsilon)\) is the cost of deleting \(a\), \(w(\varepsilon,b)\) is the cost of inserting \(b\), whereas \(w(a,b)\) is the cost of substituting \(a\) for \(b\). The assumption \(w(a,a)=0\) indicates that matching symbols can be aligned at no cost, whereas the assumption \(w(a,b)\geq 1\) for \(a\neq b\) indicates that the weights are normalized so that every edit costs at least one. A weight function is a _quasimetric_ if it also satisfies the triangle inequality (which we assume for tree and Dyck edit distance). When it comes to computations on weights, we consider any uniform model in which real numbers are subject to only comparison and addition [13], e.g., the RAM model.
We define \(\mathsf{ed}^{w}(X,Y)\) to be the minimum cost of an alignment of strings \(X\) and \(Y\) for weight function \(w\). Furthermore, we define \(\mathsf{ed}^{w}_{\leq k}(X,Y)\) as \(\mathsf{ed}^{w}(X,Y)\) (if it is at most \(k\)) or \(\infty\) (otherwise). We give the first weighted bounded edit distance algorithm with runtime \(\mathcal{O}(n+\operatorname{poly}(k))\).
**Theorem 1.1**.: _Given strings \(X,Y\) of length at most \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a weight function \(w\), the value \(\mathsf{ed}^{w}_{\leq k}(X,Y)\) can be computed in \(\mathcal{O}(n+k^{5})\) time._
Similarly to string edit distance, we define \(\mathsf{ted}^{w}(F,G)\) as the minimum cost of a tree alignment of forests \(F\) and \(G\) for weight function \(w\). We define \(\mathsf{ted}^{w}_{\leq k}(F,G)\) analogously and give the first weighted tree edit distance algorithm with runtime \(\mathcal{O}(n+\operatorname{poly}(k))\). In the unweighted case, our deterministic algorithm is significantly faster than the state-of-the-art randomized algorithm from [12].
**Theorem 1.2**.: _Given forests \(F,G\) of length at most \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a quasimetric \(w\), the value \(\mathsf{ted}^{w}_{\leq k}(F,G)\) can be computed in \(\mathcal{O}(n+k^{15})\) time. Moreover, \(\mathsf{ted}_{\leq k}(F,G)\) can be computed in \(\mathcal{O}(n+k^{7}\log k)\) time._
Finally, we define \(\mathsf{d}\mathsf{yck}^{w}_{\leq k}(X)\) to be the minimum distance \(\mathsf{ed}^{w}_{\leq k}(X,Y)\) between \(X\) and a string \(Y\) in the Dyck language. We give the first algorithm for weighted Dyck edit distance with runtime \(\mathcal{O}(n+\operatorname{poly}(k))\). In this setting, the alphabet consists of opening and closing parentheses, and we need to assume that the weight function, apart from satisfying the triangle inequality, treats opening and closing parentheses of the same type similarly. This is captured in the notion of a _skewmetric_ formally defined in Section 4.2.
**Theorem 1.3**.: _Given a string \(X\) of length \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a skewmetric \(w\), the value \(\mathsf{d}\mathsf{yck}^{w}_{\leq k}(X)\) can be computed in \(\mathcal{O}(n+k^{12})\) time._
We note that, although our algorithms assume \(k\) is given, one can also obtain running times analogous to those of Theorems 1.1 to 1.3 but with the sought distance instead of the threshold \(k\). For this, it suffices to start from the largest value \(k\) that results in the running time of \(\mathcal{O}(n)\), e.g., \(k=\Theta(n^{1/5})\) for strings, and keep doubling the threshold \(k\) as long as the algorithm outputs \(\infty\). The first finite outcome is guaranteed to be the sought distance and, since the running times of the subsequent iterations form a geometric progression, the overall runtime is dominated by the last iteration, where \(k\) is at most twice the sought distance.
### Overview
The folklore algorithms to compute edit distance for unweighted and weighted instances use dynamic programming and runs in \(\mathcal{O}(n^{2})\) time. Given two strings \(X\) and \(Y\), the entry \(D[i,j]\) of the dynamic programming table \(D\) holds the weighted (unweighted) edit distance of prefixes of \(X\) and \(Y\) up to indices \(i\) and \(j\) respectively. That is \(D[i,j]:=\mathsf{ed}(X[0\mathinner{.\,.\,i}),Y[0\mathinner{.\,.\,j}))\). Then
\[D[i+1,j+1]=\min\{D[i,j+1]+1,D[i+1,j]+1,D[i,j]+\delta(X[i],Y[j])\}\;\mbox{ {\it unweighted edit distance}}\]
\[D[i+1,j+1]=\min\{D[i,j+1]+w(X[i],\varepsilon),D[i+1,j]+w( \varepsilon,Y[j]),D[i,j]+w(X[i],Y[j])\}.\] \[\;\mbox{{\it:weighted edit distance}}\]
The first entry in the recursive definition corresponds to deleting \(X[i]\), the second entry corresponds to inserting \(Y[j]\), and the third entry corresponds to either matching or substitution (\(\delta(X[i],Y[j])=0\) if \(X[i]=Y[j]\), otherwise \(\delta(X[i],Y[j])=1\)). Clearly, \(D[|X|,|Y|]\) equals the total weighted (unweighted) edit distance between \(X\) and \(Y\), and can be computed in \(\mathcal{O}(n^{2})\) time.
It is possible to improve the running time to \(\mathcal{O}(nk)\) if the weighted (unweighted) edit distance is bounded by \(k<n\). In this case the entries corresponding to only \(2k+1\) diagonals surrounding the main diagonal of \(D\) need to be computed. However, the similarities between the developments on unweighted and weighted edit distance computations end here.
The first major breakthrough in the unweighted edit distance computation came in the late eighties [21, 22]. An \(\mathcal{O}(n+k^{2})\)-time algorithm for unweighted edit distance was developed whenever edit distance is bounded by \(k\), thereby giving a linear time algorithm for \(k\leq\sqrt{n}\). The algorithm utilizes two simple but powerful properties of unweighted edit distance, namely (i) _monotonicity_: \(D[i+1,j+1]\geq D[i,j]\), and (ii) _greedy extension_: if \(X[i]=Y[j]\) then \(D[i+1,j+1]=D[i,j]\). These two properties together imply that if we can find maximal equal substrings in \(X\) and \(Y\) through a preprocessing step, only \(\mathcal{O}(k^{2})\) entries of \(D\) need to be computed. More precisely, for each of the \(2k+1\) diagonals, these are the at most \(k+1\) entries with \(k+1\geq D[i+1,j+1]>D[i,j]\). The preprocessing step utilizes a linear-time construction of a suffix tree to answer any maximal equal substring queries in constant time, leading to an overall running time of \(\mathcal{O}(n+k^{2})\). All subsequent developments on fast approximation algorithms for unweighted string edit distance rely on the above two properties without exception.
Unfortunately, none of the above two properties hold for weighted edit distance computation. The following simple examples will make this observation clear.
1. _No monotonicity:_ Let \(X=\mathsf{ab}\), \(Y=\mathsf{c}\), and \(w(\mathsf{a},\mathsf{c})+w(\mathsf{b},\varepsilon)<w(\mathsf{a},\varepsilon)\). Then \(D[1,0]=w(\mathsf{a},\varepsilon)\) and \(D[2,1]=w(\mathsf{a},\mathsf{c})+w(\mathsf{b},\varepsilon)<w(\mathsf{a},\varepsilon)\).
2. _No greedy extension:_ Let \(X=\mathsf{ab}\), \(Y=\mathsf{b}\), and \(w(\mathsf{a},\mathsf{b})+w(\mathsf{b},\varepsilon)<w(\mathsf{a},\varepsilon)\). Then substituting \(\mathsf{a}\) to \(\mathsf{b}\) and deleting \(\mathsf{b}\) from \(X\) is cheaper than deleting \(\mathsf{a}\) and matching the subsequent \(\mathsf{b}\).
In some sense, this explains the lack of progress on weighted instances in this field. We need a very different approach and new ideas.
When \(k\), the minimum weighted edit distance, is small for two input strings, clearly most characters of the input strings are perfectly matched and contribute no cost to the edit distance computation. The main idea of our algorithm is to find small representative instances for the input strings and then run the \(\mathcal{O}(nk)\)-time weighted edit distance solution on these representatives to find the original weighted edit distance. In fact, we prove that any instance of the bounded-weighted edit distance can be solved using strings of size \(\mathcal{O}(k^{4})\). Our algorithm constructs such an \(\mathcal{O}(k^{4})\)-size kernel from strings of size \(\mathcal{O}(n)\) in time \(\mathcal{O}(n)\), and then the resulting small instances can be solved in time \(\mathcal{O}(k^{5})\) using the \(\mathcal{O}(nk)\)-time weighted extension of the dynamic programming.
We are also able to extend the idea of kernelization to weighted instances of tree and Dyck edit distances by giving the first \(\mathcal{O}(n+\mathrm{poly}(k))\) algorithms for them. Notably, our algorithms are deterministic and give significant improvements over the recent randomized algorithms on unweighted tree edit distance [13]. We show it is possible to compute small \(\mathcal{O}(\mathrm{poly}(k))\)-size kernels from the original instances of each problem in linear time, and then run dynamic programming based algorithms to compute the final edit distance values.
To find such kernels, we utilize substrings that have _synchronized occurrences_ in both input strings \(X\) and \(Y\), that is, they occur in \(X\) and \(Y\) at positions \(x\) and \(y\), respectively, satisfying \(|x-y|=\mathcal{O}(k)\). Our kernelization algorithm first tries to cover the input strings (almost entirely) with \(\mathcal{O}(k)\) pairs of synchronized occurrences. If this is impossible, then we conclude that the edit distance must be large, that is, \(\mathsf{ed}_{\leq k}^{w}(X,Y)=\infty\). Otherwise, we apply a novel notion of _edit-distance equivalence_ so that synchronized occurrences of a substring \(P\) can be substituted with synchronized occurrences of an equivalent substring \(P^{\prime}\) without affecting the edit distance \(\mathsf{ed}_{\leq k}^{w}(X,Y)\). To this end, we provide a linear-time algorithm that, given any string \(P\), computes an edit-distance equivalent string \(P^{\prime}\) of size \(\mathrm{poly}(k)\).
A similar notion of equivalent pieces is also central to our algorithms for weighted tree and Dyck edit distance. Our three algorithms all utilize the following high-level steps:
1. Partition the input objects into \(\mathcal{O}(k)\) pieces most of which can be paired up to form synchronized occurrences.
2. If the algorithm failed to find sufficiently long synchronized occurrences, report that the edit distance exceeds \(k\).
3. Otherwise, for every pair of synchronized occurrences, substitute the original piece with a small equivalent replacement.
4. Solve the resulting small instance with a known dynamic-programming algorithm.
#### 1.3.1 Weighted String Edit Distance
We now describe how to obtain Theorem 1.1 by implementing the aforementioned high-level scheme.
Edit-Distance Equivalent StringsThe biggest technical contribution behind our weighted edit distance algorithm is a linear-time procedure (of Corollary 2.14) that, given a string \(P\), computes an equivalent string of length \(\mathcal{O}(k^{3})\). In the first phase, it eliminates _\(k\)-periodicity_: as long as the processed string contains a fragment of the form \(Q^{4k+1}\) with \(|Q|\in[1\mathinner{.}2k]\), this fragment is replaced by \(Q^{4k}\). As shown in Lemma 2.9, the strings \(Q^{4k+1}\) and \(Q^{4k}\) are equivalent, so this step preserves equivalence with the input string \(P\). Eventually, the first phase results in a string that _avoids \(k\)-periodicity_ and is equivalent with \(P\) (see Fig. 1 for example). It is implemented in Lemma 2.13, where the underlying algorithm processes the input string \(P\) from left to right and removes the first copy of \(Q\) for every encountered fragment of the form \(Q^{4k+1}\) with \(|Q|\in[1\mathinner{.}2k]\).
In Lemma 2.11, we prove that if \(P\) avoids \(k\)-periodicity and satisfies \(|P|\geq 42k^{3}\), then it is equivalent to \(P[0\mathinner{.}21k^{3})\cdot P[|P|-21k^{3}\mathinner{.}|P|)\), that is, the concatenation of its prefix of length \(21k^{3}\) and its suffix of length \(21k^{3}\) (with the characters in the middle removed). For this, we consider synchronized occurrences of \(P\) in strings \(X\) and \(Y\) and an optimal alignment \(\mathcal{A}\) of cost \(\mathsf{ed}^{w}(X,Y)\leq k\) that maps \(X\) onto \(Y\). We observe that \(\mathcal{A}\) must perfectly match a length-\(10k^{2}\) fragment within the length-\(21k^{3}\) prefix of the occurrence of \(P\) in \(X\). Moreover, since \(P\) avoids \(k\)-periodicity, \(\mathcal{A}\) can only match this fragment to the corresponding fragment of the occurrence of \(P\) in \(Y\). Symmetrically, \(\mathcal{A}\) must match the two copies of a length-\(10k^{2}\) fragment within the length-\(21k^{3}\) suffix of \(P\). We conclude that \(\mathcal{A}\) aligns the two copies of \(P[d\mathinner{.}|P|-e)\) for some
\(d,e\in[0\,\dots\,21k^{3}]\). Since \(\mathcal{A}\) is optimal, it must perfectly match the two copies of \(P[d\,\mathpunct{\cdot}\,|P|-e)\). Thus, \(P[21k^{3}\,\mathpunct{\cdot}\,|P|-21k^{3})\) can be removed from the synchronized occurrences of \(P\) in \(X\) and \(Y\) without affecting the cost \(\operatorname{\mathsf{ed}}_{\leq k}^{w}(X,Y)\). Consequently, if the first phase returns a string of length at least \(42k^{3}\), then the algorithm of Corollary 2.14 removes all but the leading \(21k^{3}\) and the trailing \(21k^{3}\) characters of that string (see Fig. 2 for example).
Linear-Time KernelIn order to apply the notion of edit-distance equivalence, we need to identify synchronized occurrences within \(X\) and \(Y\). To this end, we check whether \(\operatorname{\mathsf{ed}}(X,Y)\leq k\). If this is not the case, then \(\operatorname{\mathsf{ed}}^{w}(X,Y)\geq\operatorname{\mathsf{ed}}(X,Y)>k\) holds for every normalized weight function \(w\), and thus we already know that \(\operatorname{\mathsf{ed}}_{\leq k}^{w}(X,Y)=\infty\). If \(\operatorname{\mathsf{ed}}(X,Y)\leq k\), on the other hand, then we construct an optimal unweighted \(\mathcal{A}\) alignment mapping \(X\) onto \(Y\). As formally proved in Fact 2.7, the unedited characters of \(X\) form at most \(k+1\) fragments that \(\mathcal{A}\) matches perfectly. Each of these fragments of \(X\) forms a synchronized occurrence together with its image under \(\mathcal{A}\) in \(Y\). Thus, we can replace the synchronized occurrences with occurrences of an equivalent string of length \(\mathcal{O}(k^{3})\). Since we have partitioned \(X\) and \(Y\) into \(\mathcal{O}(k)\) edited characters plus \(\mathcal{O}(k)\) synchronized occurrences, this yields strings \(X^{\prime}\) and \(Y^{\prime}\) of length \(\mathcal{O}(k^{4})\) satisfying \(\operatorname{\mathsf{ed}}_{\leq k}^{w}(X^{\prime},Y^{\prime})=\operatorname{ \mathsf{ed}}_{\leq k}^{w}(X,Y)\). In order to construct \(\mathcal{A}\) efficiently, we use the \(\mathcal{O}(n+k^{2})\) unweighted edit distance algorithm of [10, 13]. However, if \(n\leq k^{4}\), then we do not need to worry about reducing the size of \(X\) and \(Y\) in the first place and therefore do not construct an optimal unweighted alignment; otherwise, \(\mathcal{O}(n+k^{2})=\mathcal{O}(n)\) and constructing the \(\mathcal{O}(k^{4})\)-size kernel takes linear time; see Theorem 2.15 for details on our kernel for weighted string edit distance.
As mentioned earlier, once we have a kernel \((X^{\prime},Y^{\prime})\) of size \(\mathcal{O}(k^{4})\), we can run the \(\mathcal{O}(nk)\)-time weighted edit-distance algorithm to compute \(\operatorname{\mathsf{ed}}_{\leq k}^{w}(X^{\prime},Y^{\prime})=\operatorname{ \mathsf{ed}}_{\leq k}^{w}(X,Y)\) in \(\mathcal{O}(k^{5})\) time.
#### 1.3.2 Weighted Tree Edit Distance
Our algorithm for weighted tree edit distance follows the same high-level approach. However, compared to the string edit distance, two major challenges arise. First, the structure of periodicity is much richer and requires two notions: _horizontal periodicity_ of _forests_ and _vertical periodicity_ of _contexts_. As a result, we need separate definitions of tree-edit-distance equivalence for forests and contexts. Nevertheless, assuming that the weight function \(w\) satisfies the triangle inequality, we can still construct equivalent forests and contexts of size \(\mathcal{O}(k^{3})\) and \(\mathcal{O}(k^{4})\), respectively. The second challenge is that the state-of-the-art algorithm for computing the unweighted tree edit distance is randomized and takes \(\mathcal{O}(n\log n+\operatorname{poly}(k))\) time rather than \(\mathcal{O}(n+\operatorname{poly}(k))\) time. Thus, in order to achieve a deterministic linear-time kernel, we need another method for identifying large synchronizing occurrences. Our workaround is to shrink the input in multiple iterations (essentially halving the size each time) rather than in a single shot. This way, we can still obtain a kernel of size \(\mathcal{O}(k^{5})\), which is asymptotically as small as we would get from an optimum unweighted alignment.
Periodicity in TreesIntuitively, the two types of periodicity in trees correspond to the two ways to interpret strings as trees. For a string \(X\), the _horizontal embedding_ constructs a tree with \(|X|\) leafs attached to the root and labeled by subsequent characters of \(X\), whereas the _vertical embedding_ constructs a path with \(|X|\) nodes labeled by subsequent characters of \(X\). Similarly, forest algebras (see [1] for a survey) in formal language theory involve two natural monoids: a _horizontal monoid_ of forests (with concatenation, denoted \(\cdot\)) and a _vertical monoid_ of contexts (with composition, denoted \(\star\)). A context can be defined as a tree with a single _hole_ in some leaf, and contexts can be composed by placing one of them in the hole of the other. Moreover, placing a forest in the hole of a context yields a forest. In order to formalize these notions and easily port
combinatorial and algorithmic tools designed for strings, we interpret forests as balanced strings of parentheses; see Section 3.1.
Following [13], a horizontal power is the concatenation of multiple copies of the same forest, whereas a vertical power is the composition of multiple copies of the same context; see Fig. 3 for an example. More specifically, we say that a forest contains horizontal \(k\)-periodicity if it has a subforest of the form \(Q^{4k+1}\) for some forest \(Q\) of size \(|Q|\leq 4k\), whereas a context contains vertical \(k\)-periodicity if it can be expressed as a composition of several contexts, including \(Q^{6k+1}\) for some context \(Q\) of size \(|Q|\leq 8k\).
Tree-Edit-Distance Equivalent ForestsThe first ingredient of our algorithm for weighted tree edit distance is a linear-time procedure that, given a forest \(P\), constructs an equivalent forest of size \(\mathcal{O}(k^{3})\). The first phase of this subroutine eliminates horizontal \(k\)-periodicity: as long as the processed forest contains a subforest of the form \(Q^{4k+1}\) with \(|Q|\in[1\mathinner{.}4k]\), this subforest is replaced by \(Q^{4k}\). As shown in Lemma 3.5, the forests \(Q^{4k+1}\) and \(Q^{4k}\) are equivalent, so this step preserves equivalence with the input forest \(P\). An efficient implementation of this phase relies on the fact that, if \(P\) is interpreted as a string, then horizontal \(k\)-periodicity can be interpreted as a substring of the form \(Q^{4k+1}\) for a sufficiently short _balanced_ string \(Q\). Thus, we can reuse Lemma 2.13 to obtain a forest equivalent with \(P\) that avoids horizontal \(k\)-periodicity.
In Lemma 3.7, we show the equivalence of any two forests of size at least \(74k^{3}\) that avoid horizontal \(k\)-periodicity.2 Based on this result, if horizontal periodicity reduction yields a forest of size at least \(74k^{3}\), we return a canonical forest of size exactly \(74k^{3}\); see Lemma 3.17 for details.
Footnote 2: This statement is stronger that its counterpart for strings, Lemma 2.11, because we now assume that the weight function \(w\) satisfies the triangle inequality.
Tree-Edit-Distance Equivalent ContextsOur next ingredient is a linear-time algorithm that, given a context \(P\), constructs an equivalent context of size \(\mathcal{O}(k^{4})\). First, we use the previous procedure for every maximal forest in \(P\) (that does not contain the hole). Then, we eliminate vertical \(k\)-periodicity: as long as \(P\) contains a context of the form \(Q^{6k+1}\) with \(|Q|\in[1\mathinner{.}8k]\), this context is replaced by \(Q^{6k}\). As shown in Lemma 3.10, the contexts \(Q^{6k+1}\) and \(Q^{6k}\) are equivalent, so this step preserves equivalence with the input context \(P\). For an efficient implementation, the _spine_, i.e., the path from the root of \(P\) to the hole, is interpreted as a string, with each character encoding the label of the underlying node and the subtrees attached there to the left and to the right of the spine. This way, vertical-\(k\) periodicity can be interpreted as periodicity in the constructed string, and hence Lemma 2.13 can be used again.
In Lemma 3.12, we show the equivalence of any two contexts of size at least \(578k^{4}\) that avoid vertical \(k\)-periodicity and subforests of size more than \(74k^{3}\). Thus, if vertical periodicity reduction yields a context of size at least \(578k^{4}\), we replace it with a canonical context of size exactly \(578k^{4}\); see Lemma 3.18 for details.
Linear-Time KernelAs for strings, in order to apply the notion of tree-edit-distance equivalence, we need to identify synchronized occurrences of forests and contexts within the input forests \(F\) and \(G\). As mention above, in order to obtain a deterministic linear-time kernel, we cannot use the algorithm of [13] to obtain a tree alignment mapping \(F\) to \(G\) with at most \(k\) edits. Instead, we develop an iterative workaround. At each step, we decompose \(F\) into \(\mathcal{O}(k)\) contexts and forests (jointly called pieces) of size at most \(\frac{n}{2k}\) each; see Lemma 3.15 for details. Next, we maximize the number of pieces (from the decomposition) that admit disjoint synchronized occurrences in \(G\); Lemma 3.16 implements this step in \(\mathcal{O}(n+k^{4})\) time using dynamic programming. If \(\mathsf{ted}(F,G)\leq k\)
then no more than \(k\) of the pieces are left unmatched (an optimal alignment may edit at most \(k\) pieces). We replace the matched pieces with equivalent pieces of size \(\mathcal{O}(k^{5})\), obtaining forests of size at most \(\frac{n}{2}+\mathcal{O}(k^{5})\), where the first term corresponds to the unmatched pieces; see Theorem 3.19. As long as \(n=\omega(k^{5})\), this procedure essentially halves the input size. Hence, as shown in Corollary 3.20, this still yields a linear-time algorithm producing forests \(F^{\prime}\) and \(G^{\prime}\) of size \(\mathcal{O}(k^{5})\) such that \(\mathsf{ted}_{\leq k}^{w}(F^{\prime},G^{\prime})=\mathsf{ted}_{\leq k}^{w}(F,G)\).
Once we have such a kernel \((F^{\prime},G^{\prime})\) of size \(\mathcal{O}(k^{5})\), we can run the cubic-time weighted edit-distance algorithm [10] to compute \(\mathsf{ted}_{\leq k}^{w}(X^{\prime},Y^{\prime})=\mathsf{ted}_{\leq k}^{w}(X,Y)\) in \(\mathcal{O}(k^{15})\) time, for a total runtime of \(\mathcal{O}(n+k^{15})\). Additionally, we significantly improve the state-of-the-art of the unweighted tree edit distance problem by using the \(\mathcal{O}(nk^{2}\log n)\)-time algorithm from [1], which gives us a total runtime of \(\mathcal{O}(n+k^{7}\log k)\) for unweighted tree edit distance.
#### 1.3.3 Weighted Dyck Edit Distance
In the final section of our paper, the weighted Dyck edit distance algorithm follows a similar approach to that of the string and tree edit distance algorithm. However, many of the proofs and details are specific to Dyck edit distance problem and come with their own set of intricacies and difficulties that we outline in the following.
Given a string \(X\) over an alphabet \(\Sigma=T\cup\overline{T}\) (where \(T\) and \(\overline{T}\) are the sets of opening and closing parentheses, respectively), an integer \(k\in\mathbb{Z}_{+}\), and a skewmetric weight function \(w\) representing the cost of each edit operation (parenthesis insertion, deletion, and substitution), our objective is to compute the minimum weight of a sequence of edits that convert \(X\) to a well-parenthesized expression over \(\Sigma\) provided the total weight of all edits is bounded by \(k\). In this work we design a deterministic algorithm that achieves this goal in \(\mathcal{O}(n+k^{12})\) time. For the unweighted counterpart of this problem, the recent solution of [11, 12] computes the Dyck edit distance in time \(\mathcal{O}(n+k^{4.5442})\). That algorithm, consistently with its predecessors [1, 11], starts with a greedy preprocessing step that exhaustively removes any two adjacent characters \(X[i]X[i+1]\) such that \(X[i]\) is an opening parenthesis and \(X[i+1]\) is a closing parenthesis of the same type. Following a simple argument, it can be shown that the Dyck edit distance of the preprocessed string stays exactly the same as the input string \(X\).
PreprocessingWe tried to follow a similar approach for the weighted version, but it turns out that such a simple analysis is not enough to construct a reduced string. For example, let the input string be ({}. For a general weight function \(w\), it is not evident that the optimal matching should always match the last two parentheses. In fact, if we consider a weight function where the cost of substituting { with } is 10 whereas the cost of substituting { with } is 5, then any optimal matching should match the first and the last parentheses instead of the last two. Thus, in this work, we consider our weight function \(w\) to be a skewmetric. Formally, we assume that \(w\) satisfies the triangle inequality and skew-symmetry, that is, \(w(p_{1},p_{2})=w(\overline{p_{2}},\overline{p_{1}})\) holds for all \(p_{1},p_{2}\in\Sigma\cup\{\varepsilon\}\), where \(\overline{p}\) is the parenthesis complementary to \(p\) (and \(\overline{\varepsilon}=\varepsilon\)). Following this property of \(w\), we show that one can apply a similar greedy preprocessing (as described for the unweighted version) to reduce \(X\) to a string \(X^{\prime}\) while preserving the weighted Dyck edit distance. Our argument is substantially more elaborate, though, and follows a case-by-case analysis depending on the structure of the other alternate alignments (Claim 4.7). Nevertheless, it is trivial to observe the greedy preprocessing can be done in linear time.
Dyck-Edit-Distance Equivalent StringsNext, following a similar strategy as described for string edit distance, we further reduce \(X^{\prime}\) to generate a string \(X^{\prime\prime}\) of length \(\mathcal{O}(k^{4})\) while preserving
the weighted Dyck edit distance. For this first we introduce the concept of \(k\)-synchronicity. A substring \(P\) containing only opening parentheses and a substring \(\overline{P}\) containing only closing parentheses are \(k\)-synchronized if \(\overline{P}\) appears after \(P\), they are of same length and their height difference is at most \(2k\). Following this and the non-crossing property of Dyck matching, first we argue that if the lengths of \(P,\overline{P}\) are large and the distance is bounded by \(k\), then there exist a substring \(\ell\in P\) that is matched with a substring \(\ell^{\prime}\in\overline{P}\) in the optimal alignment (we fix one for the analysis purpose). Now if we replace \(P\) with \(P\setminus\ell\) and \(P^{\prime}\) with \(P^{\prime}\setminus\ell^{\prime}\) then in the resulting string the distance stays the same (Fact 4.12). Following this, for any two \(k\)-synchronized substrings \(P,\overline{P}\), we can reduce their periodicity as follows: if \(P=Q^{e}\) and \(\overline{P}=\overline{Q^{e}}\), (where \(Q\) is a primitive string with large exponent \(e\)) then at least one occurrence of \(Q\) is matched with its reverse complement counterpart \(\overline{Q}\) in \(\overline{P}\). Thus, we can remove the matched part while not changing the distance, and it reduces the exponent by one. Repeat this until \(e\) become small (Lemma 4.13).
Next assuming that \(P,\overline{P}\) avoid periodicity, it can be shown that there exists a pair of indices \(i,j\in[0\,\mathpunct{\ldotp}\,78k^{3}]\) such that \(P[i]\) is matched with, \(\overline{P}[|P|-1-i]\) and \(P[|P|-1-j]\) is matched with, \(\overline{P}[j]\) in the optimal alignment. Thus, following the fact that \(|P|=|\overline{P}|\) and the non-crossing property of the Dyck optimal alignment, all the indices between \(i\) and \(|P|-1-j\) are also matched, and thus removing these matched characters from both \(P,\overline{P}\) does not affect the Dyck edit distance. Consequently, we replace each \(k\)-synchronized pairs with substrings of length just \(156k^{3}\) (replace \(P,\overline{P}\) with their first and last \(78k^{3}\) characters) to generate a string \(X^{\prime\prime}\) such that the weighted Dyck edit distance of \(X\) and \(X^{\prime\prime}\) is the same (Lemma 4.15, Corollary 4.17).
Linear-Time KernelLastly, we show if the distance is bounded by \(k\), then \(X\) can be partitioned in time \(\mathcal{O}(n+k^{5})\) into \(\mathcal{O}(k)\) disjoint \(k\)-synchronized pairs of substrings (plus \(\mathcal{O}(k)\) individual characters) and thus the total length of \(X^{\prime\prime}\) is bounded by \(\mathcal{O}(k^{4})\). Start by preprocessing input string \(X\) to generate \(X^{\prime}\). Next, we check if \(\mathsf{ded}(X^{\prime})\leq k\) and, if so, we compute an unweighted optimal Dyck alignment \(\mathcal{M}\) of \(X^{\prime}\) in time \(\mathcal{O}(n+k^{5})\)[12]. Then, we argue any pair of substrings of \(X^{\prime}\) that are matched by \(\mathcal{M}\) are \(k\)-synchronized. Thus, using \(\mathcal{M}\), we identify the set of maximal substrings from \(T^{*}\) and \(\overline{T}^{*}\) that are matched by \(\mathcal{M}\). A substring is maximal in a sense that either the substring itself or its matched counterpart can not be extended to the right or left without paying an edit. As unweighted Dyck edit distance is no more than the weighted version and hence, assuming cost of \(\mathcal{M}\) is bounded by \(k\), we can show string \(X^{\prime}\) can be partitioned into \(\mathcal{O}(k)\) different \(k\)-synchronized pairs. Also, these maximal fragments can be found in linear time with a left-to-right scan of \(X^{\prime}\). Subsequently, we create a string \(X^{\prime\prime}\) from \(X^{\prime}\) as follows: (i) for each \(k\)-synchronized pairs we reduce them following the algorithm as discussed above and add two corresponding strings each of length \(O(k^{3})\) (ii) add all the characters that are edited by \(\mathcal{M}\) just the same to \(X^{\prime\prime}\) (Theorem 4.19).
Finally, we compute the weighted Dyck edit distance of \(X^{\prime\prime}\) using the dynamic program algorithm of [16] in time \(\mathcal{O}(k^{12})\).
## 2 String Edit Distance
### Preliminaries
A _string_\(Y\in\Sigma^{n}\) is a sequence of \(|Y|:=n\) characters from an _alphabet_\(\Sigma\). For \(i\in[0\,\mathpunct{\ldotp}\,n)\), we denote the \(i\)th character of \(Y\) with \(Y[i]\). We say that a string \(X\)_occurs_ as a _substring_ of a string \(Y\) if \(X=Y[i]\cdots Y[j-1]\) holds for some integers \(0\leq i\leq j\leq|Y|\). We denote the underlying _occurrence_ of \(X\) as \(Y[i\,\mathpunct{\ldotp}\,j)\). Formally, \(Y[i\,\mathpunct{\ldotp}\,j)\) is a _fragment_ of \(Y\) that can be represented using a reference to \(Y\) as well as its endpoints \(i,j\). The fragment \(Y[i\,\mathpunct{\ldotp}\,j)\) can be alternatively denoted
as \(Y[i\mathinner{\ldotp\ldotp}j-1]\), \(Y(i-1\mathinner{\ldotp\ldotp}j-1]\), or \(Y(i-1\mathinner{\ldotp\ldotp}j)\). A fragment of the form \(Y[0\mathinner{\ldotp\ldotp}j)\) is a _prefix_ of \(Y\), whereas a fragment of the form \(Y[i\mathinner{\ldotp\ldotp}n)\) is a _suffix_ of \(Y\).
**Theorem 2.1** (LCE queries [10, 11]).: _Strings \(X,Y\) can be preprocessed in linear time so that the following longest common extension (LCE) queries can be answered in \(\mathcal{O}(1)\) time: given positions \(x\in[0\mathinner{\ldotp\ldotp}|X|]\) and \(y\in[0\mathinner{\ldotp\ldotp}|Y|]\), compute the largest \(\ell\) such that \(X[x\mathinner{\ldotp\ldotp}x+\ell)=Y[y\mathinner{\ldotp\ldotp}y+\ell)\)._
As mentioned in Section 1.3, high-power periodicity plays a key role in our algorithms, which we may now formally define for strings here. An integer \(p\in[1\mathinner{\ldotp\ldotp}n]\) is a _period_ of a string \(Y\in\Sigma^{n}\) if \(Y[i]=Y[i+p]\) holds for all \(i\in[0\mathinner{\ldotp\ldotp}n-p)\). In this case, the prefix \(Y[0\mathinner{\ldotp\ldotp}p)\) is called a _string period_ of \(Y\). By \(\mathsf{per}(Y)\) we denote the smallest period of \(Y\). The exponent of a string \(Y\) is defined as \(\exp(Y):=\frac{|Y|}{\mathsf{per}(Y)}\), and we say that a string \(Y\) is _periodic_ if \(\exp(Y)\geq 2\).
**Theorem 2.2** (2-Period queries [11, 12]).: _A string \(X\) can be preprocessed in linear time so that one can decide in constant time whether any given fragment \(X[i\mathinner{\ldotp\ldotp}j)\) is periodic and, if so, compute its shortest period \(\mathsf{per}(X[i\mathinner{\ldotp\ldotp}j))\)._
For a string \(Y\) and an integer \(m\geq 0\), we define the \(m\)th power of \(Y\), denoted \(Y^{m}\), as the concatenation of \(m\) copies of \(Y\). A non-empty string \(Y\in\Sigma^{n}\) is _primitive_ if it cannot be expressed as \(Y=X^{m}\) for some string \(X\) and integer \(m>1\). For a string \(Y\in\Sigma^{n}\), we define a _forward rotation_\(\mathsf{rot}(Y)=Y[1]\cdots Y[n-1]Y[0]\). In general, a _cyclic rotation_\(\mathsf{rot}^{s}(Y)\) with _shift_\(s\in\mathbb{Z}\) is obtained by iterating \(\mathsf{rot}\) or the inverse operation \(\mathsf{rot}^{-1}\). A string \(Y\) is primitive if and only if it is distinct from its non-trivial rotations, i.e., if \(Y=\mathsf{rot}^{s}(Y)\) holds only when \(s\) is a multiple of \(n\).
### Edit-Distance Alignments and Weighted Edit Distance
In this subsection, we discuss alignments and their weighted cost, which provide a formal way to describe a sequence of edits needed to transform a string \(X\) into \(Y\).
**Definition 2.3**.: A sequence \(\mathcal{A}=(x_{t},y_{t})_{t=0}^{m}\) is an _alignment_ of a fragment \(X[x\mathinner{\ldotp\ldotp}x^{\prime})\) onto a fragment \(Y[y\mathinner{\ldotp\ldotp}y^{\prime})\) if \((x_{0},y_{0})=(x,y)\), \((x_{m},y_{m})=(x^{\prime},y^{\prime})\), and \((x_{t+1},y_{t+1})\in\{(x_{t}+1,y_{t}+1),\allowbreakbreak(x_{t}+1,y_{t}),(x_{ t},y_{t}+1)\}\) for \(t\in[0\mathinner{\ldotp\ldotp}m)\). The set of all alignments of \(X[x\mathinner{\ldotp\ldotp}x^{\prime})\) onto \(Y[y\mathinner{\ldotp\ldotp}y^{\prime})\) is denoted with \(\mathsf{A}(X[x\mathinner{\ldotp\ldotp}x^{\prime}),Y[y\mathinner{\ldotp\ldotp}y^ {\prime})\).
Given an alignment \(\mathcal{A}=(x_{t},y_{t})_{t=0}^{m}\in\mathsf{A}(X[x\mathinner{\ldotp\ldotp}x^{ \prime}),Y[y\mathinner{\ldotp\ldotp}y^{\prime}))\), for every \(t\in[0\mathinner{\ldotp\ldotp}m)\), we say that
* \(\mathcal{A}\)_deletes_ \(X[x_{t}]\) if \((x_{t+1},y_{t+1})=(x_{t}+1,y_{t})\).
* \(\mathcal{A}\)_inserts_ \(Y[y_{t}]\) if \((x_{t+1},y_{t+1})=(x_{t},y_{t}+1)\).
* \(\mathcal{A}\)_aligns_ \(X[x_{t}]\) to \(Y[y_{t}]\), denoted by \(X[x_{t}]\sim_{\mathcal{A}}Y[y_{t}]\), if \((x_{t+1},y_{t+1})=(x_{t}+1,y_{t}+1)\).
* \(\mathcal{A}\)_matches_ \(X[x_{t}]\) with \(Y[y_{t}]\), denoted by \(X[x_{t}]\simeq_{\mathcal{A}}Y[y_{t}]\), if \(X[x_{t}]\sim_{\mathcal{A}}Y[y_{t}]\) and \(X[x_{t}]=Y[y_{t}]\).
* \(\mathcal{A}\)_substitutes_ \(X[x_{t}]\) for \(Y[y_{t}]\) if \(X[x_{t}]\sim_{\mathcal{A}}Y[y_{t}]\) but \(X[x_{t}]\neq Y[y_{t}]\).
Insertions, deletions, and substitutions are jointly called _(character) edits_.
_Example 2.4_.: For an example of an alignment, consider strings \(X=\mathtt{abc}\) and \(Y=\mathtt{bd}\). One optimal alignment \(\mathcal{A}\) might be \(\{(0,0),(1,0),(2,1),(3,2)\}\). The pairs \((0,0),(1,0)\) represent a deletion of \(X[0]=\mathtt{a}\) by \(\mathcal{A}\). The pairs \((1,0),(2,1),(3,2)\) signify that \(\mathcal{A}\) aligns \(X[1\mathinner{\ldotp\ldotp}2]\sim_{\mathcal{A}}Y[0\mathinner{\ldotp\ldotp}1]\), i.e. \(\mathtt{bc}\sim_{\mathcal{A}}\mathtt{bd}\). Moreover, \(X[1]\) is matched to \(Y[0]\) since \(X[1]=Y[0]=\mathtt{b}\) while \(X[2]\) is substituted for \(Y[1]\) since \(X[2]=\mathtt{c}\neq\mathtt{d}=Y[1]\).
For an alphabet \(\Sigma\), we define \(\bar{\Sigma}:=\Sigma\cup\{\varepsilon\}\), where \(\varepsilon\) is the empty string over \(\Sigma\). We say that a function \(w:\bar{\Sigma}\times\bar{\Sigma}\to\mathbb{R}_{\geq 0}\cup\{\infty\}\) is a _weight function_ if \(w(a,a)=0\) holds for all \(a\in\bar{\Sigma}\). The _cost_ of an alignment \(\mathcal{A}\in\mathsf{A}(X[x\mathinner{\ldotp\ldotp}x^{\prime}),Y[y\mathinner{ \ldotp\ldotp}y^{\prime}))\) with respect to a weight function \(w\), denoted \(\mathsf{ed}_{\mathcal{A}}^{w}(X[x\mathinner{\ldotp\ldotp}x^{\prime}),Y[y\mathinner{ \ldotp\ldotp}y^{\prime}))\), is defined as the total cost of edits that \(\mathcal{A}\) performs, where:
* the cost of deleting \(X[x]\) is \(w(X[x],\varepsilon)\),
* the cost of inserting \(Y[y]\) is \(w(\varepsilon,Y[y])\),
* the cost of substituting \(X[x]\) for \(Y[y]\) is \(w(X[x],Y[y])\).
The _width_ of an alignment \((x_{t},y_{t})_{t=0}^{m}\in\mathsf{A}(X[x\mathinner{\ldots}x^{\prime}),Y[y \mathinner{\ldots}y^{\prime}))\) is defined as \(\max_{t=0}^{m}|x_{t}-y_{t}|\).
We usually consider alignments of the entire string \(X[0\mathinner{\ldots}|X|)\) onto the entire string \(Y[0\mathinner{\ldots}|Y|)\), and we denote the set of all such alignment with \(\mathsf{A}(X,Y)=\mathsf{A}(X[0\mathinner{\ldots}|X|),Y[0\mathinner{\ldots}|Y |))\). The weighted edit distance of strings \(X,Y\in\Sigma^{*}\) with respect to a weight function \(w\) is defined as \(\mathsf{ed}^{w}(X,Y)=\min_{\mathcal{A}\in\mathsf{A}(X,Y)}\mathsf{ed}^{w}_{ \mathcal{A}}(X,Y)\). For \(k\in\mathbb{R}_{\geq 0}\), we also denote
\[\mathsf{ed}^{w}_{\leq k}(X,Y)=\begin{cases}\mathsf{ed}^{w}(X,Y)&\text{if } \mathsf{ed}^{w}(X,Y)\leq k,\\ \infty&\text{otherwise}.\end{cases}\]
In the literature, the (weighted) edit distance of \(X\) and \(Y\) is sometimes defined as the minimum cost of a sequence of edits that transform \(X\) into \(Y\). As shown in the following fact (whose technical proof is deferred to Appendix A), this sequence-based view is equivalent to our alignment-based view provided that \(w\) is a _quasimetric_, that is, it satisfies the triangle inequality \(w(a,b)+w(b,c)\geq w(a,c)\) for every \(a,b,c\in\bar{\Sigma}\). The assumption of \(w\) being quasimetric can be made without loss of generality in the sequence-based view (a single character can be edited multiple times, so one can replace \(w\) by its distance closure without affecting the edit distances). Our alignment-based view, on the other hand, is more general and captures weighted edit distances violating the triangle inequality.
**Fact 2.5**.: _If \(w\) is a quasimetric on \(\bar{\Sigma}\), then \(\mathsf{ed}^{w}\) is a quasimetric on \(\Sigma^{*}\). In this case, \(\mathsf{ed}^{w}(X,Y)\) can be equivalently defined as the minimum cost of a sequence of edits transforming \(X\) into \(Y\)._
Although our algorithm for strings works for any weight function, its tree and Dyck counterparts assume that \(w\) is a quasimetric. Specifically, they rely on the following fact proved in Appendix A.
**Fact 2.6**.: _Consider a string \(X\) and its fragment \(X[i\mathinner{\ldots}j)\). Then, for every quasimetric \(w\), we have \(\mathsf{ed}^{w}(X,X[i\mathinner{\ldots}j))=\mathsf{ed}^{w}(X[0\mathinner{ \ldots}i)\cdot X[j\mathinner{\ldots}|X|),\varepsilon)\)._
While our main results are on the weighted version of edit distance, our algorithm relies on unweighted edit distance procedures as well. If \(w\) is the discrete metric on \(\bar{\Sigma}\) (that is, for every \(a,b\in\bar{\Sigma}\), we have \(w(a,b)=0\) if \(a=b\) and \(w(a,b)=1\) otherwise), then we drop the superscript \(w\) in \(\mathsf{ed}^{w}\) and \(\mathsf{ed}^{w}_{\mathcal{A}}\). This yields the unit-cost edit distance (also known as the unweighted edit distance or the Levenshtein distance). We consider weight function \(w\) to be _normalized_ that is \(w(a,b)\geq 1\) holds for all \(a,b\in\bar{\Sigma}\) with \(a\neq b\). In this case, \(\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)\geq\mathsf{ed}_{\mathcal{A}}(X,Y)\) holds for all strings \(X,Y\in\Sigma^{*}\) and alignments \(\mathcal{A}\in\mathsf{A}(X,Y)\).
Given an alignment \(\mathcal{A}=(x_{t},y_{t})_{t=0}^{m}\in\mathsf{A}(X,Y)\), for every \(\ell,r\in[0\mathinner{\ldots}m]\) with \(\ell\leq r\), we say that \(\mathcal{A}\)_aligns_\(X[x_{\ell}\mathinner{\ldots}x_{r})\) to \(Y[y_{\ell}\mathinner{\ldots}y_{r})\), denoted \(X[x_{\ell}\mathinner{\ldots}x_{r})\sim_{\mathcal{A}}Y[y_{\ell}\mathinner{ \ldots}y_{r})\). In this case, for any weight function \(w\), we write \(\mathsf{ed}^{w}_{\mathcal{A}}(X[x_{\ell}\mathinner{\ldots}x_{r}),Y[y_{\ell} \mathinner{\ldots}y_{r}))\) to denote the cost of the induced alignment of \(X[x_{\ell}\mathinner{\ldots}x_{r})\) onto \(Y[y_{\ell}\mathinner{\ldots}y_{r})\). If \(\mathsf{ed}^{w}_{\mathcal{A}}(X[x_{\ell}\mathinner{\ldots}x_{r}),Y[y_{\ell} \mathinner{\ldots}y_{r}))=0\), we say that \(\mathcal{A}\)_matches_\(X[x_{\ell}\mathinner{\ldots}x_{r})\) with \(Y[y_{\ell}\mathinner{\ldots}y_{r})\), denoted \(X[x_{\ell}\mathinner{\ldots}x_{r})\simeq_{\mathcal{A}}Y[y_{\ell}\mathinner{ \ldots}y_{r})\).
**Fact 2.7**.: _Consider \(k\in\mathbb{Z}_{\geq 0}\), strings \(X,Y\), and an alignment \(\mathcal{A}\in\mathsf{A}(X,Y)\) of cost \(\mathsf{ed}_{\mathcal{A}}(X,Y)\leq k\). Then, the string \(X\) can be partitioned into at most \(k\) individual characters (that \(\mathcal{A}\) deletes or substitutes) and at most \(k+1\) fragments that \(\mathcal{A}\) matches perfectly to fragments of \(Y\)._
Proof.: Let \(\mathcal{A}=(x_{t},y_{t})_{t=0}^{m}\) and let \(t_{1}<\cdots<t_{e}\) be the indices in \([0\mathinner{\ldots}m)\) corresponding to edits in \(\mathcal{A}\). Then, the maximal fragments that \(\mathcal{A}\) matches perfectly are \(X[0\mathinner{\ldots}x_{t_{1}})\), \(X[x_{t_{i}+1}\mathinner{\ldots}x_{t_{i+1}})\) for \(i\in[1\mathinner{\ldots}e)\), and \(X[x_{t_{e}+1}\mathinner{\ldots}|X|)\). Moreover, \(\mathcal{A}\) deletes or substitutes \(X[x_{t_{i}}]\) for every \(i\in[1\mathinner{\ldots}e]\) such that \(x_{t_{i}+1}>x_{t_{i}}\). Each edit contributes one unit to the cost of \(\mathcal{A}\), so the decomposition contains at most \(e\leq k\) edited characters and \(e+1\leq k+1\) fragments matched perfectly.
### Combinatorial Foundations
Before giving our algorithms for weighted string edit distance, we discuss edit distance equivalent substrings, one of our main technical contributions.
**Definition 2.8**.: For \(k\in\mathbb{Z}_{\geq 0}\) and a weight function \(w\), strings \(P,P^{\prime}\) are called _\(\mathsf{ed}^{w}_{\leq k}\)-equivalent_ if
\[\mathsf{ed}^{w}_{\leq k}(X,Y)=\mathsf{ed}^{w}_{\leq k}(X[0\mathinner{\mathinner {\mathinner{\mathinner{\mathinner{\mathinner{\mathinner{\mathinner{\mathinner{\mathinner{ \mathinner{\mathinner{\mathinnerinner{\mathinnerinner{\mathinnerinnerinner{\mathinnerinnerinner{ \mathinnerinner{\mathinnerinnerinner{\mathinnerinnerinnerinner{\mathinnerinnerinnerinner{ \mathinnerinnerinner{\mathinnerinnerinnerinner{\innerinnerinnerinnerinner{ \mathinnerinnerinnerinner{\innerinnerinnerinnerinnerinner{ \innerinnerinnerinner{\innerinnerinnerinnerinnerinner{\innerinnerinnerinnerinnerinner{ \innerinnerinnerinner{\innerinnerinnerinnerinnerinner{ \innerinnerinnerinner{\innerinnerinnerinnerinnerinnerinner{ \innerinnerinnerinner{\inner
Proof.: Let \((t_{X},t_{Y})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(t_{X}\geq p_{X}\) and \(t_{Y}\geq p_{Y}\). By symmetry between \(X\) and \(Y\), we assume without loss of generality that \(t_{X}=p_{X}\). Consider the \(k+1\) occurrences of \(Q\) in \(X\) starting at positions \(p_{X}+i\cdot q\) for \(i\in[0\mathinner{\ldot}k]\). The alignment \(\mathcal{A}\) matches at least one of them exactly; we can thus define \(i_{X}\in[0\mathinner{\ldot}k]\) so that \(\mathcal{A}\) matches \(X[p_{X}+i_{X}\cdot q\mathinner{\ldot}p_{X}+(i_{X}+1)\cdot q)\) exactly to some fragment \(Y[s_{Y}\mathinner{\ldot}s_{Y}+q)\). Due to \((t_{X},t_{Y})\in\mathcal{A}\), the non-crossing property of \(\mathcal{A}\) implies that \(s_{Y}\geq t_{Y}\geq p_{Y}\). Moreover, since \(\mathsf{ed}_{\mathcal{A}}(X,Y)\leq k\) and \(X[p_{X}+i_{X}\cdot q]\simeq_{\mathcal{A}}Y[s_{Y}]\), we have \(s_{Y}\leq(p_{X}+i_{X}\cdot q)+k\leq p_{X}+kq+k\leq p_{Y}+kq+2k\leq p_{Y}+3kq\). Furthermore, since \(Q\) is primitive (i.e., distinct from all its non-trivial cyclic rotations), we conclude that \(s_{Y}=p_{Y}+i_{Y}\cdot q\) for some \(i_{Y}\in[0\mathinner{\ldot}3k]\).
Now, if \(Q^{e}=X[p_{X}\mathinner{\ldot}p_{X}+e\cdot q)=Y[p_{Y}\mathinner{\ldot}p_{Y}+e \cdot q)\) is replaced with \(Q^{e^{\prime}}\) for \(e^{\prime}\geq e-1\), we can interpret this as replacing \(Q=X[p_{X}+i_{X}\cdot q\mathinner{\ldot}p_{X}+(i_{X}+1)\cdot q)=Y[p_{Y}+i_{Y} \cdot q\mathinner{\ldot}p_{Y}+(i_{Y}+1)\cdot q)\) with \(Q^{1+e^{\prime}-e}\). By Claim 2.10, \(\mathcal{A}\) can be trivially adapted without modifying its cost, and hence \(\mathsf{ed}^{w}(X^{\prime},Y^{\prime})\leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y) =\mathsf{ed}^{w}(X,Y)\). If \(e^{\prime}<e-1\), we repeat the above argument to decrement the exponent \(e\) one step at a time, still concluding that \(\mathsf{ed}^{w}(X^{\prime},Y^{\prime})\leq\mathsf{ed}^{w}(X,Y)\). In either case, the converse inequality follows by symmetry between \((X,Y,e)\) and \((X^{\prime},Y^{\prime},e^{\prime})\).
We say that a string avoids \(k\)-periodicity if it does not contain any substring of the form \(Q^{4k+1}\) with \(|Q|\in[1\mathinner{\ldot}2k]\).
**Lemma 2.11**.: _Let \(k\in\mathbb{Z}_{+}\) and let \(P,P^{\prime}\) be strings of lengths at least \(42k^{3}\) such that \(P[0\mathinner{\ldot}21k^{3})=P^{\prime}[0\mathinner{\ldot}21k^{3})\) and \(P[|P|-21k^{3}\mathinner{\ldot}|P|)=P^{\prime}[|P^{\prime}|-21k^{3}\mathinner{ \ldot}|P^{\prime}|)\) avoid \(k\)-periodicity. Then, \(P\) and \(P^{\prime}\) are \(\mathsf{ed}^{w}_{\leq k}\)-equivalent for every weight function \(w\)._
Proof.: Suppose that \(P\) occurs in strings \(X\) and \(Y\) at positions \(p_{X}\) and \(p_{Y}\), respectively, satisfying \(|p_{X}-p_{Y}|\leq k\). Denote \(X^{\prime}=X[0\mathinner{\ldot}p_{X})\cdot P^{\prime}\cdot X[p_{X}+|P| \mathinner{\ldot}|X|)\) and \(Y^{\prime}=Y[0\mathinner{\ldot}p_{Y})\cdot P^{\prime}\cdot Y[p_{Y}+|P| \mathinner{\ldot}|Y|)\). Moreover, let \(\mathcal{A}\in\mathsf{A}(X,Y)\) be an alignment such that \(\mathsf{ed}^{w}(X,Y)=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)\leq k\).
**Claim 2.12**.: _There exist \(d,e\in[0\mathinner{\ldot}21k^{3}]\) such that_
\[X[p_{X}+d\mathinner{\ldot}p_{X}+|P|-e)\sim_{\mathcal{A}}Y[p_{Y}+d\mathinner{ \ldot}p_{Y}+|P|-e).\]
Figure 2: For any synchronized occurrences of a substring \(P\) that avoids \(k\)-periodicity, any optimal alignment (depicted by lines connecting characters of the two strings) must match most of the inner characters of \(P\) (see green lines). We can construct strings \(X^{\prime},Y^{\prime}\) removing these matched characters such that \(\mathsf{ed}(X,Y)=\mathsf{ed}(X^{\prime},Y^{\prime})\).
Proof.: Let us partition \(X[p_{X}\mathinner{\ldots}p_{X}+21k^{3})\) into individual characters representing deletions or substitutions of \(\mathcal{A}\) and maximal fragments that \(\mathcal{A}\) matches perfectly (to fragments of \(Y\)). By Fact 2.7, the number of such maximal fragments is at most \(k+1\) and their total length is at least \(21k^{3}-k\geq 20k^{3}\). Hence, one of these fragments is of length at least \(\frac{20k^{3}}{k+1}\geq 10k^{2}\). Thus, let \(R:=X[r_{X}\mathinner{\ldots}r_{X}+|R|)\) be a fragment of length at least \(10k^{2}\) contained in \(X[p_{X}\mathinner{\ldots}p_{X}+21k^{3})\) that \(\mathcal{A}\) matches perfectly to \(Y[r_{Y}\mathinner{\ldots}r_{Y}+|R|)\). Moreover, let \(r_{Y}^{\prime}:=r_{X}+p_{Y}-p_{X}\). If \(r_{Y}=r_{Y}^{\prime}\), then we set \(d:=r_{X}-p_{X}=r_{Y}-p_{Y}\) so that \((p_{X}+d,p_{Y}+d)\in\mathcal{A}\). Otherwise, both \(Y[r_{Y}\mathinner{\ldots}r_{Y}+|R|)\) and \(Y[r_{Y}^{\prime}\mathinner{\ldots}r_{Y}^{\prime}+|R|)\) are occurrences of \(R\) in \(Y\). Moreover, \(0<|r_{Y}-r_{Y}^{\prime}|\leq|r_{Y}-r_{X}|+|r_{Y}^{\prime}-r_{X}|\leq\mathsf{ ed}_{\mathcal{A}}^{w}(X,Y)+|p_{Y}-p_{X}|\leq 2k\). Hence, \(\mathsf{per}(R)\leq|r_{Y}-r_{Y}^{\prime}|\leq 2k\) and \(\exp(R)\geq\frac{|R|}{2k}\geq 4k+1\). Since \(Y[r_{Y}^{\prime}\mathinner{\ldots}r_{Y}^{\prime}+|R|)\) is contained in \(Y[p_{Y}\mathinner{\ldots}p_{Y}+21k^{3})=P[0\mathinner{\ldots}21k^{3})\), this contradicts the assumption about \(P[0\mathinner{\ldots}21k^{3})\) avoiding \(k\)-periodicity.
A symmetric argument shows that \((p_{X}+|P|-e,p_{Y}+|P|-e)\) holds for some \(e\in[0\mathinner{\ldots}21k^{3}]\), which lets us conclude that \(X[p_{X}+d\mathinner{\ldots}p_{X}+|P|-e)\sim_{\mathcal{A}}Y[p_{Y}+d\mathinner{ \ldots}p_{Y}+|P|-e)\).
By Claim 2.12, we have \(X[p_{X}+d\mathinner{\ldots}p_{X}+|P|-e)\sim_{\mathcal{A}}Y[p_{Y}+d\mathinner{ \ldots}p_{Y}+|P|-e)\). Both fragments match \(P[d\mathinner{\ldots}|P|-e)\), so the optimality of \(\mathcal{A}\) guarantees \(X[p_{X}+d\mathinner{\ldots}p_{X}+|P|-e)\simeq_{\mathcal{A}}Y[p_{Y}+d\mathinner{ \ldots}p_{Y}+|P|-e)\). Hence, if \(P=X[p_{X}\mathinner{\ldots}p_{Y}+|P|)=Y[p_{Y}\mathinner{\ldots}p_{Y}+|P|)\) is replaced with \(P^{\prime}\), we can interpret this as \(P[d\mathinner{\ldots}|P|-e)=X[p_{X}+d\mathinner{\ldots}p_{X}+|P|-e)=Y[p_{Y}+d \mathinner{\ldots}p_{Y}+|P|-e)\) with \(P^{\prime}[d\mathinner{\ldots}|P^{\prime}|-e)\). Since \(X[p_{X}+d\mathinner{\ldots}p_{X}+|P|-e)\simeq_{\mathcal{A}}Y[p_{Y}+d\mathinner{ \ldots}p_{Y}+|P|-e)\), the alignment \(\mathcal{A}\) can be trivially adapted without modifying its cost, and therefore \(\mathsf{ed}^{w}(X^{\prime},Y^{\prime})\leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)= \mathsf{ed}^{w}(X,Y)\). The converse inequality follows by symmetry between \((X,Y,P)\) and \((X^{\prime},Y^{\prime},P^{\prime})\).
### Algorithm
The following lemma lets us transform any string \(P\) to a string \(P^{\prime}\) that avoids \(k\)-periodicity and is \(\mathsf{ed}^{w}_{\leq k}\)-equivalent to \(P\) for every weight function \(w\). It is stated in a general form so that it can be reused in subsequent sections.
**Lemma 2.13**.: _Let \(e\in\mathbb{Z}_{+}\) and let \(\mathcal{Q}\) be a family of primitive strings of length at most \(e\). There is an algorithm that repeatedly transforms an input string \(P\) by replacing an occurrence of \(Q^{e+1}\) (for some \(Q\in\mathcal{Q}\)) with an occurrence of \(Q^{e}\), arriving at a string \(P^{\prime}\) that does not contain any occurrence of \(Q^{e+1}\) (for any \(Q\in\mathcal{Q}\)). Moreover, this algorithm can be implemented in linear time using a constant-time oracle that tests whether a given primitive fragment of \(P\) belongs to \(\mathcal{Q}\)._
Proof.: At preprocessing, we construct data structures for LCE and 2-Period queries in \(P\); see Theorems 2.1 and 2.2. In the main phase, our algorithm scans the string \(P\) from left to right maintaining a string \(R\) and an index \(r\in[0\mathinner{\ldots}|P|]\) such that \(R\cdot P[r\mathinner{\ldots}|P|)\):
* is obtained from \(P\) by repeatedly replacing an occurrence of \(Q^{e+1}\) (for some \(Q\in\mathcal{Q}\)) with an occurrence of \(Q^{e}\),
* does not contain any occurrence of \(Q^{e+1}\) (for \(Q\in\mathcal{Q}\)) starting at position smaller than \(|R|\).
We initialize the process with \(R:=\varepsilon\) and \(r:=0\). At each step, we test if \(P[r\mathinner{\ldots}r+2q)\) is periodic; if so, we retrieve its shortest period \(q\); otherwise, we set \(q:=1\). Then, we further check whether \(P[r\mathinner{\ldots}r+q)\in\mathcal{Q}\) and \(P[r\mathinner{\ldots}r+eq)=P[r+q\mathinner{\ldots}r+q+eq)\). If both tests are successful, we move the index \(r\) to position \(r+q\). Otherwise, we append \(P[r]\) to \(R\) and increment \(r\).
Let us analyze the correctness of this algorithm. First, suppose that \(P[r\mathinner{\ldots}|P|)\) does not have a prefix of the form \(Q^{e+1}\) for any \(Q\in\mathcal{Q}\). In particular, \(P[r\mathinner{\ldots}r+q)\notin\mathcal{Q}\) or \(m<eq\). Thus, our algorithm appends \(P[r]\) to \(R\) and increments \(r\). The invariant remains satisfied because \(R\cdot P[r\mathinner{\ldots}|P|)\) did not change and \(P[r\mathinner{\ldots}|P|)\) had no prefix of the form \(Q^{e+1}\) for any \(Q\in\mathcal{Q}\).
Next, suppose that \(P[r\ldots|P|)\) has a prefix of the form \(Q^{e+1}\) for some \(Q\in\mathcal{Q}\). If \(|Q|\neq 1\), then \(|Q|\) is the shortest period of \(P[r\ldots r+2e)\) because \(Q\) is primitive and \(|Q|\leq e\). If \(|Q|=1\), on the other hand, then \(P[r\ldots r+2e)\) either has period \(1\) or at least \(e+2\). In all cases, the algorithm correctly identifies \(q=|Q|\). Moreover, the subsequent tests whether \(P[r\ldots r+q)\) belongs to \(\mathcal{Q}\) and \(m\geq eq\) are successful. Hence, the algorithm transforms \(R\cdot P[r\ldots|P|)\) into \(R\cdot P[r+q\ldots|P|)\), which is a valid operation because the prefix \(Q^{e+1}\) of \(P[r\ldots|P|)\) is replaced with the prefix \(Q^{e}\) of \(P[r+q\ldots|P|)\). Thus, it remains to prove that \(R\cdot P[r+q\ldots|P|)\) does not contain any occurrence of \(\hat{Q}^{e+1}\) (for any \(\hat{Q}\in\mathcal{Q}\)) starting at position smaller than \(|R|\). Since \(R\cdot P[r\ldots|P|)\) did not contain such an occurrence, the occurrence of \(\hat{Q}^{e+1}\) would need to end at position \(|R|+m\) or larger. The fragment \(P[r+q\ldots r+q+m)\) thus has periods \(q\) and \(\hat{q}:=|\hat{Q}|\). Moreover, by primitivity of \(Q\) and \(\hat{Q}\), the Periodicity Lemma [12] implies \(q=\hat{q}\) due to \(m\geq eq\geq e+q-1\geq\hat{q}+q-1\). However, this means that \(q\) is a period of \(P[r\ldots r+q+m]\), contradicting the definition of \(m\).
The overall running time is linear, including the preprocessing and the query time of the data structures of Theorems 2.1 and 2.2, because each iteration of the **while** loop costs constant time.
**Corollary 2.14**.: _There exists a linear-time algorithm that, given a string \(P\) and an integer \(k\in\mathbb{Z}_{+}\), constructs a string of length at most \(42k^{3}\) that is \(\operatorname{\mathsf{ed}}_{\leq k}^{w}\)-equivalent to \(P\) for every weight function \(w\)._
```
1StringReduction\((P,k)\):
2\(P^{\prime}\leftarrow\)PeriodicityReduction\((P,4k,\{Q\in\Sigma^{+}:|Q|\leq 2k\text{ and }Q\text{ is primitive}\})\);
3if\(|P^{\prime}|\geq 42k^{3}\)thenreturn\(P^{\prime}[0\ldots 21k^{3})\cdot P^{\prime}[|P^{\prime}|-21k^{3}\ldots|P^{ \prime}|)\);
4elsereturn\(P^{\prime}\);
```
**Algorithm 2**Construct a string of length at most \(42k^{3}\) that is \(\operatorname{\mathsf{ed}}_{\leq k}^{w}\)-equivalent to \(P\).
Proof.: We set \(P^{\prime}:=\)PeriodicityReduction\((P,4k,\mathcal{Q})\) with \(\mathcal{Q}\) consisting of all primitive strings of length in \([1\ldots 2k]\). We return \(P^{\prime\prime}:=P^{\prime}[0\ldots 21k^{3})\cdot P^{\prime}[|P^{\prime}|-21k^{3} \ldots|P^{\prime}|)\) or \(P^{\prime}\) depending on whether \(|P^{\prime}|\geq 42k^{3}\) or not. By Lemmas 2.9 and 2.13, the string \(P^{\prime}\) is \(\operatorname{\mathsf{ed}}_{\leq k}^{w}\)-equivalent to \(P\) and avoids \(k\)-periodicity. Thus, if \(|P^{\prime}|\leq 42k^{3}\), then the algorithm is correct. Otherwise, Lemma 2.11 implies that \(P^{\prime\prime}\) is \(\operatorname{\mathsf{ed}}_{\leq k}^{w}\)-equivalent to \(P^{\prime}\) (and, by transitivity, to \(P\)) because \(P^{\prime}[0\ldots 21k^{3})\) and
\(P^{\prime}[|P^{\prime}|-21k^{3}\ldots|P^{\prime}|)\) avoid \(k\)-periodicity. Due to Lemma 2.13, the running time is linear (a primitive fragment belongs to \(\mathcal{Q}\) if and only if its length does not exceed \(2k\), which takes \(\mathcal{O}(1)\) time to test).
**Theorem 2.15**.: _There exists a linear-time algorithm that, given strings \(X\), \(Y\) and an integer \(k\in\mathbb{Z}_{+}\), constructs strings \(X^{\prime}\), \(Y^{\prime}\) of lengths at most \(85k^{4}\) such that \(\mathsf{ed}_{\leq k}^{w}(X,Y)=\mathsf{ed}_{\leq k}^{w}(X^{\prime},Y^{\prime})\) holds for every weight function \(w\)._
```
1StringKernel\((X,Y,k)\):
2if\(|X|\leq 85k^{4}\) and \(|Y|\leq 85k^{4}\)thenreturn\((X,Y)\);
3if\(\mathsf{ed}(X,Y)>k\)thenreturn\((a^{k+1},\varepsilon)\) for some \(a\in\Sigma\);
4 Let \((x_{t},y_{t})_{t=0}^{m}\in\mathsf{A}(X,Y)\) be an alignment satisfying \(\mathsf{ed}_{\mathcal{A}}(X,Y)\leq k\);
5\(X^{\prime},Y^{\prime},P\leftarrow\varepsilon\);
6for\(t\gets 0\)to\(m\)do
7if\(t<m\)and\(x_{t+1}>x_{t}\)and\(y_{t+1}>y_{t}\)and\(X[x_{t}]=Y[y_{t}]\)then
8\(P\gets P\cdot X[x_{t}]\)
9else
10\(P\leftarrow\mathsf{StringReduction}(P,k)\);
11\(X^{\prime}\gets X^{\prime}\cdot P\);
12\(Y^{\prime}\gets Y^{\prime}\cdot P\);
13\(P\leftarrow\varepsilon\);
14if\(t<m\)and\(x_{t+1}>x_{t}\)then\(X^{\prime}\gets X^{\prime}\cdot X[x_{t}]\);
15if\(t<m\)and\(y_{t+1}>y_{t}\)then\(Y^{\prime}\gets Y^{\prime}\cdot Y[y_{t}]\);
16return\((X^{\prime},Y^{\prime})\)
```
**Algorithm 3**Construct strings \(X^{\prime},Y^{\prime}\) of length at most \(85k^{4}\) such that \(\mathsf{ed}_{\leq k}^{w}(X,Y)=\mathsf{ed}_{\leq k}^{w}(X^{\prime},Y^{\prime})\)
Proof.: Our procedure is implemented as Algorithm 3. First, if \(X\) and \(Y\) are already of length at most \(85k^{4}\), then we return \(X\) and \(Y\) unchanged. If \(\mathsf{ed}(X,Y)>k\), we return strings \(a^{k+1}\) and \(\varepsilon\), where \(a\in\Sigma\) is an arbitrary character. If \(\mathsf{ed}(X,Y)\leq k\), we construct an alignment \(\mathcal{A}:=(x_{t},y_{t})_{t=0}^{m}\in\mathsf{A}(X,Y)\) of (unweighted) cost at most \(k\). We then build the output strings \(X^{\prime}\) and \(Y^{\prime}\) during a left-to-right scan of the alignment \(\mathcal{A}\): We append to \(X^{\prime}\) and \(Y^{\prime}\) every character of \(X\) and \(Y\) (respectively) that \(\mathcal{A}\) edits. Moreover, for every pair of maximal fragments in \(X\) and \(Y\) that \(\mathcal{A}\) matches perfectly, we apply the reduction of Corollary 2.14 and append the resulting string to both \(X^{\prime}\) and \(Y^{\prime}\).
Let us now prove that the resulting instance \((X^{\prime},Y^{\prime})\) satisfies \(\mathsf{ed}_{\leq k}^{w}(X,Y)=\mathsf{ed}_{\leq k}^{w}(X^{\prime},Y^{\prime})\). This is trivial when the algorithm returns \((X,Y)\) in Line 2. If \(\mathsf{ed}(X,Y)>k\), then \(\mathsf{ed}_{\leq k}(X,Y)=\infty=\mathsf{ed}_{\leq k}(a^{k+1},\varepsilon)\) and thus also \(\mathsf{ed}_{\leq k}^{w}(X,Y)=\infty=\mathsf{ed}_{\leq k}^{w}(a^{k+1},\varepsilon)\) because the weighted edit distance with a normalized weight function is at least as large as the unweighted edit distance. In the remaining case of \(\mathsf{ed}(X,Y)\leq k\), we maintain an invariant that \(|X^{\prime}|-|Y^{\prime}|=x_{t}-y_{t}\) and \(\mathsf{ed}_{\leq k}^{w}(X,Y)=\mathsf{ed}_{\leq k}^{w}(X^{\prime}\cdot P\cdot X [x_{t}\ldots x_{m}),Y^{\prime}\cdot P\cdot Y[y_{t}\ldots y_{m}))\) hold at the beginning of every iteration of the **for** loop as well as after every execution of Line 10 and Line 13. It is easy to see that the strings \(X^{\prime}\cdot P\cdot X[x_{t}\ldots x_{m})\) and \(Y^{\prime}\cdot P\cdot Y[y_{t}\ldots y_{m})\) change only at Line 10, when \(P\) is replaced with \(\mathsf{StringReduction}(P,k)\). The correctness of this step follows directly from the definition of \(\mathsf{ed}_{\leq k}^{w}\)-equivalence (Definition 2.8) since \(\mathsf{StringReduction}(P,k)\) is \(\mathsf{ed}_{\leq k}^{w}\)-equivalent to \(P\).
Next, we show that the returned strings are of length at most \(85k^{4}\). This is clear when the algorithm terminates at Line 2 or 3. Otherwise, we apply Fact 2.7 to observe that \(X\) is decomposed
into at most \(k\) characters that \(\mathcal{A}\) deletes or substitutes (which are copied to \(X^{\prime}\)) and at most \(k+1\) maximal fragments that \(\mathcal{A}\) matches perfectly to fragments of \(Y\) (which are copied to \(X^{\prime}\) after applying StringReduction). By the guarantee of Corollary 2.14, we conclude that \(|X^{\prime}|\leq k+(k+1)\cdot 42k^{3}\leq 85k^{4}\). Symmetrically, we have \(|Y^{\prime}|\leq 85k^{4}\).
It remains to analyze the time complexity of our procedure. We use the Landau-Vishkin algorithm [10] to check whether \(\mathsf{ed}(X,Y)\leq k\) and, if so, construct the alignment \(\mathcal{A}\). This costs \(\mathcal{O}(n+k^{2})\) time, which is \(\mathcal{O}(n)\) because we perform this step only if \(n\geq k^{4}\geq k^{2}\). The scan of the alignment \(\mathcal{A}\) takes \(\mathcal{O}(m)=\mathcal{O}(n)\) time, including the applications of Corollary 2.14, which operate on strings of total length at most \(n\).
Having reduced the string lengths to \(\mathcal{O}(k^{4})\), we can use the classic dynamic programming [11] to compute \(\mathsf{ed}_{\leq k}^{w}(X,Y)\) in \(\mathcal{O}(k^{8})\) time. However, since \(w\) is a normalized, the running of [11] can be reduced to \(\mathcal{O}(nk)\). For completeness, we describe this improvement below.
**Proposition 2.16**.: _Given strings \(X,Y\) of length at most \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a weight function \(w\), the value \(\mathsf{ed}_{\leq k}^{w}(X,Y)\) can be computed in \(\mathcal{O}(nk)\) time._
Proof.: Recall that the algorithm of [11] maintains a table \(D[0\mathinner{\ldots}|X|,0\mathinner{\ldots}|Y|]\) such that \(D[i,j]=\mathsf{ed}^{w}(X[0\mathinner{\ldots}i),Y[0\mathinner{\ldots}j))\) holds for each \(i\in[0\mathinner{\ldots}|X|]\) and \(j\in[0\mathinner{\ldots}|Y|]\). We have \(D[0,0]=0\), whereas the remaining entries are constructed in \(\mathcal{O}(1)\) time each using the following formula:
\[D[i,j]=\min\begin{cases}D[i-1,j]+w(X[i-1],\varepsilon)&\text{if $i>0$},\\ D[i,j-1]+w(\varepsilon,Y[j-1])&\text{if $j>0$},\\ D[i-1,j-1]+w(X[i-1],Y[j-1])&\text{if $i,j>0$}.\end{cases} \tag{1}\]
In order to compute \(\mathsf{ed}_{\leq k}^{w}(X,Y)\), we use a modified table \(D^{\prime}[0\mathinner{\ldots}|X|,0\mathinner{\ldots}|Y|]\) such that \(D^{\prime}[0,0]=0\), \(D^{\prime}[i,j]=\infty\) if \(|i-j|>\bar{k}\), whereas the remaining entries are computed using (1) (with \(D\) replaced by \(D^{\prime}\)). A straightforward inductive argument shows that \(D^{\prime}[i,j]\geq D[i,j]\) holds for all \(i\in[0\mathinner{\ldots}|X|]\) and \(j\in[0\mathinner{\ldots}|Y|]\) and, moreover, \(D[i,j]\leq k\) implies \(D^{\prime}[i,j]=D[i,j]\). For \(|i-j|>k\), this is true because \(w\) is normalized and thus \(D[i,j]=\mathsf{ed}^{w}(X[0\mathinner{\ldots}i),Y[0\mathinner{\ldots}j))\geq \mathsf{ed}(X[0\mathinner{\ldots}i),Y[0\mathinner{\ldots}j))\geq|i-j|>k\). For \(|i-j|\leq k\), on the other hand, the argument is based on the inductive hypothesis and the fact that the weight function \(w\) has non-negative values. The entries \(D^{\prime}[i,j]=\infty\) for \(|i-j|>k\) can be set implicitly, which reduces the running time to \(\mathcal{O}(nk)\).
**Theorem 1.1**.: _Given strings \(X,Y\) of length at most \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a weight function \(w\), the value \(\mathsf{ed}_{\leq k}^{w}(X,Y)\) can be computed in \(\mathcal{O}(n+k^{5})\) time._
Proof.: We first apply Theorem 2.15 to build strings \(X^{\prime},Y^{\prime}\) of length \(\mathcal{O}(k^{4})\) such that \(\mathsf{ed}_{\leq k}^{w}(X^{\prime},Y^{\prime})=\mathsf{ed}_{\leq k}^{w}(X,Y)\). Then, we compute \(\mathsf{ed}_{\leq k}^{w}(X^{\prime},Y^{\prime})\) using Proposition 2.16. The running times of these two steps are \(\mathcal{O}(n)\) and \(\mathcal{O}(k^{4}\cdot k)=\mathcal{O}(k^{5})\), respectively, for a total of \(\mathcal{O}(n+k^{5})\).
## 3 Tree Edit Distance
### Preliminaries
For an alphabet \(\Sigma\), we define a set \(\mathsf{P}_{\Sigma}:=\bigcup_{a\in\Sigma}\{\{\mathinner{a},\mathinner{a}\}_{a}\}\) of parentheses with labels over \(\Sigma\). A _forest_ with node labels over \(\Sigma\) is a _balanced_ string of parentheses over \(\Sigma\). Formally, the set of forests with labels over \(\Sigma\) is defined as the smallest subset \(\mathcal{F}_{\Sigma}\subseteq\mathsf{P}_{\Sigma}^{*}\) satisfying the following conditions:
* \(\varepsilon\in\mathcal{F}_{\Sigma}\),
* \(F\cdot G\in\mathcal{F}_{\Sigma}\) for every \(F,G\in\mathcal{F}_{\Sigma}\),
* \((_{a}\cdot F\cdot)_{a}\in\mathcal{F}_{\Sigma}\) for every \(F\in\mathcal{F}_{\Sigma}\) and \(a\in\Sigma\).
For a forest \(F\), we define the set of _nodes_\(V_{F}\) as the set of pairs \((i,j)\in[0\mathinner{\ldot}|F|)\) such that \(F[i]\) is an opening parenthesis, \(F[j]\) is a closing parenthesis, and \(F[i\mathinner{\ldot}j]\) is balanced. For a node \(u=(i,j)\in V_{F}\), we denote the positions of the opening and the closing parenthesis by \(o(u):=i\) and \(c(u):=j\). A forest \(F\) is a tree if \((0,|F|-1)\in V_{F}\).
**Fact 3.1**.: _A forest \(F\) can be preprocessed in linear time so that one can test in constant time whether any given fragment \(F[i\mathinner{\ldot}j)\) is balanced._
Proof.: Let us define the height function \(H:[0\mathinner{\ldot}|F|]\to\mathbb{Z}\) so that \(H(i)\) equals the number of opening parentheses in \(F[0\mathinner{\ldot}i)\) minus the number of closing parentheses in \(F[0\mathinner{\ldot}i)\). Since \(F\) is balanced, the fragment \(F[i\mathinner{\ldot}j)\) is balanced if and only if \(H(i)=H(j)=\min_{m\in[i\mathinner{\ldot}j]}H(m)\). This condition can be tested in \(\mathcal{O}(1)\) time after linear-time preprocessing using range minimum queries (RMQ) [1].
A context with node labels over \(\Sigma\) is a pair \(C=\langle C_{L};C_{R}\rangle\in\mathsf{P}_{\Sigma}\times\mathsf{P}_{\Sigma}\) such that \(C_{L}\cdot C_{R}\) is a tree. The node set \(V_{C}\) of a context \(C\) is identified with the node set of the underlying tree \(C_{L}\cdot C_{R}\). The _depth_ of a context \(C\) is the number of nodes \(u\in V_{C}\) whose opening parenthesis belongs to \(C_{L}\) and closing parenthesis belongs to \(C_{R}\), that is, \(o(u)<|C_{L}|<c(u)\).
The (vertical) composition of contexts \(C,D\) results in a context \(C\star D:=\langle C_{L}\cdot D_{L};D_{R}\cdot C_{R}\rangle\). Moreover, vertical composition of a context \(C\) and a forest \(F\) results in a tree \(C\star F:=C_{L}\cdot F\cdot C_{R}\). A context \(C\) is primitive if it cannot be expressed as vertical composition of \(e\geq 2\) copies of the same context.
A context \(C\)_occurs_ in a forest \(F\) at node \(u\in V_{F}\) if \(C_{L}=F[o(u)\mathinner{\ldot}o(u)+|C_{L}|)\) and \(C_{R}=F(c(u)-|C_{R}|\mathinner{\ldot}c(u)]\), or equivalently, \(F[o(u)\mathinner{\ldot}c(u)]=C\star G\) for some forest \(G\)
### Forest Alignments and Weighted Forest Edit Distance
We begin our discussion of weighted tree edit distance by formally defining forest alignments, which are similar to alignments on strings with just a few additional restrictions to make sure the alignments make valid edits on forests.
**Definition 3.2**.: We say that an alignment \(\mathcal{A}\in\mathsf{A}(F,G)\) is a _forest alignment_ of forests \(F\) and \(G\) if the following _consistency_ conditions are satisfied for each \(u\in V_{F}\):
* either \(\mathcal{A}\) deletes both \(F[o(u)]\) and \(F[c(u)]\), or
* there exists \(v\in V_{G}\) such that \(F[o(u)]\sim_{\mathcal{A}}G[o(v)]\) and \(F[c(u)]\sim_{\mathcal{A}}G[c(v)]\).
The set of all forests alignments of \(F\) onto \(G\) is denoted with \(\mathsf{TA}(F,G)\subseteq\mathsf{A}(F,G)\).
Define \(\overline{\mathsf{P}_{\Sigma}}=\mathsf{P}_{\Sigma}\cup\varepsilon\) and a mapping \(\lambda:\overline{\mathsf{P}_{\Sigma}}\to\bar{\Sigma}\) such that \(\lambda(\mathit{\ldot}_{a})=\lambda(\mathit{\ldot}_{a})=a\) for each \(a\in\Sigma\), and \(\lambda(\varepsilon)=\varepsilon\). For a weight function \(w:\bar{\Sigma}\times\bar{\Sigma}\to\mathbb{R}_{\geq 0}\), we define a corresponding weight function \(\tilde{w}:\overline{\mathsf{P}_{\Sigma}}\times\overline{\mathsf{P}_{\Sigma}} \to\mathbb{R}_{\geq 0}\) so that \(\tilde{w}(p,q)=w(\lambda(p),\lambda(q))\) for all \(p,q\in\overline{\mathsf{P}_{\Sigma}}\). The _cost_ of a forest alignment \(\mathcal{A}\in\mathsf{TA}(F,G)\) with respect to a weight function \(w\) is defined as \(\mathsf{ted}_{\mathcal{A}}^{w}(F,G):=\frac{1}{2}\mathsf{ed}_{\mathcal{A}}^{ \bar{w}}(F,G)\). Moreover, for any two forests \(F,G\), we define the _weighted tree edit distance_\(\mathsf{ted}^{w}(F,G)=\min_{\mathcal{A}\in\mathsf{TA}(F,G)}\mathsf{ted}_{ \mathcal{A}}^{w}(F,G)\), and for a threshold \(k\in\mathbb{R}_{\geq 0}\), we set
\[\mathsf{ted}_{\leq k}^{w}(F,G)=\begin{cases}\mathsf{ted}^{w}(F,G)&\text{if } \mathsf{ted}^{w}(F,G)\leq k,\\ \infty&\text{otherwise}.\end{cases}\]
The superscript is omitted if \(w\) is the discrete metric over \(\bar{\Sigma}\).
**Fact 3.3**.: _If \(w\) is a quasimetric on \(\Sigma\), then \(\mathsf{ted}^{w}\) is a quasimetric on \(\mathcal{F}_{\Sigma}\). In that case, \(\mathsf{ted}^{w}(F,G)\) can be equivalently defined as the minimum cost of a sequence of edits transforming \(F\) into \(G\), where inserting a node with label \(b\) costs \(w(\varepsilon,b)\), deleting a node with label \(a\) costs \(w(a,\varepsilon)\), and changing a node label from \(a\) to \(b\) costs \(w(a,b)\)._
Proof.: Consider arbitrary forests \(F,G,H\in\mathcal{F}_{\Sigma}\) as well as alignments \(A=(x_{t},y_{t})_{t=0}^{m}\in\mathsf{TA}(F,G)\) and \(B=(\hat{y}_{t},\hat{z}_{t})_{t=0}^{\hat{m}}\in\mathsf{TA}(G,H)\). We can construct the product alignment \(\mathcal{A}\otimes\mathcal{B}\) as in the proof of Fact 2.5, which has \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(F,H)\leq\mathsf{ed}^{w}_{ \mathcal{A}}(F,G)+\mathsf{ed}^{w}_{\mathcal{B}}(G,H)\). Therefore, it remains to prove that \(\mathcal{A}\otimes\mathcal{B}\) is a tree alignment.
Consider an arbitrary node \(u_{F}\in V_{F}\). If \(\mathcal{A}\) deletes \(u_{F}\) (that is, it deletes both characters \(F[o(u_{F})]\) and \(F[c(u_{F})]\)), then \(\mathcal{A}\otimes\mathcal{B}\) also deletes \(u_{F}\); see Case 2 in the recursive definition of \(\mathcal{A}\otimes\mathcal{B}\). The other possibility is that \(\mathcal{A}\) aligns \(u_{F}\) with some node \(u_{G}\in V_{G}\) (that is, it aligns \(F[o(u_{F})]\) with \(G[o(u_{G})]\) and \(F[c(u_{F})]\) with \(G[c(u_{G})]\)). If \(\mathcal{B}\) deletes \(u_{G}\), then \(\mathcal{A}\otimes\mathcal{B}\) deletes \(u_{F}\); see Case 6 in the recursive definition of \(\mathcal{A}\otimes\mathcal{B}\). Finally, if \(\mathcal{B}\) aligns \(u_{G}\) with some node \(u_{H}\in V_{H}\), then \(\mathcal{A}\otimes\mathcal{B}\) aligns \(u_{F}\) with \(u_{H}\); see Case 7 in the recursive definition of \(\mathcal{A}\otimes\mathcal{B}\).
### Combinatorial Foundations
#### 3.3.1 Forests
Similar to our discussion of weighted string edit distance, before giving our tree edit distance algorithms we prove the existence of small edit distance equivalent forests for synchronized occurrences of large subforests in the input instance forests.
**Definition 3.4**.: For \(k\in\mathbb{Z}_{\geq 0}\) and a weight function \(w\), forests \(P,P^{\prime}\) are called \(\mathsf{ted}^{w}_{\leq k}\)_-equivalent_ if
\[\mathsf{ted}^{w}_{\leq k}(F,G)=\mathsf{ted}^{w}_{\leq k}(F[0\mathinner{. \mskip-1.0mu \cdot\mskip-1.0mu }p_{F})\cdot P^{\prime}\cdot F[p_{F}+|P|\mathinner{. \mskip-1.0mu \cdot\mskip-1.0mu }|F|),G[0\mathinner{. \mskip-1.0mu \cdot\mskip-1.0mu }p_{G})\cdot P^{\prime}\cdot G[p_{G}+|P| \mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }|G|))\]
holds for all forests \(F\) and \(G\) in which \(P\) occurs at positions \(p_{F}\) and \(p_{G}\), respectively, satisfying \(|p_{F}-p_{G}|\leq 2k\).
**Lemma 3.5**.: _Let \(k\in\mathbb{Z}_{+}\), let \(Q\) be a forest, and let \(e,e^{\prime}\in\mathbb{Z}_{\geq 4k}\). Then, \(Q^{e}\) and \(Q^{e^{\prime}}\) are \(\mathsf{ted}^{w}_{\leq k}\)-equivalent for every normalized weight function \(w\)._
Proof.: We assume without loss of generality that \(Q\) is primitive. (If \(Q=R^{m}\) for \(m\in\mathbb{Z}_{\geq 2}\), then \(Q^{e}=R^{me}\) and \(Q^{e^{\prime}}=R^{me^{\prime}}\) can be interpreted as powers of \(R\) rather than powers of \(Q\).) Suppose that \(Q^{e}\) occurs in forests \(F\) and \(G\) at positions \(p_{F}\) and \(p_{G}\), respectively, satisfying \(|p_{F}-p_{G}|\leq 2k\). Denote \(F^{\prime}=F[0\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }p_{F})\cdot Q^{e^{\prime}} \cdot F[p_{F}+|Q^{e}|\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }|F|)\) and \(G^{\prime}=G[0\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }p_{G})\cdot Q^{e^{\prime}} \cdot G[p_{G}+|Q^{e}|\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }|G|)\). Moreover, let \(q=|Q|\) and let \(\mathcal{A}\) be a forest alignment such that \(\mathsf{ted}^{w}(F,G)=\mathsf{ted}^{u}_{\mathcal{A}}(F,G)\leq k\).
**Claim 3.6**.: _There exist \(i_{F},i_{G}\in[0\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }3k]\) such that_
\[F[p_{F}+i_{F}\cdot q\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }p_{F}+(i_{F}+1)\cdot q)\simeq_{\mathcal{A}}G[p_{G}+i_{G}\cdot q \mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }p_{G}+(i_{G}+1)\cdot q).\]
Proof.: Let \((f_{b},g_{b})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{b}\geq p_{F}\) and \(g_{b}\geq p_{G}\). By symmetry between \(F\) and \(G\), we assume without loss of generality that \(f_{b}=p_{F}\). Consider the \(k+1\) occurrences of \(Q\) in \(F\) starting at positions \(p_{F}+i\cdot q\) for \(i\in[0\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }k]\). Since \(Q\) is balanced, the alignment \(\mathcal{A}\) (of unweighted cost at most \(k\)) matches at least one of them exactly; we can thus define \(i_{F}\in[0\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }k]\) so that \(\mathcal{A}\) matches \(F[p_{F}+i_{F}\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }q\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }p_{F}+(i_{F}+1)\cdot q)\) exactly to some fragment \(G[g_{a}\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }g_{a}+q)\). By definition of \(b\), we have \(a\geq b\) and thus \(g_{a}\geq g_{b}\geq p_{G}\). Moreover, since \(\mathsf{ted}_{\mathcal{A}}(F,G)\leq k\) and \(F[p_{F}+i_{F}\cdot q]\sim_{\mathcal{A}}G[g_{a}]\), we have \(g_{a}\leq(p_{F}+i_{F}\cdot q)+2k\leq p_{F}+kq+2k\leq p_{G}+kq+4k\leq p_{G}+3kq\), where the last inequality follows from \(q\geq 2\) (recall that \(Q\) is balanced, so its length is even). Furthermore, since \(Q\) is primitive (i.e., distinct from all its non-trivial cyclic rotations), we conclude that \(g_{a}=p_{G}+i_{G}\cdot q\) for some \(i_{G}\in[0\mathinner{.\mskip-1.0mu \cdot\mskip-1.0mu }3k]\)
Now, if \(Q^{e}=F[p_{F}\dots p_{F}+e\cdot q)=G[p_{G}\dots p_{G}+e\cdot q)\) is replaced with \(Q^{e^{\prime}}\) for \(e^{\prime}\geq e-1\), we can interpret this as replacing \(Q=F[p_{F}+i_{F}\cdot q\dots p_{F}+(i_{F}+1)\cdot q)=G[p_{G}+i_{G}\cdot q\dots p _{G}+(i_{G}+1)\cdot q)\) with \(Q^{1+e^{\prime}-e}\). By Claim 3.6, \(\mathcal{A}\) can be trivially adapted without modifying its cost, and hence \(\mathsf{ted}^{w}(F^{\prime},G^{\prime})\leq\mathsf{ted}^{w}_{\mathcal{A}}(F,G )=\mathsf{ted}^{w}(F,G)\). If \(e^{\prime}<e-1\), we repeat the above argument to decrement the exponent one step at a time, still concluding that \(\mathsf{ted}^{w}(F^{\prime},G^{\prime})\leq\mathsf{ted}^{w}(F,G)\). In either case, the converse inequality follows by symmetry between \((F,G,e)\) and \((F^{\prime},G^{\prime},e^{\prime})\).
We say that a forest \(F\) avoids horizontal \(k\)-periodicity if there is no forest \(Q\) of length \(|Q|\in[1\dots 4k]\) such that \(Q^{4k+1}\) occurs in \(F\).
**Lemma 3.7**.: _Let \(k\in\mathbb{Z}_{+}\) and let \(P,P^{\prime}\) be forests of length \(|P|,|P^{\prime}|\geq 74k^{3}\) avoiding horizontal \(k\)-periodicity. Then, \(P\) and \(P^{\prime}\) are \(\mathsf{ted}^{w}_{\leq k}\)-equivalent for every normalized quasimetric \(w\)._
Proof.: Suppose that \(P\) occurs in forests \(F\) and \(G\) at positions \(p_{F}\) and \(p_{G}\), respectively, satisfying \(|p_{F}-p_{G}|\leq 2k\). Denote \(F^{\prime}=F[0\dots p_{F})\cdot P^{\prime}\cdot F[p_{F}+|P|\dots|F|)\) and \(G^{\prime}=G[0\dots p_{G})\cdot P^{\prime}\cdot G[p_{G}+|P|\dots|G|)\).
Let \(\mathcal{A}=(f_{t},g_{t})_{t=0}^{m}\) be an alignment such that \(\mathsf{ted}^{w}(F,G)=\mathsf{ted}^{w}_{\mathcal{A}}(F,G)\leq k\). Moreover, let \((f_{a},g_{a})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{a}\geq p_{F}\) or \(g_{a}\geq p_{G}\), and let \((f_{b},g_{b})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{b}\geq p_{F}+|P|\) and \(g_{b}\geq p_{G}+|P|\). We construct an alignment \(\mathcal{A}^{\prime}\) so that it:
* aligns \(F[0\dots f_{a})\) with \(G[0\dots g_{a})\) in the same way as \(\mathcal{A}\) does;
* deletes \(F[f_{a}\dots p_{F})\) and inserts \(G[g_{a}\dots p_{G})\) (at least one of these fragments is empty);
* matches \(F[p_{F}\dots p_{F}+|P|)=P\) with \(G[p_{G}\dots p_{G}+|P|)=P\);
* deletes \(F[p_{F}+|P|\ldots f_{b})\) and inserts \(G[p_{G}+|P|\ldots g_{b})\) (at least one of these fragments is empty);
* aligns \(F[f_{b}\ldots|F|)\) with \(G[g_{b}\ldots|G|)\) in the same way as \(\mathcal{A}\) does.
To prove that \(\mathcal{A}^{\prime}\) is a forest alignment, let us consider several possibilities for a node \(u\) in \(F\).
* If \(u\) is inside \(P=F[p_{F}\ldots p_{F}+|P|)\), then \(\mathcal{A}^{\prime}\) matches \(u\) to the corresponding node inside \(P=G[p_{G}\ldots p_{G}+|P|)\).
* If \(u\) is outside \(P=F[p_{F}\ldots p_{F}+|P|)\) and \(\mathcal{A}\) aligns \(u\) to a node \(v\) of \(G\) inside \(P=G[p_{G}\ldots p_{G}+|P|)\), then \(o(u),c(u)\in[f_{a}\ldots p_{F})\cup[p_{F}+|P|\ldots f_{b})\) because of the non-crossing property of \(\mathcal{A}\ni(f_{a},g_{a}),(f_{b},g_{b})\). Hence, \(\mathcal{A}^{\prime}\) deletes \(u\).
* If \(u\) is outside \(P=F[p_{F}\ldots p_{F}+|P|)\) and \(\mathcal{A}\) aligns \(u\) to a node \(v\) of \(G\) outside \(P=G[p_{G}\ldots p_{G}+|P|)\), then \(o(u),c(u)\in[0\ldots f_{a})\cup[f_{b}\ldots|F|)\) because of the non-crossing property of \(\mathcal{A}\ni(f_{a},g_{a}),(f_{b},g_{b})\). Hence, \(\mathcal{A}^{\prime}\) also aligns \(u\) to \(v\).
* If \(u\) is outside \(P=F[p_{F}\ldots p_{F}+|P|)\) and \(\mathcal{A}\) deletes \(u\), then \(\mathcal{A}^{\prime}\) also deletes \(u\).
Our next goal is to prove that \(\mathsf{ted}^{w}_{\mathcal{A}^{\prime}}(F,G)\leq\mathsf{ted}^{w}_{\mathcal{A} }(F,G)\). This relies on the following claim.
**Claim 3.8**.: _There exists \(t\in[a\ldots b]\) such that \(f_{t}-g_{t}=p_{F}-p_{G}\)._
Proof.: Let us partition \(P=F[p_{F}\ldots p_{F}+|P|)\) into individual characters representing deletions or substitutions of \(\mathcal{A}\) and maximal fragments that \(\mathcal{A}\) matches perfectly (to fragments of \(G\)). By Fact 2.7, the number of such fragments is at most \(2k+1\) and their total length is at least \(|P|-2k\). Hence, one of these fragments, denoted \(R=F[r_{F}\ldots r_{F}+|R|)\), is of length at least \(\frac{|P|-2k}{2k+1}\geq 24k^{2}\). Suppose that the fragment of \(G\) matched perfectly to \(R\) is \(G[r_{G}\ldots r_{G}+|R|)\). If \(r_{F}-p_{F}=r_{G}-p_{G}\), the claim holds for \(t\) such that \((f_{t},g_{t})=(r_{F},r_{G})\). Otherwise, we note that \(R\) has period \(q:=|(r_{F}-p_{F})-(r_{G}-p_{G})|\in[1\ldots 4k]\). Let \(q=R[0\ldots q)\) and observe that \(\frac{|R|}{q}\geq\frac{24k^{2}}{4k}\geq 4k+2\). Hence, \(Q^{4k+2}\) is a substring of \(P\); since \(P\) avoids horizontal \(k\)-periodicity, we conclude that no cyclic rotation of \(Q\) is balanced.
As \(Q\) is a substring of a balanced string \(P\), this means that the number of opening parentheses in \(Q\) does not match the number of closing parentheses in \(Q\). By symmetry (up to reversal), we assume without loss of generality that \(Q\) has more opening than closing parentheses. Thus, there exists a node \(u\) in \(F\) such that \(o(u)\in[r_{F}\ldots r_{F}+q)\) yet \(c(u)\geq r_{F}+|R|\). In particular, \(c(u)-o(u)\geq|R|-q\geq 24k^{2}-4k>8k\). Let \(v,v^{\prime}\) be the nodes in \(G\) matched with \(u\) by \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\), respectively. Note that \(|o(v)-o(v^{\prime})|\leq 4k\) and \(|c(v)-c(v^{\prime})|\leq 4k\). Due to \(c(v^{\prime})-o(v^{\prime})=c(u)-o(u)>8k\), we conclude that \(v\) is ancestor of \(v^{\prime}\) or vice versa. In either case, we have \(0\geq(o(v^{\prime})-o(v))\cdot(c(v^{\prime})-c(v))=((o(u)-o(v))-(p_{F}-p_{G})) \cdot((c(u)-c(v))-(p_{F}-p_{G}))\). The value \((f_{t}-g_{t})-(p_{F}-p_{G})\) can change by at most one for subsequent indices \(t\). The sign of this value is different when \((f_{t},g_{t})=(o(u),o(v))\) and \((f_{t},g_{t})=(c(u),c(v))\), so it must be equal to \(0\) at some intermediate index \(t\).
The alignments \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\) only differ in how they align \(F[f_{a}\ldots f_{t})\) with \(G[g_{a}\ldots g_{t})\) and \(F[f_{t}\ldots f_{b})\) with \(G[g_{t}\ldots g_{b})\), and, by Fact 2.6, \(\mathcal{A}^{\prime}\) provides an optimum alignment of these fragments. Now, if \(P=F[p_{F}\ldots p_{F}+|P|)=G[p_{G}\ldots p_{G}+|P|)\) is modified to \(P^{\prime}\), then \(\mathcal{A}^{\prime}\) can be trivially adapted without modifying its cost and hence \(\mathsf{ted}^{w}(F^{\prime},G^{\prime})\leq\mathsf{ted}^{w}_{\mathcal{A}^{ \prime}}(F,G)=\mathsf{ted}^{w}(F,G)\). The converse inequality follows by symmetry between \((F,G,P)\) and \((F^{\prime},G^{\prime},P^{\prime})\).
#### 3.3.2 Contexts
**Definition 3.9**.: For \(k\in\mathbb{Z}_{\geq 0}\) and a weight function \(w\), contexts \(P=\langle P_{L};P_{R}\rangle\) and \(P^{\prime}=\langle P^{\prime}_{L};P^{\prime}_{R}\rangle\) are called _\(\mathsf{ted}_{\leq k}^{w}\)-equivalent_ if
\[\mathsf{ted}_{\leq k}^{w}(F,G)=\mathsf{ted}_{\leq k}^{w}(F[0\ldots o (u))\cdot P^{\prime}_{L}\cdot F[o(u)+|P_{L}|\ldots c(u)-|P_{R}|]\cdot P^{\prime }_{R}\cdot F(c(u)\ldots|F|),\\ G[0\ldots o(v))\cdot P^{\prime}_{L}\cdot G[o(v)+|P_{L}|\ldots c(u )-|P_{R}|]\cdot P^{\prime}_{R}\cdot G(c(v)\ldots|G|))\]
holds for all forests \(F\) and \(G\) in which \(P\) occurs at nodes \(u\) and \(v\), respectively, satisfying \(|o(u)-o(v)|\leq 2k\) and \(|c(u)-c(v)|\leq 2k\).
**Lemma 3.10**.: _Let \(k\in\mathbb{Z}_{+}\), let \(Q\) be a context, and let \(e,e^{\prime}\in\mathbb{Z}_{\geq 6k}\). Then, \(Q^{e}\) and \(Q^{e^{\prime}}\) are \(\mathsf{ted}_{\leq k}^{w}\)-equivalent for every normalized weight function \(w\)._
Proof.: We assume without loss of generality that \(Q\) is primitive. (If \(Q=R^{m}\) for \(m\in\mathbb{Z}_{\geq 2}\), then \(Q^{e}=R^{me}\) and \(Q^{e^{\prime}}=R^{me^{\prime}}\) can be interpreted as powers of \(R\) rather than powers of \(Q\).) Let \(Q=\langle Q_{L};Q_{R}\rangle\) with \(q_{L}=|Q_{L}|\) and \(q_{R}=|Q_{R}|\). Suppose that \(Q^{e}\) occurs in forests \(F\) and \(G\) at nodes \(u\) and \(v\), respectively, satisfying \(|o(u)-o(v)|\leq 2k\) and \(|c(u)-c(v)|\leq 2k\). Denote
\[F^{\prime}=F[0\ldots o(u))\cdot Q_{L}^{e^{\prime}}\cdot F[o(u)+| Q_{L}^{e}|\ldots c(u)-|Q_{R}^{e}|]\cdot Q_{R}^{e^{\prime}}\cdot F(c(u)\ldots|F|),\] \[G^{\prime}=G[0\ldots o(v))\cdot Q_{L}^{e^{\prime}}\cdot G[o(v)+| Q_{L}^{e}|\ldots c(u)-|Q_{R}^{e}|]\cdot Q_{R}^{e^{\prime}}\cdot G(c(v) \ldots|G|).\]
For \(i\in[0\ldots e)\), let \(u_{i}\) be the node of \(F\) with \(o(u_{i})=o(u)+i\cdot q_{L}\) (and \(c(u_{i})=c(u)-i\cdot q_{R}\)) and let \(v_{i}\) be the node of \(G\) with \(o(v_{i})=o(v)+i\cdot q_{L}\) (and \(c(v_{i})=c(v)-i\cdot q_{R}\)). Moreover, let \(\mathcal{A}\) be an optimal forest alignment such that \(\mathsf{ted}(F,G)=\mathsf{ted}_{\mathcal{A}}(F,G)\leq k\).
**Claim 3.11**.: _There exist \(i_{F},i_{G}\in[0\ldots 5k]\) such that_
\[F[o(u)+i_{F}\cdot q_{L}\ldots o(u)+(i_{F}+1)\cdot q_{L})\simeq _{\mathcal{A}}G[o(v)+i_{G}\cdot q_{L}\ldots o(v)+(i_{G}+1)\cdot q _{L}),\] \[F(c(u)-(i_{F}+1)\cdot q_{R}\ldots c(u)-i_{F}\cdot q_{R}]\simeq _{\mathcal{A}}G(c(v)-(i_{F}+1)\cdot q_{R}\ldots c(v)-i_{G}\cdot q _{R}].\]
Proof.: Let \((f_{b},g_{b})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{b}\geq o(u)\) and \(g_{b}\geq o(v)\). By symmetry between \(F\) and \(G\), we may assume without loss of generality that \(f_{b}=o(u)\). Consider the \(k+1\) disjoint occurrences of \(C\) in \(F\) at positions \((o(u)+i\cdot q_{L},c(u)+i\cdot q_{R})\) for \(i\in[0\ldots k]\). The alignment \(\mathcal{A}\) (of unweighted cost at most \(k\)) must match one of these occurrences perfectly to a context within \(G\). We pick the index \(i_{F}\in[0\ldots k]\) of one such perfectly matched occurrence and suppose that it occurs at a node \(v^{\prime}\) of \(G\).
In particular,
\[F[o(u_{i_{F}})\ldots o(u_{i_{F}})+q_{L})\simeq_{\mathcal{A}}G[o(v ^{\prime})\ldots o(v^{\prime})+q_{L}),\] \[F(c(u_{i_{F}})-q_{R}\ldots c(u_{i_{F}})]\simeq_{\mathcal{A}}G(c(v ^{\prime})-q_{R}\ldots c(v^{\prime})].\]
Since \((f_{b},g_{b})\in\mathcal{A}\), we must have \(o(v^{\prime})\geq g_{b}\geq o(v)\) by the non-crossing property of \(\mathcal{A}\). At the same time, since the unweighted cost of \(\mathcal{A}\) does not exceed \(k\), we have \(o(v^{\prime})\leq o(u_{i_{F}})+2k\leq o(u)+kq_{L}+2k\leq o(v)+kq_{L}+4k\leq o (v)+5kq_{L}\). Similarly, \(c(v^{\prime})\geq c(v)-5kq_{R}\), which also implies \(c(v^{\prime})\leq c(v)\).
Our next goal is to show that \(v^{\prime}=v_{i_{G}}\) for some \(i_{G}\in[0\ldots 5k]\). For a proof by contradiction, suppose that \(o(v_{i})<o(v^{\prime})<o(v_{i+1})\) for some \(i\in[0\ldots 5k)\). Due to \(c(v^{\prime})>c(v)-5kq_{R}\), this also implies that \(c(v_{i})>c(v^{\prime})>c(v_{i+1})\), i.e., that \(v^{\prime}\) is a node on the path between \(v_{i}\) and \(v_{i+1}\). Suppose that the length of this path is \(\ell\) and the node \(v^{\prime}\) is at distance \(\ell^{\prime}\) from \(v_{i}\). Hence, \(G[o(v_{i})\ldots o(v^{\prime}))\) has
\(\ell^{\prime}\) unmatched opening parentheses out of the \(\ell\) unmatched opening parentheses in \(Q_{L}\). Moreover, \(G[o(v_{i})\,\mathinner{.\,.}o(v^{\prime}))\cdot G[o(v^{\prime})\,\mathinner{.\,.}o( v_{i+1}))=Q_{L}=G[o(v^{\prime})\,\mathinner{.\,.}o(v_{i+1}))\cdot G[o(v_{i})\, \mathinner{.\,.}o(v^{\prime}))\), and thus there is a primitive string \(Q_{L}\) such that \(G[o(v^{\prime})\,\mathinner{.\,.}o(v_{i+1}))\) and \(G[o(v_{i})\,\mathinner{.\,.}o(v^{\prime}))\) are both powers of \(Q_{L}\). The number of unmatched opening parentheses is \(Q_{L}\) must be a common divisor of \(\ell\) and \(\ell^{\prime}\), i.e., \(Q_{L}\) can be expressed as a string power with exponent \(\ell/\gcd(\ell,\ell^{\prime})\). A symmetric argument shows that \(Q_{R}\) can be expressed as a string power with exponent \(\ell/\gcd(\ell,\ell^{\prime})\). Overall, we conclude that \(C\) can be expressed as a context power with exponent \(\ell/\gcd(\ell,\ell^{\prime})\), contradicting the primitivity of \(C\). Hence, \(v^{\prime}=v_{i_{G}}\) for some \(i_{G}\in[0\,\mathinner{.\,.}5k]\) holds as claimed and, in particular, \(o(v^{\prime})=o(v)+i_{G}q_{L}\) and \(c(v^{\prime})=c(v)-i_{G}q_{R}\).
Now, if the occurrences of \(Q^{e}\) at nodes \(u,v\) are replaced with \(Q^{e^{\prime}}\) for \(e^{\prime}\geq e-1\), we can interpret this as replacing the occurrences of \(Q\) at nodes \(u_{i_{F}},v_{i_{G}}\) with \(Q^{1+e^{\prime}-e}\). By Claim 3.11, \(\mathcal{A}\) can be trivially adapted without modifying its cost, and hence \(\mathsf{ted}^{w}(F^{\prime},G^{\prime})\leq\mathsf{ted}^{w}_{\mathcal{A}}(F,G )=\mathsf{ted}^{w}(F,G)\). If \(e^{\prime}<e-1\), we repeat the above argument to decrement the exponent one step at a time, still concluding that \(\mathsf{ted}^{w}(F^{\prime},G^{\prime})\leq\mathsf{ted}^{w}(F,G)\). In either case, the converse inequality follows by symmetry between \((F,G,e)\) and \((F^{\prime},G^{\prime},e^{\prime})\).
We say that a context \(P=\langle P_{L};P_{R}\rangle\) avoids vertical \(k\)-periodicity if it cannot be expressed as \(P=C\star Q^{6k+1}\star D\) for some contexts \(C,Q,D\) satisfying \(|Q|\in[1\,\mathinner{.\,.}8k]\).
**Lemma 3.12**.: _Let \(k\in\mathbb{Z}_{+}\), let \(P=\langle P_{L};P_{R}\rangle,P^{\prime}=\langle P^{\prime}_{L};P^{\prime}_{R}\rangle\) be contexts of length \(|P_{L}|+|P_{R}|,|P^{\prime}_{L}|+|P^{\prime}_{R}|\geq 578k^{4}\) that avoid vertical \(k\)-periodicity and whose halves do not contain any balanced substring of length more than \(74k^{3}\). Then, \(P\) and \(P^{\prime}\) are \(\mathsf{ted}^{w}_{\leq k}\)-equivalent for every normalized weight function \(w\)._
Proof.: Suppose that \(P\) occurs in forests \(F\) and \(G\) at nodes \(u\) and \(v\), respectively, satisfying \(|o(u)-o(v)|\leq 2k\) and \(|c(u)-c(v)|\leq 2k\), Denote
\[F^{\prime} =F[0\,\mathinner{.\,.}o(u))\cdot P^{\prime}_{L}\cdot F[o(u)+|P_{L }|\,\mathinner{.\,.}c(u)-|P_{R}|]\cdot P^{\prime}_{R}\cdot F(c(u)\,\mathinner{.\,.}|F|),\] \[G^{\prime} =G[0\,\mathinner{.\,.}o(v))\cdot P^{\prime}_{R}\cdot G[o(v)+|P_{L }|\,\mathinner{.\,.}c(v)-|P_{R}|]\cdot P^{\prime}_{R}\cdot G(c(v)\,\mathinner {.\,.}|G|).\]
Let \(\mathcal{A}=(f_{t},g_{t})_{t=0}^{m}\) be an optimal forest alignment such that \(\mathsf{ted}(F,G)=\mathsf{ted}_{\mathcal{A}}(F,G)\leq k\). Moreover, let \((f_{a},g_{a})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{a}\geq o(u)\) or \(g_{a}\geq o(v)\), \((f_{b},g_{b})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{b}\geq o(u)+|P_{L}|\) and \(g_{b}\geq o(v)+|P_{L}|\), \((f_{c},g_{c})\in\mathcal{A}\) be the leftmost element of \(\mathcal{A}\) such that \(f_{c}>c(u)-|P_{R}|\) or \(g_{c}>c(v)-|P_{R}|\), and let \((f_{d},g_{d})\) be the leftmost element of \(\mathcal{A}\) such that \(f_{d}>c(u)\) and \(g_{d}>c(v)\). We construct an alignment \(\mathcal{A}^{\prime}\) so that it:
* aligns \(F[0\,\mathinner{.\,.}\,f_{a})\) with \(G[0\,\mathinner{.\,.}g_{a})\) in the same way as \(\mathcal{A}\) does;
* deletes \(F[f_{a}\,\mathinner{.\,.}o(u))\) and inserts \(G[g_{a}\,\mathinner{.\,.}o(v))\) (at least one of these fragments is empty);
* matches \(F[o(u)\,\mathinner{.\,.}o(u)+|P_{L}|)=P_{L}\) with \(G[o(v)\,\mathinner{.\,.}o(v)+|P_{L}|)=P_{L}\);
* if \(b>c\), deletes \(F[o(u)+|P_{L}|\,\mathinner{.\,.}c(u)-|P_{R}|]\) and inserts \(G[o(v)+|P_{L}|\,\mathinner{.\,.}c(v)-|P_{R}|]\);
* if \(b\leq c\), deletes \(F[o(u)+|P_{L}|\,\mathinner{.\,.}f_{b})\) and inserts \(G[o(v)+|P_{L}|\,\mathinner{.\,.}g_{b})\) (at least one of these fragments is empty);
* if \(b\leq c\), aligns \(F[f_{b}\,\mathinner{.\,.}f_{c})\) with \(G[g_{b}\,\mathinner{.\,.}g_{c})\) in the same way as \(\mathcal{A}\) does;
* if \(b\leq c\), deletes \(F[f_{c}\,\mathinner{.\,.}c(u)-|P_{R}|]\) and inserts \(G[g_{c}\,\mathinner{.\,.}c(v)-|P_{R}|]\) (at least one of these fragments is empty);
* matches \(F(c(u)-|P_{R}|\,\mathinner{.\,.}c(u))=P_{R}\) with \(G(c(v)-|P_{R}|\,\mathinner{.\,.}c(v)]=P_{R}\);
* deletes \(F(c(u)\,\mathinner{.\,.}f_{d})\) and inserts \(G(c(v)\,\mathinner{.\,.}g_{d})\) (at least one of these fragments is empty);
* aligns \(F[f_{d}\,\mathinner{.\,.}|F|)\) with \(G[g_{d}\,\mathinner{.\,.}|G|)\) in the same way as \(\mathcal{A}\) does.
To prove that \(\mathcal{A}^{\prime}\) is a forest alignment, let us consider several possibilities for a node \(u^{\prime}\) in \(F\)
* If \(u^{\prime}\) belongs to \(P=\langle F[o(u)\,\mathpunct{a}\,o(u)+|P_{L}|\rangle;F(c(u)-|P_{R}|\,\mathpunct{a}\,c(u)]\rangle\), then \(\mathcal{A}^{\prime}\) matches \(u^{\prime}\) to the corresponding node that belongs to \(P=\langle G[o(v)\,\mathpunct{a}\,o(v)+|P_{L}|\rangle;G(c(v)-|P_{R}|\,\mathpunct{a}\,c(v)]\rangle\).
* If \(u^{\prime}\) is outside \(F[o(u)\,\mathpunct{a}\,c(u)]\) and \(\mathcal{A}\) aligns \(u^{\prime}\) to a node \(v^{\prime}\) of \(G\) inside \(G[o(v)\,\mathpunct{a}\,c(v)]\), then \(o(u^{\prime}),c(u^{\prime})\in[f_{a}\mathpunct{a}\,o(u))\cup(c(u)\,\mathpunct{a} \,f_{d})\) because of the non-crossing property of \(\mathcal{A}\ni(f_{a},g_{a}),(f_{d},g_{d})\). Hence, \(\mathcal{A}^{\prime}\) deletes \(u^{\prime}\).
* If \(u^{\prime}\) is outside \(F[o(u)\,\mathpunct{a}\,c(u)]\) and \(\mathcal{A}\) aligns \(u^{\prime}\) to a node \(v^{\prime}\) of \(G\) outside \(G[o(v)\,\mathpunct{a}\,c(u)]\), then \(o(u^{\prime}),c(u^{\prime})\in[0\,\mathpunct{a}\,f_{d}\,\mathpunct{a}\,|F|)\) because of the non-crossing property of \(\mathcal{A}\ni(f_{a},g_{a}),(f_{d},g_{d})\). Hence, \(\mathcal{A}^{\prime}\) also aligns \(u^{\prime}\) to \(v^{\prime}\).
* If \(u^{\prime}\) is outside \(F[o(u)\,\mathpunct{a}\,c(u)]\) and \(\mathcal{A}\) deletes \(u^{\prime}\), then \(\mathcal{A}^{\prime}\) also deletes \(u^{\prime}\).
* If \(b>c\) and \(u^{\prime}\) is inside \(F[o(u)+|P_{L}|\,\mathpunct{a}\,c(u)-|P_{R}|]\), then \(\mathcal{A}^{\prime}\) deletes \(u^{\prime}\).
* If \(b\leq c\), \(u^{\prime}\) is inside \(F[o(u)+|P_{L}|\,\mathpunct{a}\,c(u)-|P_{R}|]\), and \(\mathcal{A}\) aligns \(u^{\prime}\) to a node \(v^{\prime}\) of \(G\) outside \(G[o(v)+|P_{L}|\,\mathpunct{a}\,c(v)-|P_{R}|]\), then \(o(u^{\prime}),c(u^{\prime})\in[o(u)+|P_{L}|\,\mathpunct{a}\,f_{b})\cup[f_{c} \,\mathpunct{a}\,c(u)-|P_{R}|]\) because of the non-crossing property of \(\mathcal{A}\ni(f_{b},g_{b}),(f_{c},g_{c})\). Hence, \(\mathcal{A}^{\prime}\) deletes \(u^{\prime}\).
* If \(b\leq c\), \(u^{\prime}\) is inside \(F[o(u)+|P_{L}|\,\mathpunct{a}\,c(u)-|P_{R}|]\), and \(\mathcal{A}\) aligns \(u^{\prime}\) to a node \(v^{\prime}\) of \(G\) inside \(G[o(v)+|P_{L}|\,\mathpunct{a}\,c(v)-|P_{R}|]\), then \(o(u^{\prime}),c(u^{\prime})\in[f_{b}\,\mathpunct{a}\,f_{c})\) because of the non-crossing property of \(\mathcal{A}\ni(f_{b},g_{b}),(f_{c},g_{c})\). Hence, \(\mathcal{A}^{\prime}\) also aligns \(u^{\prime}\) to \(v^{\prime}\).
* If \(b\leq c\), \(u^{\prime}\) is inside \(F[o(u)+|P_{L}|\,\mathpunct{a}\,c(u)-|P_{R}|]\), and \(\mathcal{A}\) deletes \(u^{\prime}\), then \(\mathcal{A}^{\prime}\) also deletes \(u^{\prime}\).
Let us now prove that \(\mathsf{ted}_{\mathcal{A}^{\prime}}^{w}(F,G)\leq\mathsf{ted}_{\mathcal{A}}^{w} (F,G)\). This relies on the following claim.
**Claim 3.13**.: _There exist \(t_{L}\in[a\,\mathpunct{a}\,b]\) such that \(f_{t_{L}}-g_{t_{L}}=o(u)-o(u)\) and \(t_{R}\in[c\,\mathpunct{a}\,d]\) such that \(f_{t_{L}}-g_{t_{L}}=c(u)-c(u)\)._
Proof.: By symmetry (up to reversal), we can focus without loss of generality on the first claim. Moreover, by symmetry between \(F\) and \(G\), we can assume without loss of generality that \(f_{a}=o(u)\); in particular, this implies \(f_{a}-g_{a}\geq o(u)-o(v)\). If there exists \(t\in[a\,\mathpunct{a}\,b]\) such that \(f_{t}-g_{t}\leq o(u)-o(v)\), then, since \(f_{t}-g_{t}\) may change by at most one for subsequent positions, there is also \(t_{L}\in[a\,\mathpunct{a}\,b]\) such that \(f_{t_{L}}-g_{t_{L}}=o(u)-o(v)\), Consequently, it remains to consider the case when \(f_{t}-g_{t}>o(u)-o(v)\) holds for all \(t\in[a\,\mathpunct{a}\,b]\).
Let us express \(P\) as a vertical composition of \(e\) contexts \(P=P_{0}\star\cdots\star P_{e-1}\), where \(e\) is the depth of \(P\). Observe that the occurrences of \(P\) at node \(u\) in \(F\) and \(v\) in \(G\), for each \(i\in[0\,\mathpunct{a}\,e)\), induce occurrences of \(P_{i}\) at some nodes \(u_{i}\) in \(F\) and \(v_{i}\) in \(G\). Since \(F(o(u_{i})\,\mathpunct{a}\,o(u_{i})+|P_{i,L}|)\) and \(F(c(u_{i})-|P_{i,R}|\,\mathpunct{a}\,c(u_{i}))\) are balanced, we conclude that \(|P_{i}|\leq 2\cdot(74k^{3}+1)\leq 150k^{3}\). We can decompose \([0\,\mathpunct{a}\,e)\) into at most \(k\) individual indices \(i\) such that \(\mathcal{A}\) does not match perfectly the occurrence of \(P_{i}\) at \(v_{i}\) and at most \(k+1\) intervals \([i\,\mathpunct{a}\,i^{\prime})\) such that \(\mathcal{A}\) matches the occurrence \(P_{i}\star\cdots\star P_{i^{\prime}-1}\) at \(v_{i}\) perfectly to a context in \(F\). Let us choose such an interval \([i\,\mathpunct{a}\,i^{\prime})\) maximizing \(|P_{i}\star\cdots\star P_{i^{\prime}-1}|\); this length is at least \(\frac{578k^{4}-k\cdot 150k^{3}}{k+1}\geq 214k^{3}\). Let \(i^{\prime\prime}\in[i\,\mathpunct{a}\,i^{\prime})\) be the maximum index such that \(|P_{i^{\prime\prime}}\star\cdots\star P_{i^{\prime}-1}|>8k\); note that \(|P_{i}\star\cdots\star P_{i^{\prime\prime}-1}|\geq 214k^{3}-(150k^{3}+8k)\geq 56k^{2}\).
For each \(j\in[i\,\mathpunct{a}\,i^{\prime\prime}]\), denote by \(u^{\prime}_{j}\) be the node of matched with \(v_{j}\) by \(\mathcal{A}\). Note that \(|o(u^{\prime}_{j})-o(u_{j})|\leq 4k\) and \(|c(u^{\prime}_{j})-c(u_{j})|\leq 4k\). Moreover, \(o(u^{\prime}_{j})-o(v_{j})>o(u)-o(v)=o(u_{j})-o(v_{j})\) implies \(o(u^{\prime}_{j})>o(u_{j})\). Since \(|P_{j}\star\cdots\star P_{i^{\prime}-1}|>8k\), we conclude that \(u^{\prime}_{j}=u_{j^{\prime}}\) for some \(j^{\prime}\in[j\,\mathpunct{a}\,i^{\prime})\). Moreover, if \(j>i\), then \(u^{\prime}_{j}\) must be a child of \(u^{\prime}_{j-1}\). Hence, there exists \(\delta>0\) such that \(u^{\prime}_{j}=u_{j+\delta}\) holds for all \(j\in[i\,\mathpunct{a}\,i^{\prime\prime}]\). For \(j\in[i\,\mathpunct{a}\,i^{\prime\prime})\), this implies \(P_{j}=P_{j+\delta}\) and that both halves of \(P_{j}\star\cdots\star P_{j+\delta-1}\) are of length at most \(4k\). In particular, if we define \(Q=P_{i}\star\cdots\star P_{i+\delta-1}\), then, due to \(\frac{|P_{i}\star\cdots\star P_{i^{\prime}-1
* \(F[f_{a}\,\mathpunct{f_{t_{L}}}\,\mathpunct{f_{t_{L}}})\) with \(G[g_{a}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathp{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}} \,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathp{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}} \,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathp{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathp{g_{t_{L}}} \,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunctunct{g_{t_{L}}}\, \mathpunctunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\,\mathpunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\,\mathpunctunct{g_{t_{L}}}\, \mathpunct{g_{t_{L}}}\,\mathpunctunctunct{g_
Proof.: Algorithm 4 provides a recursive procedure that, for every balanced fragment \(F[i\,\ldots j)\) of \(F\), constructs a piece decomposition \(\mathcal{D}(i,j)\) of \(F[i\,\ldots j)\) consisting of pieces of size at most \(t\). In the corner cases of \(j=i\) and \(j\in(i\,\ldots\,i+t]\), we return \(\mathcal{D}(i,j)=\emptyset\) and \(\mathcal{D}(i,j)=\{F[i\,\ldots\,j)\}\), respectively. Otherwise, we iteratively grow fragments \(F[i\,\ldots\,i^{\prime})\) and \(F[j^{\prime}\,\ldots\,j)\) (initially empty) maintaining the following invariants:
1. \(F[i^{\prime}\,\ldots\,j^{\prime})\) is balanced;
2. \(|F[i\,\ldots\,i^{\prime})|+|F[j^{\prime}\,\ldots\,j)|\leq t\);
3. if \(F[i\,\ldots\,j)\) is a tree, then \(F[i\,\ldots\,i^{\prime})=F[j^{\prime}\,\ldots\,j)=\varepsilon\) or \(\langle F[i\,\ldots\,i^{\prime});F[j^{\prime}\,\ldots\,j)\rangle\) is a context;
4. if \(F[i\,\ldots\,j)\) is not a tree, then \(F[i\,\ldots\,i^{\prime})\) and \(F[j^{\prime}\,\ldots\,j)\) are balanced.
At each iteration, we identify a position \(m\in(i^{\prime}\,\ldots\,j^{\prime}]\) such that \(F[i^{\prime}\,\ldots\,m)\) is a tree (such a position always exists due to \(j-i>t\) and by invariants 1, 2).
1. We set \(i^{\prime}\gets m\) as long as it would not violate invariant 2.
2. If \(m\neq j^{\prime}\), we set \(j\gets m\) as long as it would not violate invariant 2.
3. If \(m=j^{\prime}\) and \(F[i\,\ldots\,j)\) is a tree, we set \((i^{\prime},j^{\prime})\leftarrow(i^{\prime}+1,j^{\prime}-1)\) as long as it would not violate invariant 2.
4. Otherwise, we return \(\mathcal{D}(i,j):=\{\langle F[i\,\ldots\,i^{\prime});F[j^{\prime}\,\ldots\,j) \rangle\}\cup\mathcal{D}(i^{\prime},m)\cup\mathcal{D}(m,j^{\prime})\) if \(F[i\,\ldots\,j)\) is a tree and \(\mathcal{D}(i,j):=\mathcal{D}(i,i^{\prime})\cup\mathcal{D}(i^{\prime},m)\cup \mathcal{D}(m,j^{\prime})\cup\mathcal{D}(j^{\prime},j)\) if \(F[i\,\ldots\,j)\) is not a tree.
It is easy to see that cases 1-3 preserve the invariants and hence case 4 results in a valid piece decomposition with pieces of size at most \(t\).
Next, we prove that the number of pieces is at most \(\max(1,\frac{6(j-i)}{t}-3)\) if \(F[i\,\ldots\,j)\) is a tree and at most \(\max(1,\frac{6(j-i)}{t}-1)\) otherwise. This holds trivially if \(j\leq i+t\), where Algorithm 4 terminates at Line 1 or 2. If \(F[i\,\ldots\,j)\) is a tree of size \(j-i>t\), then Algorithm 4 terminates at Line 10. We consider several sub-cases:
1. If \(m-i^{\prime}\leq\frac{2t}{3}\) and \(j^{\prime}-m\leq\frac{t}{3}\), then \(|\mathcal{D}(i,j)|\leq 3<\frac{6(j-i)}{t}-3\) because \(j-i>t\).
2. If \(m-i^{\prime}\leq\frac{2t}{3}\) and \(j^{\prime}-m>\frac{t}{3}\), then \((m-i)+(j-j^{\prime})>t\) because the test in Line 6 failed. Hence, \(j^{\prime}-m<j-i-t\) and \(|\mathcal{D}(i,j)|\leq 2+|\mathcal{D}(m,j^{\prime})|\leq 2+\frac{6(j^{\prime}-m)}{t}-1< \frac{6(j-i-t)}{t}+1<\frac{6(j-i)}{t}-3\).
3. If \(m-i^{\prime}>\frac{2t}{3}\) and \(j^{\prime}-m=0\), then \((i^{\prime}+1-i)+(j-j^{\prime}+1)>t\) because the test in Line 9 failed. Hence, \(j^{\prime}-i^{\prime}\leq j-i-t+1\) and \(|\mathcal{D}(i,j)|\leq 1+|\mathcal{D}(i^{\prime},j^{\prime})|\leq 1+\frac{6(j^{ \prime}-i^{\prime})}{t}-3\leq\frac{6(j-i-t+1)}{t}-2<\frac{6(j-i)}{t}-3\).
4. If \(m-i^{\prime}>\frac{2t}{3}\) and \(0<j^{\prime}-m\leq\frac{t}{3}\), then \((i^{\prime}-i)+(j-m)>t\) because the test in Line 7 failed. Hence, \(m-i^{\prime}<j-i-t\) and \(|\mathcal{D}(i,j)|\leq 2+|\mathcal{D}(i^{\prime},m)|\leq 2+\frac{6(m-i^{\prime})}{t}-3< \frac{6(j-i-t)}{t}-1<\frac{6(j-i)}{t}-3\).
5. If \(m-i^{\prime}>\frac{2t}{3}\) and \(j^{\prime}-m>\frac{t}{3}\), then \(|\mathcal{D}(i,j)|\leq 1+|\mathcal{D}(i^{\prime},m)|+|\mathcal{D}(m,j^{\prime})|\leq 1+ \frac{6(m-i^{\prime})}{t}-3+\frac{6(j^{\prime}-m)}{t}-1\leq\frac{6(j-i)}{t}-3\).
If \(F[i\,\ldots\,j)\) is not a tree, then Algorithm 4 terminates at Line 8. We consider several sub-cases:
1. If \(m-i^{\prime}\leq\frac{2t}{3}\) and \(j^{\prime}-m\leq\frac{t}{3}\), then \(|\mathcal{D}(i,j)|\leq 4<\frac{6(j-i)}{t}-1\) because \(j-i>t\).
2. If \(m-i^{\prime}\leq\frac{2t}{3}\) and \(j^{\prime}-m>\frac{t}{3}\), then \((m-i)+(j-j^{\prime})>t\) because the test in Line 6 failed. Hence, \(j^{\prime}-m<j-i-t\) and \(|\mathcal{D}(i,j)|\leq 3+|\mathcal{D}(m,j^{\prime})|\leq 3+\frac{6(j^{ \prime}-m)}{t}-1<\frac{6(j-i-t)}{t}+2<\frac{6(j-i)}{t}-1\).
3. If \(m-i^{\prime}>\frac{2t}{3}\) and \(j^{\prime}-m=0\), then \(|\mathcal{D}(i,j)|\leq 2+|\mathcal{D}(i^{\prime},j^{\prime})|\leq 2+\frac{6(j^{ \prime}-i^{\prime})}{t}-3\leq\frac{6(j-i)}{t}-1\).
4. If \(m-i^{\prime}>\frac{2t}{3}\) and \(0<j^{\prime}-m\leq\frac{t}{3}\), then \((i^{\prime}-i)+(j-m)>t\) because the test in Line 7 failed. Hence, \(m-i^{\prime}<j-i-t\) and \(|\mathcal{D}(i,j)|\leq 3+|\mathcal{D}(i^{\prime},m)|\leq 3+\frac{6(m-i^{\prime})}{t}-3< \frac{6(j-i-t)}{t}<\frac{6(j-i)}{t}-1\).
5. If \(m-i^{\prime}>\frac{2t}{3}\) and \(j^{\prime}-m>\frac{t}{3}\), then \(|\mathcal{D}(i,j)|\leq 2+|\mathcal{D}(i^{\prime},m)|+|\mathcal{D}(m,j^{\prime})|\leq 2+ \frac{6(m-i^{\prime})}{t}-3+\frac{6(j^{\prime}-m)}{t}-1\leq\frac{6(j-i)}{t}-2< \frac{6(j-i)}{t}-1\).
It remains to provide a linear-time implementation of our algorithm. We assume that there are bidirectional pointers between the opening and the closing parentheses representing the same node. Such pointers can be constructed using a linear-time stack-based preprocessing of the input forest \(F\). Each iteration of the **while** loop increases \(j-j^{\prime}+i^{\prime}-i\) (except for the final one), so a single call to the \(\mathcal{D}(i,j)\) function costs \(\mathcal{O}(t)\) due to invariant (b). The total number of calls is \(\mathcal{O}(|\mathcal{D}(0,|F|)|)=\mathcal{O}(\frac{1}{t}\cdot|F|)\), so the overall running time, including preprocessing, is \(\mathcal{O}(|F|)\).
**Lemma 3.16**.: _Given forests \(F\) and \(G\) of total size \(n\), a piece decomposition \(\mathsf{D}\) of \(F\), and an integer \(s\in\mathbb{Z}_{+}\), one can find in \(\mathcal{O}(n+|\mathsf{D}|s^{3})\) time a maximum-size set \(S\subseteq\mathsf{D}\times\mathcal{P}(G)\) that, for some alignment \(\mathcal{A}\in\mathsf{TA}(F,G)\) of width at most \(s\), contains only pairs of pieces that \(\mathcal{A}\) matches perfectly._
```
1Pairs\((\mathsf{D}_{i,j},G[i^{\prime}\ldots j^{\prime}))\):
2\(S\leftarrow\emptyset\);
3if\(i^{\prime}<\min(j^{\prime},i+s)\)then\(S\stackrel{{\max}}{{\leftarrow}}\mathtt{Pairs}(D_{i,j},G[i^{\prime}+1 \ldots j^{\prime}))\);
4if\(j^{\prime}>\max(i^{\prime},j-s)\)then\(S\stackrel{{\max}}{{\leftarrow}}\mathtt{Pairs}(D_{i,j},G[i^{\prime} \ldots j^{\prime}-1))\);
5if\(\mathsf{D}_{i,j}=\{F[i\ldots j)\}\)and\(F[i\ldots j)=G[i^{\prime}\ldots j^{\prime})\)then
6\(S\stackrel{{\max}}{{\leftarrow}}\{(F[i\ldots j),G[i^{\prime} \ldots j^{\prime}))\}\);
7if\(\mathsf{D}_{i,j}=\mathsf{D}_{i,m}\cup\mathsf{D}_{m,j}\)for some \(m\in(i\ldots j)\)then
8foreach\(m^{\prime}\in[m-s\ldots m+s]\cap[i^{\prime}\ldots j^{\prime}]\)do
9\(S\stackrel{{\max}}{{\leftarrow}}\mathtt{Pairs}(D_{i,m},G[i^{ \prime}\ldots m^{\prime}))\cup\mathtt{Pairs}(D_{m,j},G[m^{\prime}\ldots j^{ \prime}))\);
10
11if\(\mathsf{D}_{i,j}=\{\langle F[i\ldots i+\ell);F[j-r\ldots j)\rangle\}\cup \mathsf{D}_{i+\ell,j-r}\)then
12if\(F[i\ldots i+\ell)=G[i^{\prime}\ldots i^{\prime}+\ell)\)and\(F[j-r\ldots j)=G[j^{\prime}-r\ldots j^{\prime})\)and\(G[i^{\prime}+\ell\ldots j^{\prime}-r)\) is balancedthen
13\(S\stackrel{{\max}}{{\leftarrow}}\{(\langle F[i\ldots i+\ell);F[j-r \ldots j)\rangle,\langle G[i^{\prime}\ldots i^{\prime}+\ell);G[j^{\prime}-r \ldots j^{\prime}))\rangle\}\cup\\ \mathtt{Pairs}(D_{i+\ell,j-r},G[i^{\prime}+\ell\ldots j^{\prime}-r))\]
14\(S\stackrel{{\max}}{{\leftarrow}}\mathtt{Pairs}(D_{i+\ell,j-r},G [\max(i+\ell-s,i^{\prime})\ldots\min(j-r+s,j^{\prime})))\);
15
16return\(S\);
```
**Algorithm 5**Compute a maximum-size element of \(\mathcal{S}(D_{i,j},G[i^{\prime}\ldots j^{\prime}))\).
Proof.: For a piece decomposition \(\mathsf{D}_{i,j}\) of a balanced fragment \(F[i\ldots j)\) and a fragment \(G[i^{\prime}\ldots j^{\prime})\), let \(\mathcal{S}(D_{i,j},G[i^{\prime}\ldots j^{\prime}))\) be the family of all subsets of \(\mathsf{D}_{i,j}\times\mathcal{P}(G[i^{\prime}\ldots j^{\prime}))\) that, for some alignment \(\mathcal{A}\in\mathsf{A}(F[i\ldots j),G[i^{\prime}\ldots j^{\prime}))\) of width at most \(s\), contain only pairs of pieces that \(\mathcal{A}\) matches perfectly. Algorithm 5 implements a recursive procedure \(\mathtt{Pairs}(\mathsf{D}_{i,j},G[i^{\prime}\ldots j^{\prime}))\) that computes a maximum-size element of \(\mathcal{S}(D_{i,j},G[i^{\prime}\ldots j^{\prime}))\) assuming that \(i^{\prime}\in[i-s\ldots i+s]\) and \(j^{\prime}\in[j-s\ldots j+s]\). It uses an \(S\stackrel{{\max}}{{\leftarrow}}S^{\prime}\) operator that assigns \(S\gets S^{\prime}\) if \(|S^{\prime}|>|S|\). The algorithm returns the largest of the following candidates:
1. \(\emptyset\). This is trivially valid because every alignment \(\mathcal{A}\in\mathsf{A}(F[i\ldots j),G[i^{\prime}\ldots j^{\prime}))\) of width at most \(s\) witnesses \(\emptyset\in\mathcal{S}(D_{i,j},G[i^{\prime}\ldots j^{\prime}))\).
2. \(\mathtt{Pairs}(D_{i,j},G[i^{\prime}+1\ldots j^{\prime}))\) if \(i^{\prime}<\min(j^{\prime},i+s)\). Let \(S=\mathtt{Pairs}(D_{i,j},G[i^{\prime}+1\ldots j^{\prime}))\) with a witness alignment \(\mathcal{A}^{\prime}\in\mathsf{A}(F[i\ldots j),G[i^{\prime}+1\ldots j^{ \prime}))\). An alignment obtained from \(\mathcal{A}^{\prime}\) by prepending \((i,i^{\prime})\), which corresponds to inserting \(Y[i^{\prime}]\), witnesses \(S\in\mathcal{S}(D_{i,j},G[i^{\prime}\ldots j^{\prime}))\).
3. \(\mathtt{Pairs}(D_{i,j},G[i^{\prime}\ldots j^{\prime}-1))\) if \(j^{\prime}>\max(i^{\prime},j-s)\). Let \(S=\mathtt{Pairs}(D_{i,j},G[i^{\prime}\ldots j^{\prime}-1))\) with a witness alignment \(\mathcal{A}^{\prime}\in\mathsf{A}(F[i\ldots j),G[i^{\prime}\ldots j^{\prime}-1))\). An alignment obtained from \(\mathcal{A}^{\prime}\) by appending \((j,j^{\prime})\), which corresponds to inserting \(Y[j^{\prime}-1]\), witnesses \(S\in\mathcal{S}(D_{i,j},G[i^{\prime}\ldots j^{\prime}))\).
4. \(\{(F[i\,..\,j),G[i^{\prime}\,..\,j^{\prime}))\}\) if \(\mathsf{D}_{i,j}=\{F[i\,..\,j)\}\) and \(F[i\,..\,j)=G[i^{\prime}\,..\,j^{\prime})\). The alignment \((i+t,\)\(j+t)_{t=0}^{i^{\prime}-i}\in\mathsf{A}(F[i\,..\,j),G[i^{\prime}\,..\,j^{ \prime}))\) witnesses \(\{(F[i\,..\,j),G[i^{\prime}\,..\,j^{\prime}))\}\in\mathcal{S}(D_{i,j},G[i^{ \prime}\,..\,j^{\prime}))\).
5. \(\mathtt{Pairs}(D_{i,m},G[i^{\prime}\,..\,m^{\prime}))\cup\mathtt{Pairs}(D_{m,j},G[m^{\prime}\,..\,j^{\prime}))\) if \(\mathsf{D}_{i,j}=\mathsf{D}_{i,m}\cup\mathsf{D}_{m,j}\) for some \(m\in(i\,..\,j)\) and \(m^{\prime}\in[m-s\,..\,m+s]\cap[i^{\prime}\,..\,j^{\prime}]\). Denote \(S_{L}=\mathtt{Pairs}(D_{i,m},G[i^{\prime}\,..\,m^{\prime}))\) and \(S_{R}=\mathtt{Pairs}(D_{m,j},G[m^{\prime}\,..\,j^{\prime}))\) with witness alignments \(\mathcal{A}_{L}\in\mathsf{A}(F[i\,..\,m),G[i^{\prime}\,..\,m^{\prime}))\) and \(\mathcal{A}_{R}\in\mathsf{A}(F[m\,..\,j),G[m^{\prime}\,..\,j^{\prime}))\), respectively. Stitching \(\mathcal{A}_{L}\) and \(\mathcal{A}_{R}\) at the common endpoint \((m,m^{\prime})\) yields an alignment witnessing \(S_{L}\cup S_{R}\in\mathcal{S}(D_{i,j},G[i^{\prime}\,..\,j^{\prime}))\).
6. \(\{(\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle,\langle G[i^{\prime}\,..\,i ^{\prime}+\ell);G[j^{\prime}-r\,..\,j^{\prime}))\}\cup\mathtt{Pairs}(D_{i+ \ell,j-r},G[i^{\prime}+\ell\,..\,j^{\prime}-r))\) if \(\mathsf{D}_{i,j}=\{\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle\}\cup\mathtt{ D}_{i+\ell,j-r}\) and \(\langle G[i\,..\,i^{\prime}+\ell);G[j^{\prime}-r\,..\,j^{\prime})\rangle\) is a context in \(G\) matching \(\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle\). Consider a set \(S^{\prime}=\mathtt{Pairs}(D_{i+\ell,j-r},G[i^{\prime}+\ell\,..\,j^{\prime}-r))\) and a witness alignment \(\mathcal{A}^{\prime}\in\mathsf{A}(F[i+\ell\,..\,j-r),G[i^{\prime}+\ell\,..\,j ^{\prime}-r))\). Stitching \((i+t,i^{\prime}+t)_{t=0}^{\ell}\), \(\mathcal{A}^{\prime}\), and \((j+t,j^{\prime}+t)_{t=-r}^{0}\) at the common endpoints yields an alignment witnessing \(S^{\prime}\cup\{(F[i\,..\,i+\ell);F[j-r\,..\,j)),\langle G[i^{\prime}\,..\,i ^{\prime}+\ell);G[j^{\prime}-r\,..\,j^{\prime}))\}\in\mathcal{S}(D_{i,j},G[i^{ \prime}\,..\,j^{\prime}))\).
7. \(\mathtt{Pairs}(D_{i+\ell,j-r},G[\max(i+\ell-s,i^{\prime})\,..\,\min(j-r+s,j ^{\prime})))\) if \(\mathsf{D}_{i,j}=\{\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle\}\cup\mathsf{ D}_{i+\ell,j-r}\). Let \(S=\mathtt{Pairs}(D_{i+\ell,j-r},G[i^{\prime}+\ell^{\prime}\,..\,j^{\prime}-r^{ \prime}))\), where \(\ell^{\prime}=\max(0,i+\ell-s-i^{\prime})\) and \(r^{\prime}=\max(0,j^{\prime}-j+r-s)\), with a witness alignment \(\mathcal{A}^{\prime}\in\mathsf{A}(F[i+\ell,j-r),G[i^{\prime}+\ell^{\prime},j ^{\prime}-r^{\prime}))\). Stitching \((i+t,i^{\prime}+t)_{t=0}^{\ell}\), \((i+t,i^{\prime}+\ell^{\prime})_{t=\ell^{\prime}}^{\ell}\), \(\mathcal{A}^{\prime}\), \((j+t,j^{\prime}-r^{\prime})_{t=-r}^{-r}\), and \((j+t,j^{\prime}+t)_{t=-r^{\prime}}^{0}\) yields an alignment witnessing \(S\in\mathcal{S}(D_{i,j},G[i^{\prime}\,..\,j^{\prime}))\).
Next, consider a maximum-size element \(S\in\mathcal{S}(D_{i,j},G[i^{\prime}\,..\,j^{\prime}))\) and a witness alignment \(\mathcal{A}\in\mathsf{A}(F[i\,..\,j),G[i^{\prime}\,..\,j^{\prime}))\) of width at most \(s\).
1. If \(\mathsf{D}_{i,j}=\emptyset\), we must have \(S=\emptyset\), which is covered by candidate 1.
2. Suppose that \(\mathsf{D}_{i,j}=\{F[i\,..\,j)\}\). The case of \(S=\emptyset\) is covered by candidate 1. Otherwise, \(S=\{(F[i\,..\,j),G[i^{\prime\prime}\,..\,j^{\prime\prime}))\}\) for some \(i^{\prime\prime}\in[i^{\prime}\,..\,i+s]\) and \(j^{\prime\prime}\in[j-s\,..\,j^{\prime}]\). This is covered by \(i^{\prime\prime}-i^{\prime}\) applications of candidate 2, \(j^{\prime}-j^{\prime\prime}\) applications of candidate 3, and finally an application of candidate 4.
3. Suppose that \(\mathsf{D}_{i,j}=\mathsf{D}_{i,m}\cup\mathsf{D}_{m,j}\) for some \(m\in(i\,..\,j)\). Since the width of \(\mathcal{A}\) does not exceed \(s\), we must have \((m,m^{\prime})\in\mathcal{A}\) for some \(m^{\prime}\in[m-s\,..\,m+s]\cap[i^{\prime}\,..\,j^{\prime}]\). Consequently, \(S\) can be expressed as a union of an element of \(\mathcal{S}(\mathsf{D}_{i,m},G[i^{\prime}\,..\,m^{\prime}))\) and an element of \(\mathcal{S}(D_{m,j},G[m^{\prime}\,..\,j^{\prime}))\). This case is thus covered by candidate 5.
4. Suppose that \(\mathsf{D}_{i,j}=\{\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle\}\cup\mathsf{ D}_{i+\ell,j-r}\). If \(S\) does not contain any pair of the form \((\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle,\langle G[i^{\prime\prime}\,..\,i^{ \prime\prime}+\ell);G[j^{\prime\prime}-r\,..\,j^{\prime\prime})\rangle)\), then \(S\in\mathcal{S}(D_{i+\ell,j-r},G[\max(i+\ell-s,i^{\prime})\,..\,\min(j-r+s,j^{ \prime})))\), and this case is covered by candidate 7. Otherwise, we must have \(i^{\prime\prime}\in[\max(i^{\prime},i-s)\,..\,i+s]\) and \(j^{\prime\prime}\in[j-s\,..\,\min(j^{\prime},j+s)]\). This is covered by \(i^{\prime\prime}-i^{\prime}\) applications of candidate 2, \(j^{\prime}-j^{\prime\prime}\) applications of candidate 3, and finally an application of candidate 4 because \(S\setminus\{(\langle F[i\,..\,i+\ell);F[j-r\,..\,j)\rangle,\langle G[i^{\prime}\,..\,i ^{\prime}+\ell);G[j^{\prime}-r\,..\,j^{\prime}))\}\in\mathcal{S}(D_{i+\ell,j-r},G[ i^{\prime}+\ell\,..\,j^{\prime}-r))\).
This completes the proof that Algorithm 5 is correct. The sought set \(S\) is obtained via a call \(\mathtt{Pairs}(\mathsf{D},G[0\,..\,|G|))\) which is valid as long as \(s\geq\big{|}|F|-|G|\big{|}\). Otherwise, there is no alignment \(\mathcal{A}\in\mathsf{TA}(F,G)\) of width at most \(s\), and thus we return \(S=\emptyset\).
As for the efficient implementation, we use memoization to make sure that each call to \(\mathtt{Pairs}\) is executed at most once. The number of calls is \(\mathcal{O}(|\mathsf{D}|\cdot s^{2})\) and each one performs \(\mathcal{O}(s)\) instructions. In order to implement every instruction in \(\mathcal{O}(1)\) time, we implement sets as persistent linked lists augmented with their size (this is valid because the arguments of every union operation are guaranteed to be disjoint). Moreover, we use Theorem 2.1 (for checking whether fragments of \(F\) match fragments of \(G\)) and Fact 3.
**Lemma 3.17**.: _There exists a linear-time algorithm that, given a forest \(P\) and an integer \(k\in\mathbb{Z}_{+}\), constructs a forest of length at most \(74k^{3}\) that is \(\mathsf{ted}_{\leq k}^{w}\)-equivalent to \(P\) for every normalized quasimetric \(w\)._
```
1HorizontalReduction\((P,k)\):
2\(P^{\prime}\leftarrow\mathsf{PeriodicityReduction}(P,4k,\{Q\in\Sigma^{+}:|Q|\leq 4k\text { and }Q\text{ is a primitive forest}\})\);
3if\(|P^{\prime}|\geq 74k^{3}\)thenreturn\(\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\s{\sfrac{\s{\s}}}{{2}}}}}{{ {\{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{ }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\}\}\}\ \} \)\)\) \) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \\ \ \ \ \ \ \ \ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\}\\\\\\\\\\\\\\\\}\\\\\\\\}\\\\\\\\\\\}\\\\\\\\\\\\\\\\}\\\\\}\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\\\\}\\\\\\\\\\\\\\}\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\}\\\\\\\\\\\\\}\\\\\\\\\\\}\\\\\\\\\\\\}\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\}\\\\}\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\}\\\\\\\\\\\\\\\\\\\\\\\\\\\\
sorting [17, 18] to map characters of \(\mathbf{P}\) (depth-1 contexts) to integer identifiers. By composing the contexts corresponding to the resulting string \(\mathbf{P}^{\prime}\), we obtain a context \(P^{\prime}\). We return \(P^{\prime\prime}:=\bigstar_{i=0}^{17k^{2}-1}\langle\left(\iota_{a}(\iota_{a}) _{a}\right)^{i};\left(\iota_{a}\right){}^{17k^{2}-1-i}\rangle_{a}\rangle\) (for an arbitrary label \(a\in\Sigma\)) or \(P^{\prime}\) depending on whether \(|P^{\prime}|\geq 578k^{4}\) or not.
Note that \(|P^{\prime\prime}|=\sum_{i=0}^{17k^{2}-1}(1+2\cdot i+2\cdot(17k^{2}-1-i)+1)=17k ^{2}\cdot 2\cdot 17k^{2}=578k^{4}\). Thus, the resulting context (either \(P^{\prime}\) or \(P^{\prime\prime}\)) is guaranteed to be of length at most \(578k^{4}\). Let us now argue that it is \(\mathsf{ted}^{w}_{\leq k}\)-equivalent to \(P\) for every normalized quasimetric \(w\). By Lemma 3.17, the forests \(F^{\prime}_{i}\) and \(G^{\prime}_{i}\) are \(\mathsf{ted}^{w}_{\leq k}\)-equivalent to \(F_{i}\) and \(G_{i}\), respectively, and thus \(\bigstar_{i=0}^{e-1}\mathbf{P}[i]\) is \(\mathsf{ted}^{w}_{\leq k}\)-equivalent to \(P\). By Lemma 2.13, the context \(P^{\prime}\) is obtained from \(\bigstar_{i=0}^{e-1}\mathbf{P}[i]\) by repeatedly replacing \(Q^{6k+1}\) with \(Q^{6k}\) for primitive contexts \(Q\) of length at most \(8k\). By Lemma 3.10, \(Q^{6k+1}\) is then \(\mathsf{ted}^{w}_{\leq k}\)-equivalent to \(Q^{6k}\), so this operation preserves \(\mathsf{ted}^{w}_{\leq k}\)-equivalence, i.e., \(P^{\prime}\) is also \(\mathsf{ted}^{w}_{\leq k}\)-equivalent to \(P\). Moreover, each depth-1 context in \(\mathbf{P}^{\prime}\) originates from \(\mathbf{P}\), so each forest occurring in (either half of) \(P^{\prime}\) is of length at most \(74k^{3}\). Furthermore, Lemma 2.13 guarantees that \(P^{\prime}\) is not of the form \(C\star Q^{6k+1}\star D\) for any context \(Q\) of length at most \(8k\), and thus \(P^{\prime}\) avoids vertical \(k\)-periodicity. By construction, \(P^{\prime\prime}\) avoids vertical \(k\)-periodicity and its halves contain only forests of lengths at most \(74k^{3}\) (in fact, at most \(34k^{2}\)). Consequently, Lemma 3.12 implies that \(P^{\prime\prime}\) is \(\mathsf{ted}^{w}_{\leq k}\)-equivalent to \(P^{\prime}\) (and, by transitivity, to \(P\)) provided that \(|P^{\prime\prime}|\geq 578k^{4}\).
As for the running time analysis, we note that all applications of Lemma 3.17 concern disjoint fragments of \(P\), so the total cost of the calls to HorizontalReduction is linear. Assigning integer identifiers to contexts \(\mathbf{P}[i]\) and applying Lemma 2.13 also takes linear time. Finally, \(P^{\prime\prime}\) is constructed only if \(|P^{\prime}|\geq 578k^{4}\), so the cost of this step is also be bounded by \(\mathcal{O}(k^{4})=\mathcal{O}(|P|)\).
**Theorem 3.19**.: _There exists an \(\mathcal{O}(n)\)-time algorithm that, given forests \(F\), \(G\) of size at most \(n\geq 12716k^{5}\) and an integer \(k\in\mathbb{Z}_{+}\), constructs forests \(F^{\prime}\), \(G^{\prime}\) of lengths at most \(\frac{n}{2}+6358k^{5}\) such that \(\mathsf{ted}^{w}_{\leq k}(F,G)=\mathsf{ted}^{w}_{\leq k}(F^{\prime},G^{\prime})\) holds for every normalized quasimetric \(w\)._
Proof.: By symmetry, we assume without loss of generality that \(|F|\geq|G|\). We start by applying Lemma 3.15 to construct a piece decomposition \(\mathsf{D}\) of \(F\) consisting of at most \(12k-1\) pieces of length at most \(\lceil\frac{n}{2k}\rceil\) each. Next, we use Lemma 3.16 to identify a maximum-size set \(S\subseteq\mathsf{D}\times\mathcal{P}(G)\) that, for some alignment \(\mathcal{A}\in\mathsf{A}(F,G)\) of width at most \(2k\), contains only pairs of pieces that \(\mathcal{A}\) matches perfectly. If \(|S|<|\mathsf{D}|-k\), we return \(F^{\prime}=(\iota_{a})_{a})^{k+1}\) and \(G^{\prime}=\varepsilon\) for some \(a\in\Sigma\). Otherwise, for each pair of matching forests \(F[i\,\mathp
total cost is \(\mathcal{O}(n)\) by Lemmas 3.17 and 3.18, respectively.
**Corollary 3.20**.: _There exists a linear-time algorithm that, given forests \(F\), \(G\) and an integer \(k\in\mathbb{Z}_{+}\), constructs forests \(F^{\prime}\), \(G^{\prime}\) of lengths at most \(12717k^{5}\) such that \(\mathsf{ted}_{\leq k}^{w}(F,G)=\mathsf{ted}_{\leq k}^{w}(F^{\prime},G^{\prime})\) holds for every normalized quasimetric \(w\)._
Proof.: We iteratively apply Theorem 3.19 as long as \(\max(|F|,|G|)>12717k^{5}\) and return the resulting pair of forests. Formally, we construct a sequence \((F_{i},G_{i})_{i=0}^{t}\) such that \((F_{0},G_{0})=(F,G)\) and \(\mathsf{ted}_{\leq k}^{w}(F_{i+1},G)_{i+1}=\mathsf{ted}_{\leq k}^{w}(F_{i},G_ {i})\) holds for every \(i\in[0\,\mathpunct{.}\,t)\). Consider forests \((F_{i},G_{i})\) at iteration \(i\). If \(n_{i}:=\max(|F_{i}|,|G_{i}|)\leq 12717k^{5}\), we set \(t:=i\) and return \((F^{\prime},G^{\prime}):=(F_{i},G_{i})\). Otherwise, we apply Theorem 3.19 to derive forests \(F_{i+1}\) and \(G_{i+1}\) of lengths at most \(n_{i+1}:=\max(|F_{i+1}|,|G_{i+1}|)\leq\frac{1}{2}n_{i}+6358k^{5}\) such that \(\mathsf{ted}_{\leq k}^{w}(F_{i+1},G)_{i+1}=\mathsf{ted}_{\leq k}^{w}(F_{i},G_ {i})\). Since \(n_{i+1}-12716k^{5}\leq\frac{1}{2}(n_{i}-12716k^{5})\), the value \(n_{i}\) strictly decreases at each iteration and thus the process terminates. Moreover, the running time of each iteration is \(\mathcal{O}(n_{i})=\mathcal{O}(n_{i}-12716k^{5})\). The latter values form a geometric series dominated by the leading term at \(i=0\). Hence, the total running time is linear in the input size.
**Theorem 1.2**.: _Given forests \(F,G\) of length at most \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a quasimetric \(w\), the value \(\mathsf{ted}_{\leq k}^{w}(F,G)\) can be computed in \(\mathcal{O}(n+k^{15})\) time. Moreover, \(\mathsf{ted}_{\leq k}(F,G)\) can be computed in \(\mathcal{O}(n+k^{7}\log k)\) time._
Proof.: We first apply Corollary 3.20 to build forests \(F^{\prime},G^{\prime}\) of length \(\mathcal{O}(k^{5})\) such that \(\mathsf{ted}_{\leq k}^{w}(F,G)=\mathsf{ted}_{\leq k}^{w}(F^{\prime},G^{\prime})\). Then, we compute \(\mathsf{ted}_{\leq k}^{w}(F^{\prime},G^{\prime})\) using the algorithm of Demaine, Mozes, Rossman, and Weimann [10]. The running times of these two steps are \(\mathcal{O}(n)\) and \(\mathcal{O}((k^{5})^{3})\), respectively, for a total of \(\mathcal{O}(n+k^{15})\). If \(w\) is the discrete metric (the unweighted case), then we compute \(\mathsf{ted}_{\leq k}(F^{\prime},G^{\prime})\) using the algorithm of Akmal and Jin [1], which costs \(\mathcal{O}(k^{5}\cdot k^{2}\cdot\log(k^{5}))=\mathcal{O}(k^{7}\log k)\) time.
## 4 Dyck Edit Distance
In this section we give a deterministic algorithm that computes weighted Dyck edit distance of a given input string. Formally we show the following.
**Theorem 1.3**.: _Given a string \(X\) of length \(n\), an integer \(k\in\mathbb{Z}_{+}\), and a skewmetric \(w\), the value \(\mathsf{dylck}_{\leq k}^{w}(X)\) can be computed in \(\mathcal{O}(n+k^{12})\) time._
### Preliminaries
In Dyck Language, the alphabet \(\Sigma\) consists of two disjoint sets \(T\) and \(\overline{T}\) of _opening_ and _closing_ parentheses, respectively, with a bijection \(f:T\to\overline{T}\) mapping each opening parenthesis to the corresponding closing parenthesis. We extend this mapping to an involution \(f:T\cup\overline{T}\to T\cup\overline{T}\) and then to an involution \(f:\Sigma^{*}\to\Sigma^{*}\) mapping each string \(X[0]X[1]\cdots X[|X|-1]\) to its reverse complement \(\overline{X[|X|-1]}\cdots\overline{X[1]\,X[0]}\). Given two strings \(X,Y\), we denote their concatenation by \(XY\) or \(X\cdot Y\).
The _Dyck_ language \(\mathsf{Dyck}(\Sigma)\subseteq\Sigma^{*}\) consists of all well-parenthesized expression over \(\Sigma\); formally, it can be defined using a context-free grammar whose only non-terminal \(S\) admits productions \(S\to SS\), \(S\to\varnothing\) (empty string), and \(S\to aS\overline{a}\) for all \(a\in T\).
**Definition 4.1** (Heights).: Given an alphabet set \(\Sigma\), define the function \(h:\Sigma\to\{-1,1\}\) where \(h(a)=1\) if \(a\in\Sigma\) is an opening parenthesis and \(h(a)=-1\) otherwise. Given a string \(X\in\Sigma^{n}\), define the height of a position \(i\) where \(0\leq i\leq n\), as \(H(i)=\sum_{j=0}^{i-1}h(X[j])\).
Here \(H(i)\) is the difference between the number of opening parentheses and the number of closing parentheses in \(X[0..i)\).
**Definition 4.2** (Peaks and valleys).: Given a string \(X\in\Sigma\), an index \(i\in[1\dots n)\) is called a peak if \(H(i-1)<H(i)>H(i+1)\) and a valley if \(H(i-1)>H(i)<H(i+1)\).
### Dyck Language Alignments and Weighted Dyck Edit Distance
We say that \(\mathcal{M}\subseteq\{(i,j)\subseteq\mathbb{Z}^{2}:i<j\}\) is a _non-crossing matching_ if any two distinct pairs \((i,j),(i^{\prime},j^{\prime})\in\mathcal{M}\) satisfy \(i<j<i^{\prime}<j^{\prime}\) or \(i<i^{\prime}<j^{\prime}<j\). Such a matching can also be interpreted as a function \(\mathcal{M}:\mathbb{Z}\to\mathbb{Z}\cup\{\bot\}\) with \(\mathcal{M}(i)=j\) if \((i,j)\in\mathcal{M}\) or \((j,i)\in\mathcal{M}\) for some \(j\in\mathbb{Z}\), and \(\mathcal{M}(i)=\bot\) otherwise. For a string \(X\in\Sigma^{*}\) we define its _Dyck language alignment_ to be a matching function \(\mathcal{M}\) as defined above.
For two fragments \(X[p\dots q)\) and \(X[p^{\prime}\dots q^{\prime})\) of \(X\), we write \(X[p\dots q)\simeq_{\mathcal{M}}\overline{X[p^{\prime}\dots q^{\prime})}\) if \(X[p\dots q)=\overline{X[p^{\prime}\dots q^{\prime})}\in T^{*}\) and \((r,q^{\prime}-r-1)\) holds for every \(r\in[p\dots q)\).
Similar to Section 2.2 we define a weight function \(w\) on \(\bar{\Sigma}:=\Sigma\cup\{\varepsilon\}\). We call this weight function a _skewmetric_ if it satisfies the triangle inequality, that is, \(w(a,b)+w(b,c)\geq w(a,c)\) holds for every \(a,b,c\in\bar{\Sigma}\) and skew-symmetry, that is, \(w(a,b)=w(\overline{b},\overline{a})\) holds for every \(a,b\in\bar{\Sigma}\). In the rest of this section we assume the weight function \(w\) to be skewmetric unless stated otherwise.
**Definition 4.3**.: The weighted Dyck edit distance of a string \(X\in\Sigma^{*}\) with respect to a weight function \(w\) is the minimum edit distance \(\mathsf{ed}^{w}(X,Y)\) between \(X\) and a string \(Y\in\mathsf{Dyck}(\Sigma)\). Formally,
\[\mathsf{d}\mathsf{yck}^{w}(X)=\min_{Y\in\mathsf{Dyck}(\Sigma)}\mathsf{ed}^{w} (X,Y).\]
For \(k\in\mathbb{R}_{\geq 0}\), we also denote
\[\mathsf{d}\mathsf{yck}^{w}_{\leq k}(X)=\begin{cases}\mathsf{d}\mathsf{yck}^{w }(X)&\text{if }\mathsf{d}\mathsf{yck}^{w}(X)\leq k,\\ \infty&\text{otherwise.}\end{cases}\]
The _cost_ of an alignment \(\mathcal{M}\in\mathsf{M}(X)\) with respect to a _weight function_\(w\), denoted \(\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}}(X)\), is defined as
\[\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}}(X)=\sum_{(i,j)\in\mathcal{M}}\mathsf{ d}\mathsf{yck}^{w}(X[i]X[j])+\sum_{i\in[0..\,|X|):\mathcal{M}(i)=\bot}\mathsf{d} \mathsf{yck}^{w}(X[i]).\]
**Fact 4.4**.: _For every string \(X\) and weight function \(w\), we have \(\mathsf{d}\mathsf{yck}^{w}(X)=\min_{\mathcal{M}\in\mathsf{M}(X)}\mathsf{d} \mathsf{yck}^{w}_{\mathcal{M}}(X)\)._
Proof.: We first show by induction on \(|X|\) that \(\mathsf{d}\mathsf{yck}^{w}(X)\leq\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}}(X)\) holds for every \(\mathcal{M}\in\mathsf{M}(X)\). The claim is trivial if \(|X|=0\). If \(\mathcal{M}(0)=\bot\), then we construct \(\mathcal{M}^{\prime}:=\{(i-1,j-1):(i,j)\in\mathcal{M}\}\) and \(X^{\prime}:=X[1\dots|X|)\). By the inductive assumption, \(\mathsf{d}\mathsf{yck}^{w}(X)\leq\mathsf{d}\mathsf{yck}^{w}(X^{\prime})+ \mathsf{d}\mathsf{yck}^{w}(X[0])\leq\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}^{ \prime}}(X^{\prime})+\mathsf{d}\mathsf{yck}^{w}(X[0])=\mathsf{d}\mathsf{yck}^{ w}_{\mathcal{M}}(X)\). If \(\mathcal{M}(0)=|X|-1\), then we construct \(\mathcal{M}^{\prime}:=\{(i-1,j-1):(i,j)\in\mathcal{M}\setminus\{(0,|X|-1)\}\) and \(X^{\prime}=[1\dots|X|-1)\). By the inductive assumption, \(\mathsf{d}\mathsf{yck}^{w}(X)\leq\mathsf{d}\mathsf{yck}^{w}(X^{\prime})+ \mathsf{d}\mathsf{yck}^{w}(X^{\prime})+\mathsf{d}\mathsf{yck}^{w}(X[0]X[|X|-1 )=\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}}(X)\). Otherwise, we have \((0,p)\in M\) for some \(p\in[1\dots|X|-1)\). In this case, we construct \(\mathcal{M}^{\prime}:=\{(i,j)\in\mathcal{M}:j\leq p\}\) and \(X^{\prime}:=X[0\dots p]\), as well as \(\mathcal{M}^{\prime\prime}:=\{(i-p-1,j-p-1):(i,j)\in\mathcal{M}\text{ and }i>p\}\) and \(X^{\prime\prime}:=X[p+1\dots|X|)\). By the inductive assumption, \(\mathsf{d}\mathsf{yck}^{w}(X)\leq\mathsf{d}\mathsf{yck}^{w}(X^{\prime})+ \mathsf{d}\mathsf{yck}^{w}(X^{\prime\prime})\leq\mathsf{d}\mathsf{yck}^{w}_{ \mathcal{M}^{\prime}}(X^{\prime})+\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}^{ \prime\prime}}(X^{\prime\prime})=\mathsf{d}\mathsf{yck}^{w}_{\mathcal{M}}(X)\); here, the last equality follows from the fact that \(|\mathcal{M}|=|\mathcal{M}^{\prime}|+|\mathcal{M}^{\prime\prime}|\): any \((i,j)\in M\) with \(i\leq p\) and \(j>p\) would violate the non-crossing property of \(\mathcal{M}\).
Next, we show by induction on \(|X|\) that there exists \(\mathcal{M}\in\mathsf{M}(X)\) such that \(\mathsf{dyck}^{w}_{\mathcal{M}}(X)\leq\mathsf{dyck}^{w}(X)\); again, the claim is trivial for \(|X|=0\). Let us fix \(Y\in\mathsf{Dyck}(\Sigma)\) and \(\mathcal{A}\in\mathsf{A}(X,Y)\) such that \(\mathsf{dyck}^{w}(X)=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)\). If \(\mathcal{A}\) deletes \(X[0]\), we consider \(X^{\prime}:=X[1\mathinner{\ldots}|X|)\). The inductive assumption yields a matching \(\mathcal{M}^{\prime}\) such that \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{dyck}^{w}(X^ {\prime})\). In this case, we set \(\mathcal{M}:=\{(i+1,j+1):(i,j)\in\mathcal{M}^{\prime}\}\) so that \(\mathsf{dyck}^{w}_{\mathcal{M}}(X)=\mathsf{dyck}^{w}(X[0])+\mathsf{dyck}^{w}_ {\mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{ed}^{w}(X[0],\varepsilon)+ \mathsf{dyck}^{w}(X^{\prime})\leq w(X[0],\varepsilon)+\mathsf{ed}^{w}(X^{ \prime},Y^{\prime})\leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)=\mathsf{dyck}^{w}(X)\). The case when \(\mathcal{A}\) deletes \(X[|X|-1]\) is symmetric, so we may assume that \(\mathcal{A}\) deletes neither \(X[0]\) nor \(X[|X|-1]\); in particular, \(Y\neq\varepsilon\).
Suppose that \(Y=Y^{\prime}\cdot Y^{\prime\prime}\) for some non-empty strings \(Y^{\prime},Y^{\prime\prime}\in\mathsf{Dyck}(\Sigma)\). This yields a decomposition \(X=X^{\prime}\cdot X^{\prime\prime}\) such that \(\mathsf{ed}^{w}(X,Y)=\mathsf{ed}^{w}(X^{\prime},Y^{\prime})+\mathsf{ed}^{w}(X ^{\prime\prime},Y^{\prime\prime})\). Moreover, the optimality of \(Y\) guarantees that \(X^{\prime}\) and \(X^{\prime\prime}\) are both non-empty. The inductive assumption yields matchings \(\mathcal{M}^{\prime},\mathcal{M}^{\prime\prime}\) such that \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{dyck}^{w}(X ^{\prime})\) and \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X^{\prime\prime})\leq\mathsf{dyck}^ {w}(X^{\prime\prime})\). In this case, we set \(\mathcal{M}:=\mathcal{M}^{\prime}\cup\{(i+|X^{\prime}|,j+|X^{\prime}|):(i,j) \in\mathcal{M}^{\prime\prime}\}\) so that \(\mathsf{dyck}^{w}_{\mathcal{M}}(X)=\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}( X^{\prime})+\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X^{\prime\prime})\leq \mathsf{dyck}^{w}(X^{\prime})+\mathsf{dyck}^{w}(X^{\prime\prime})\leq\mathsf{ ed}^{w}(X^{\prime},Y^{\prime})+\mathsf{ed}^{w}(X^{\prime\prime},Y^{\prime\prime})\leq\mathsf{ed}^{w} _{\mathcal{A}}(X,Y)=\mathsf{dyck}^{w}(X)\).
In the remaining case, we must have \(Y=aY^{\prime}\overline{a}\) for \(a\in T\) and \(Y^{\prime}\in\mathsf{Dyck}(\Sigma)\). Let us first suppose that \(\mathcal{A}\) aligns \(Y[0]=a\) with \(X[0]\) and \(Y[|Y|-1]=\overline{a}\) with \(X[|X|-1]\). In this case, \(\mathcal{A}\) aligns \(Y[1\mathinner{\ldots}|Y|-1)=Y^{\prime}\) with \(X^{\prime}:=X[1\mathinner{\ldots}|X|-1)\). The inductive assumption yields a matching \(\mathcal{M}^{\prime}\) such that \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{dyck}^{w}(X ^{\prime})\). In this case, we set \(\mathcal{M}:=\{(0,|X|-1)\}\cup\{(i+1,j+1):(i,j)\in\mathcal{M}^{\prime}\}\) so that \(\mathsf{dyck}^{w}_{\mathcal{M}}(X)=\mathsf{dyck}^{w}(X[0]X[|X|-1])+\mathsf{dyck }^{w}_{\mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{ed}^{w}(X[0]X[|X|-1],a \overline{a})+\mathsf{dyck}^{w}(X^{\prime})\leq w(X[0],a)+w(X[|X|-1],\overline{a })+\mathsf{ed}^{w}(X^{\prime},Y^{\prime})\leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y) =\mathsf{dyck}^{w}(X)\). Next, suppose that \(\mathcal{A}\) aligns \(Y[0]=a\) with \(X[0]\) but inserts \(Y[|Y|-1]=\overline{a}\). In this case, \(\mathcal{A}\) aligns \(Y[1\mathinner{\ldots}|Y|-1)=Y^{\prime}\) with \(X^{\prime}:=X[1\mathinner{\ldots}|X|)\). The inductive assumption yields a matching \(\mathcal{M}^{\prime}\) such that \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{dyck}^{w}(X ^{\prime})\). In this case, we set \(\mathcal{M}:=\{(i+1,j+1):(i,j)\in\mathcal{M}^{\prime}\}\) so that \(\mathsf{dyck}^{w}_{\mathcal{M}}(X)=\mathsf{dyck}^{w}(X[0])+\mathsf{dyck}^{w}_{ \mathcal{M}^{\prime}}(X^{\prime})\leq\mathsf{ed}^{w}(X[0],a\overline{a})+ \mathsf{dyck}^{w}(X^{\prime})\leq w(X[0],a)+w(\varepsilon,\overline{a})+ \mathsf{ed}^{w}(X^{\prime},Y^{\prime})\leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)= \mathsf{dyck}^{w}(X)\). The case when \(\mathcal{A}\) inserts \(Y[0]=a\) and aligns \(Y[|Y|-1]=\overline{a}\) with \(X[|X|-1]\) is symmetric. The case when \(\mathcal{A}\) inserts both \(Y[0]=a\) and \(Y[|Y|-1]=\overline{a}\) is impossible by optimality of \(Y\). Finally, we note that, since \(\mathcal{A}\) deletes neither \(X[0]\) nor \(X[|X|-1]\), the alignment \(\mathcal{A}\) cannot align \(Y[0]\) to any character other than \(X[0]\) and \(Y[|Y|-1]\) to any character other than \(Y[|Y|-1]\). Thus, the case analysis above is complete.
**Claim 4.5**.: _For every \(x\in\Sigma\) and skewmetric weight function \(w\), \(\mathsf{dyck}^{w}(x)=w(x,\epsilon)=w(\epsilon,\overline{x})\)._
Proof.: We consider the following three different cases.
**Case 1:**\(x\) is deleted. In this case \(\mathsf{dyck}^{w}(x)=w(x,\epsilon)\).
**Case 2:**\(\overline{x}\) is inserted after \(x\) if \(x\in T\) and before \(x\) if \(x\in\overline{T}\). In this case \(\mathsf{dyck}^{w}(x)=w(\epsilon,\overline{x})=w(x,\epsilon)\). The last equality follows as \(w\) is skew-symmetric. Thus an insertion can be replaced with a deletion. From now on wards we assume that only allowed edits are deletion and substitutions.
**Case 3:**\(x\) is substituted by some \(y\in\Sigma\). Here we also need to insert \(\overline{y}\). Thus \(\mathsf{dyck}^{w}(x)=w(x,y)+w(\epsilon,\overline{y})=w(x,y)+w(y,\epsilon)\geq w(x,\epsilon)\). The second equality follows as \(w\) is skew-symmetric and the last inequality follows as \(w\) obeys triangle inequality. Trivially \(\mathsf{dyck}^{w}(x)\leq w(x,\epsilon)\). Thus the claim follows.
**Claim 4.6**.: _For every \(x,y\in\bar{\Sigma}\) and skewmetric weight function \(w\), \(\mathsf{dyck}^{w}(xy)=\min_{z\in T\cup\{\epsilon\}}w(x,z)+w(y,\overline{z})\)._
Proof.: Let \(z\in T\cup\{\epsilon\}\) minimizes \(w(x,z)+w(y,\overline{z})\). It is straight forward to argue that \(\mathsf{dyck}^{w}(xy)\leq w(x,z)+w(y,\overline{z})\) as \(x,y\) can be substituted by \(z,\overline{z}\) respectively. Next we argue the converse. Following Claim 4.5 we assume the only allowed edits are deletions and substitutions.
**Case 1:** Both \(x\) and \(y\) are deleted. Here \(\mathsf{dyck}^{w}(xy)\geq w(x,\epsilon)+w(y,\epsilon)\). The claim follows as \(\epsilon\in T\cup\{\epsilon\}\) and \(\overline{\epsilon}=\epsilon\).
**Case 2:**\(x\) is substituted by \(\overline{y}\). Here \(\mathsf{dyck}^{w}(xy)\geq w(x,\overline{y})=w(x,\overline{y})+w(y,y)\). Thus the claim follows as \(\overline{y}\in T\).
**Case 3:**\(y\) is substituted by \(\overline{x}\). Here \(\mathsf{dyck}^{w}(xy)\geq w(y,\overline{x})=w(x,x)+w(y,\overline{x})\). Thus the claim follows as \(x\in T\).
**Case 3:**\(x\) is substituted by \(z\) and \(y\) is substituted by \(\overline{z}\). Here \(\mathsf{dyck}^{w}(xy)\geq w(x,z)+w(y,\overline{z})\). Thus the claim follows as \(z\in T\).
From now on wards we assume \(w\) to be skew-symmetric.
#### 4.2.1 Preprocessing.
Given the input string \(X\in\Sigma^{n}\), preprocess \(X\) as follows. As long as there are two neighboring indices \(i,i+1\) such that \(X[i+1]=\overline{X[i]}\) and \(X[i]\in T\) remove them. Let the resulting string be \(X^{\prime}\). We make the following claim.
**Claim 4.7**.: \(\mathsf{dyck}^{w}(X)=\mathsf{dyck}^{w}(X^{\prime})\)_._
Proof.: Let \(\mathcal{M}\) be an optimal alignment of \(X\). For contradiction assume for two consecutive indices \(i,i+1\), \(X[i+1]=\overline{X[i]}\), \(X[i]\in T\) but \((i,i+1)\notin\mathcal{M}\). Next depending on the matching indices of \(i,i+1\), we consider the following three cases.
**Case 1:** Let \((j,i),(i+1,k)\in\mathcal{M}\) where \(j\in[0\ldots i)\cup\{\bot\}\) and \(k\in(i+1\ldots|X|)\cup\{\bot\}\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(j,i),(i+1,k)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[j]X[i])+\mathsf{dyck}^{w}( X[i+1]X[k])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M} }(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[j]X[i])=w(X[j],a)+w(X[i],\overline{a})\) and \(\mathsf{dyck}^{w}(X[i+1]X[k])=w(X[i+1],b)+w(X[k],\overline{b})\). Thus,
\[\mathsf{dyck}^{w}(X[j]X[i])+\mathsf{dyck}^{w}(X[i+1]X[k]) =w(X[j],a)+w(X[i],\overline{a})+w(X[i+1],b)+w(X[k],\overline{b})\] \[=w(X[j],a)+w(a,\overline{X[i]})+w(X[i+1],b)+w(X[k],\overline{b})\] \[\geq w(X[j],\overline{X[i]})+w(X[i+1],b)+w(X[k],\overline{b})\] \[\geq\mathsf{dyck}^{w}(X[j],X[k])\]
The second equality follows as \(w\) is skew-symmetric; thus \(w(X[i],\overline{a})=w(a,\overline{X[i]})\). The third and fourth inequality follows as \(w\) follows triangle inequality and \(\overline{X[i]}=X[i+1]\). The last inequality follows from Claim 4.6.
**Case 2:** Let \((k,i),(j,i+1)\in\mathcal{M}\) where \(k,j\in[0\ldots i)\cup\{\bot\}\) and \(j<k\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(k,i),(j,i+1)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[k]X[i])+\mathsf{dyck}^{w}( X[j]X[i+1])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M} }(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[k]X[i])=w(X[k],a)+w(X[i],\overline{a})\) and \(\mathsf{dyck}^{w}(X[j]X[i+1])=\mathsf{dyck}^{w}(X[i+1],\overline{a})\).
**Case 3:** Let \((k,i),(j,i+1)\in\mathcal{M}\) where \(k,j\in[0\ldots i)\cup\{\bot\}\) and \(j<k\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(k,i),(j,i+1)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[k]X[i])+\mathsf{dyck}^{w}( X[j]X[i+1])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M} }(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[k]X[i])=w(X[k],a)+w(X[i],\overline{a})\) and \(\mathsf{dyck}^{w}(X[j]X[i+1])=\mathsf{dyck}^{w}(X[i+1],\overline{a})\).
**Case 4:** Let \((k,i),(j,i+1)\in\mathcal{M}\) where \(k,j\in[0\ldots i)\cup\{\bot\}\) and \(j<k\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(k,i),(j,i+1)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[k]X[i])+\mathsf{dyck}^{w}( X[j]X[i+1])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M} }(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[k]X[i])=w(X[k],a)+w(X[i],\overline{a})\) and \(\mathsf{dyck}^{w}(X[j]X[i+1])=\mathsf{dyck}^{w}(X[i+1],\overline{a})\).
**Case 5:** Let \((k,i),(j,i+1)\in\mathcal{M}\) where \(k,j\in[0\ldots i)\cup\{\bot\}\) and \(j<k\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(k,i),(j,i+1)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[k]X[i])+\mathsf{dyck}^{w}( X[j]X[i+1])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M} }(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[k]X[i])=w(X[k],a)+w(X[i],\overline{a})\) and \(\mathsf{dyck}^{w}(X[j]X[i+1])=\mathsf{dyck}^{w}(X[i+1],\overline{a})\).
**Case 6:** Let \((k,i),(j,i+1)\in\mathcal{M}\) where \(k,j\in[0\ldots i)\cup\{\bot\}\) and \(j<k\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(k,i),(j,i+1)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[k]X[i])+\mathsf{dyck}^{w}( X[j]X[i+1])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M} }(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[k]X[i])=w(X[k],a)+w(X[i],\overline{a})\) and \(\mathsf{dyck}^{w}(X[j]X[i+1])=\mathsf{dyck}^{w}(X[i+1],\overline{a})\).
\(w(X[j],b)+w(X[i+1],\overline{b})\). Thus,
\[\mathsf{dyck}^{w}(X[k]X[i])+\mathsf{dyck}^{w}(X[j]X[i+1]) =w(X[k],a)+w(X[i],\overline{a})+w(X[j],b)+w(X[i+1],\overline{b})\] \[=w(X[k],a)+w(a,\overline{X[i]})+w(X[j],b)+w(X[i+1],\overline{b})\] \[\geq w(X[k],\overline{X[i]})+w(X[i+1],\overline{b})+w(X[j],b)\] \[\geq w(X[k],\overline{b})+w(X[j],b)\] \[\geq\mathsf{dyck}^{w}(X[j],X[k])\]
**Case 3:** Let \((i,k),(i+1,j)\in\mathcal{M}\) where \(k,j\in(i+1\ldots|X|)\cup\{\bot\}\) and \(j<k\). In this case we create another alignment \(\mathcal{M}^{\prime}=\mathcal{M}\setminus\{(i,k),(i+1,j)\}\cup\{(i,i+1),(j,k)\}\). We argue \(\mathsf{dyck}^{w}(X[j]X[k])\leq\mathsf{dyck}^{w}(X[i]X[k])+\mathsf{dyck}^{w}( X[i+1]X[j])\), thus proving \(\mathsf{dyck}^{w}_{\mathcal{M}^{\prime}}(X)\leq\mathsf{dyck}^{w}_{\mathcal{M}}(X)\). Following Claim 4.6, let \(a,b\in T\cup\{\epsilon\}\) be such that \(\mathsf{dyck}^{w}(X[i]X[k])=w(X[i],a)+w(X[k],\overline{a})\) and \(\mathsf{dyck}^{w}(X[i+1]X[j])=w(X[i+1],b)+w(X[j],\overline{b})\). Thus,
\[\mathsf{dyck}^{w}(X[i]X[k])+\mathsf{dyck}^{w}(X[i+1]X[j]) =w(X[i],a)+w(X[k],\overline{a})+w(X[i+1],b)+w(X[j],\overline{b})\] \[=w(X[i],a)+w(X[k],\overline{a})+w(X[j],\overline{b})+w(\overline{ b},\overline{X[i+1]})\] \[\geq w(X[i],a)+w(X[k],\overline{a})+w(X[j],\overline{X[i+1]})\] \[\geq w(X[j],a)+w(X[k],\overline{a})\] \[\geq\mathsf{dyck}^{w}(X[j],X[k])\]
The preprocessing can be done in time \(O(n)\). Also, we can assume that in the preprocessed string no two neighbouring symbols can be aligned. Following this and Claim 35 from [2], we can make the following claim.
**Claim 4.8**.: _Let \(X\in\Sigma^{n}\). There exists an algorithm that preprocesses \(X\) in \(O(n)\) time, and either declares \(\mathsf{dyck}^{w}(X)>k\), or outputs a string \(X^{\prime}\) of length at most \(n\) such that \(\mathsf{dyck}^{w}(X)=\mathsf{dyck}^{w}(X^{\prime})\) and \(X^{\prime}\) has at most \(2k\) valleys._
Thus from now on wards we assume \(X\) to be preprocessed and has at most \(2k\) valleys.
### Periodicity Reduction
**Definition 4.9**.: For \(k\in\mathbb{Z}_{\geq 0}\) a fragments \(X[a\mathinner{\ldotp}b)\) and \(X[c\mathinner{\ldotp}d)\) of a string \(X\) are _\(k\)-synchronized_ if \(X[a\mathinner{\ldotp}b)\in T^{*}\), \(X[c\mathinner{\ldotp}d)\in\overline{T}^{*}\), \(b-a=d-c\), \(b\leq c\), and \(H(b)+H(c)-2\min_{m\in[b\mathinner{\ldotp}c]}H(m)\leq 2k\).
Note that \(X[a\mathinner{\ldotp}b)\) and \(X[c\mathinner{\ldotp}d)\) are \(0\)-synchronized if and only if \((a,b,c,d)\) is a trapezoid.
**Definition 4.10**.: For \(k\in\mathbb{Z}_{\geq 0}\) and a skewmetric weight function \(w\), strings \(P,P^{\prime}\in T^{*}\) are called \(\mathsf{dyck}^{w}_{\leq k}\)-_equivalent_ if
\[\mathsf{dyck}^{w}_{\leq k}(X)=\mathsf{dyck}^{w}_{\leq k}(X[0\mathinner{\ldotp} a)\cdot P^{\prime}\cdot X[b\mathinner{\ldotp}c)\cdot\overline{P^{\prime}}\cdot X[d \mathinner{\ldotp}|))\]
holds for every string \(X\) with \(k\)-synchronized fragments \(X[a\mathinner{\ldotp}b)=P\) and \(X[c\mathinner{\ldotp}d)=\overline{P}\).
**Fact 4.11** (Fact 36, [1]).: _Let \(\mathcal{M}\) be an alignment such that \(\mathsf{dyck}^{w}_{\mathcal{M}}(X)\leq k\). If \(X[a\mathinner{\ldotp}b)\simeq_{\mathcal{M}}\overline{X[c\mathinner{\ldotp}d)}\), then the fragments \(X[a\mathinner{\ldotp}b)\) and \(X(c\mathinner{\ldotp}d]\) are \(k\)-synchronized._
**Fact 4.12**.: _Consider a string \(X\) and an alignment \(\mathcal{M}\in\mathsf{M}(X)\) such that \(\mathsf{dyck}_{\mathcal{M}}(X)\leq k\) for some \(k\in\mathbb{Z}_{\geq 0}\). Moreover, let \(X[a\mathinner{\ldot}b)\) and \(X[c\mathinner{\ldot}d)\) be \(k\)-synchronized fragments of length \(\ell>6k\). Then, there exist \(k\)-synchronized fragments \(X[a^{\prime}\mathinner{\ldot}b^{\prime})\) and \(X[c^{\prime}\mathinner{\ldot}d^{\prime})\) of length \(\ell^{\prime}\geq\frac{\ell-6k}{k+1}\), such that \(X[a^{\prime}\mathinner{\ldot}b^{\prime})\simeq_{\mathcal{M}}\overline{X[c^{ \prime}\mathinner{\ldot}d^{\prime})}\) and \(a\leq a^{\prime}\leq b^{\prime}\leq b\leq c\leq c^{\prime}\leq d^{\prime}\leq d\). Furthermore, we then have \(|(a+d)-(a^{\prime}+d^{\prime})|\leq 4k\)._
Proof.: Since \(\mathcal{M}\) is non-crossing, it is disjoint with \([a\mathinner{\ldot}b)\times[d\mathinner{\ldot}|X|)\) or \([0\mathinner{\ldot}a)\times[c\mathinner{\ldot}d)\). By symmetry (up to the reverse complement), let us assume that \(\mathcal{M}\) is disjoint with \([a\mathinner{\ldot}b)\times[d\mathinner{\ldot}|X|)\). Consider \(x\in[a\mathinner{\ldot}b-4k)\) such that \(X[x]\simeq_{\mathcal{M}}\overline{X[y]}\). The assumption implies that \(y<d\). Moreover, \(b-x=H(b)-H(x)>4k\), so \(H(x)<H(b)-4k\). At the same time, \(|H(y+1)-H(x)|\leq 2k\), so \(H(y+1)<H(b)-2k\). Since \(X[a\mathinner{\ldot}b)\) and \(X[c\mathinner{\ldot}d)\) are \(k\)-synchronized, this means that \(y+1\notin[b\mathinner{\ldot}c]\), i.e., \(y\in[c\mathinner{\ldot}d)\). Consider the fragment \(X[a\mathinner{\ldot}b-4k)\) and the minimal subfragment of \(X[c\mathinner{\ldot}d)\) containing positions that \(\mathcal{M}\) matches perfectly to positions \(X[x]\) with \(x\in[a\mathinner{\ldot}b-4k)\). These two fragments contain at most \(2k\) positions that are deleted or matched imperfectly. The remaining positions constitute a common subsequence of \(X[a\mathinner{\ldot}b-4k)\) and \(X[c\mathinner{\ldot}d)\); this subsequence can be interrupted at most \(k\) times, so there is a contiguous subsequence \(X[a^{\prime}\mathinner{\ldot}b^{\prime})\simeq_{\mathcal{M}}X[c^{\prime} \mathinner{\ldot}d^{\prime})\) of length at least \(\frac{\ell-6k}{k+1}\). Due to \(|H(a)-H(d)|\leq 2k\) and \(|H(a^{\prime})-H(d^{\prime})|\leq 2k\), we have \(4k\geq|H(a)-H(d)-H(d^{\prime})+H(d^{\prime})|=|a-a^{\prime}+d-d^{\prime}|\).
**Lemma 4.13**.: _Let \(k\in\mathbb{Z}_{+}\), let \(Q\in T^{*}\) be a string, and let \(e,e^{\prime}\in\mathbb{Z}_{\geq 8k}\). Then \(Q^{e}\) and \(Q^{e^{\prime}}\) are \(\mathsf{dyck}_{\leq k}^{w}\)-equivalent for every skewmetric weight function \(w\)._
Proof.: We assume without loss of generality that \(Q\) is primitive. (If \(Q=R^{m}\) for \(m\in\mathbb{Z}_{\geq 2}\), then \(Q^{e}=R^{me}\) and \(Q^{e^{\prime}}=R^{me^{\prime}}\) can be interpreted as powers of \(R\) rather than powers of \(Q\).) Let \(q=|Q|\). Consider a string \(X\) and positions \(p_{T}\), \(p_{\overline{T}}\) such that \(Q^{e}=X[p_{T}\mathinner{\ldot}p_{T}+e\cdot q)\) and \(\overline{Q^{e}}=X(p_{\overline{T}}-e\cdot q\mathinner{\ldot}p_{\overline{T}})\) are \(k\)-synchronized fragments. Denote \(X[0\mathinner{\ldot}p_{T})\cdot Q^{e^{\prime}}\cdot X[p_{T}+e\cdot q \mathinner{\ldot}p_{\overline{T}}-e\cdot q]\cdot\overline{Q^{e^{\prime}}} \cdot X(p_{\overline{T}}\mathinner{\ldot}|X|)\). Moreover, let \(\mathcal{M}\in\mathsf{M}(X)\) be an alignment such that \(\mathsf{dyck}^{w}(X,Y)=\mathsf{dyck}_{\mathcal{M}}^{w}(X,Y)\leq k\).
**Claim 4.14**.: _There exist \(i_{T},i_{\overline{T}}\in[0\mathinner{\ldot}7k]\) such that_
\[X[p_{T}+i_{T}\cdot q\mathinner{\ldot}p_{T}+(i_{T}+1)\cdot q)\simeq_{\mathcal{M }}\overline{X(p_{\overline{T}}-(i_{\overline{T}}+1)\cdot q\mathinner{\ldot}p_{ \overline{T}}-i_{\overline{T}}\cdot q]}.\]
Proof.: Consider the \(8k\) occurrences of \(Q\) starting at positions \(p_{T}+i\cdot q\) for \(i\in[0\mathinner{\ldot}7k]\) (let this fragment be \(P\)) and \(8k\) occurrences of \(\overline{Q}\) ending at positions \(p_{\overline{T}}-i\cdot q\) for \(i\in[0\mathinner{\ldot}7k]\) (let this fragment be \(\overline{P}\)). Note \(P,\overline{P}\) are also are \(k\)-synchronized fragments. Thus following Fact 4.12, there exists at least one occurrence of \(Q\) in \(P\), starting at index \(\ell\) such that \(\mathcal{M}\) matches it exactly with a fragment in \(\overline{P}\). We can thus define \(i_{T}\in[0\mathinner{\ldot}7k]\) so that \(\mathcal{M}\) matches \(X[p_{T}+i_{T}\cdot q\mathinner{\ldot}p_{T}+(i_{T}+1)\cdot q)\) exactly to some fragment \(X(s_{\overline{T}}-q\mathinner{\ldot}s_{\overline{T}})\in\overline{P}\). By definition of \(\overline{P}\), we have \(s_{\overline{T}}\geq p_{\overline{T}}-7kq\). Furthermore, since \(Q\) is primitive (i.e., distinct from all its non-trivial cyclic rotations), we conclude that \(s_{\overline{T}}=p_{\overline{T}}-i_{\overline{T}}\cdot q\) for some \(i_{\overline{T}}\in[0\mathinner{\ldot}7k]\).
Now, if \(Q^{e}=X[p_{T}\mathinner{\ldot}p_{T}+e\cdot q)\) is replaced with \(Q^{e^{\prime}}\) and \(\overline{Q^{e}}=X(p_{\overline{T}}-e\cdot q\mathinner{\ldot}p_{\overline{T}}]\) is replaced with \(\overline{Q^{e^{\prime}}}\) for \(e^{\prime}\geq e-1\), we can interpret this as replacing \(Q=X[p_{T}+i_{T}\cdot q\mathinner{\ldot}p_{T}+(i_{T}+1)\cdot q)\) with \(Q^{1+e^{\prime}-e}\) and \(\overline{Q}=X(p_{\overline{T}}-(i_{\overline{T}}+1)\cdot q\mathinner{\ldot}p_{ \overline{T}}-i_{\overline{T}}\cdot q]\) with \(\overline{Q^{1+e^{\prime}-e}}\). By Claim 4.14, \(\mathcal{M}\) can be trivially adapted without modifying its cost, and hence \(\mathsf{dyck}^{w}(X^{\prime})\leq\mathsf{dyck}_{\mathcal{M}}^{w}(X)=\mathsf{ dyck}^{w}(X)\). If \(e^{\prime}<e-1\), we repeat the above argument to decrement the exponent \(e\) one step at a time, still concluding that \(\mathsf{dyck}^{w}(X^{\prime})\leq\mathsf{dyck}^{w}(X)\). In either case, the converse inequality follows by symmetry between \((X,e)\) and \((X^{\prime},e^{\prime})\)
We say that a string \(P\in T^{*}\) avoids \(k\)-periodicity if it does not contain any substring \(Q^{8k+1}\) with \(|Q|\in[1\mathinner{\ldotp\ldotp\ldotp}4k]\).
**Lemma 4.15**.: _Let \(k\in\mathbb{Z}_{+}\) and let \(P,P^{\prime}\in T^{*}\) be strings of lengths at least \(156k^{3}\) such that \(P[0\mathinner{\ldotp\ldotp}78k^{3})=P^{\prime}[0\mathinner{\ldotp\ldotp}78k^{3})\) and \(P[|P|-78k^{3}\mathinner{\ldotp\ldotp}1|)=P^{\prime}[|P^{\prime}|-78k^{3}\mathinner {\ldotp\ldotp}1|)\) avoid \(k\)-periodicity. Then, \(P\) and \(P^{\prime}\) are \(\mathsf{dyck}^{w}_{\leq k}\)-equivalent for every skewmetric weight function \(w\)._
Proof.: Consider a string \(X\) and positions \(p_{T}\), \(p_{\overline{T}}\) such that \(P=X[p_{T}\mathinner{\ldotp\ldotp}p_{T}+|P|)\), \(\overline{P}=X(p_{\overline{T}}-|P|\mathinner{\ldotp_{\overline{T}}}]\) are \(k\)-synchronized fragments. Denote \(X^{\prime}=X[0\mathinner{\ldotp\ldotp}p_{T})\cdot P^{\prime}\cdot X[p_{T}+|P| \mathinner{\ldotp\ldotp}-|P|]\cdot\overline{P^{\prime}}\cdot X(p_{\overline{T} }\mathinner{\ldotp\ldotp}1|)\). Moreover, let \(\mathcal{M}\in\mathsf{M}(X)\) be an alignment such that \(\mathsf{dyck}^{w}(X)=\mathsf{dyck}^{w}_{\mathcal{M}}(X)\leq k\).
**Claim 4.16**.: _There exist \(d,e\in[0\mathinner{\ldotp\ldotp}78k^{3}]\) such that \((p_{T}+d,p_{\overline{T}}-d)\in\mathcal{M}\) and \((p_{T}+|P|-e,p_{\overline{T}}-|P|+e)\in\mathcal{M}\)._
Proof.: By Fact 4.12, \(X[p_{T}\mathinner{\ldotp\ldotp}p_{T}+78k^{3})\) contains a fragment of length at least \(\frac{78k^{3}-6k}{k+1}\geq 36k^{2}\) that \(\mathcal{M}\) matches perfectly to a fragment of \(X(p_{\overline{T}}-78k^{3}\mathinner{\ldotp\ldotp}_{\overline{T}}]\) Thus, let \(R:=X[r_{T}\mathinner{\ldotp\ldotp}r_{T}+|R|)\) be a fragment of length at least \(36k^{2}\) contained in \(X[p_{T}\mathinner{\ldotp\ldotp}p_{T}+|P|)\) that \(\mathcal{M}\) matches perfectly to \(X[r_{\overline{T}}-|R|\mathinner{\ldotp\ldotp}r_{\overline{T}})=\overline{R}\). Moreover, let \(r_{\overline{T}}^{\prime}:=p_{T}+p_{\overline{T}}-r_{T}\). If \(r_{\overline{T}}=r_{\overline{T}}^{\prime}\), then the claim is satisfied for \(d=r_{T}-p_{T}=p_{\overline{T}}-r_{\overline{T}}\). Otherwise, both \(X[r_{\overline{T}}-|R|\mathinner{\ldotp\ldotp}r_{\overline{T}})\) and \(X[r_{\overline{T}}^{\prime}-|R|\mathinner{\ldotp\ldotp}r_{\overline{T}}^{ \prime})\) are occurrences of \(\overline{R}\) in \(X\). Moreover, \(0<|r_{\overline{T}}-r_{\overline{T}}^{\prime}|\leq|(p_{\overline{T}}-r_{ \overline{T}}^{\prime})-(p_{\overline{T}}-r_{\overline{T}})|\leq|(r_{T}-p_{T} )-(p_{\overline{T}}-r_{\overline{T}})|+|(r_{T}-p_{T})-(p_{\overline{T}}-r_{ \overline{T}}^{\prime})|\leq 2\mathsf{dyck}^{w}_{\mathcal{M}}(X)+2k\leq 4k\). Hence, \(\mathsf{per}(\overline{R})\leq|r_{\overline{T}}-r_{\overline{T}}^{\prime}|\leq 4k\) and \(\exp(\overline{R})\geq\frac{|\overline{R}|}{4k}\geq 9k\). Since \(X[r_{\overline{T}}^{\prime}-|\overline{R}|\mathinner{\ldotp\ldotp}r_{\overline{T}})\) is contained in \(X(p_{\overline{T}}-|P|\mathinner{\ldotp\ldotp}r_{\overline{T}}]=\overline{P[0 \mathinner{\ldotp\ldotp}78k^{3})}\), this contradicts the assumption that \(\overline{P[0\mathinner{\ldotp\ldotp}78k^{3})}\) and thus \(P[0\mathinner{\ldotp\ldotp}78k^{3})\) avoids \(k\)-periodicity.
The second part of the claim is proved analogously.
As \(X[p_{T}+d\mathinner{\ldotp\ldotp}p_{T}+|P|-e)\in T\) and \(X[p_{T}+d\mathinner{\ldotp\ldotp}r_{T}+|P|-e)=\overline{X(p_{\overline{T}}-|P| +e\mathinner{\ldotp\ldotp}r_{\overline{T}}-d]}\), the optimality of \(\mathcal{M}\) guarantees that \(X[p_{T}+d\mathinner{\ldotp\ldotp}r_{T}+|P|-e)\simeq_{\mathcal{M}}\overline{X(p_ {\overline{T}}-|P|+e\mathinner{\ldotp\ldotp}r_{\overline{T}}-d]}\). Hence, if \(P=X[p_{T}\mathinner{\ldotp\ldotp}r_{T}+|P|)=\overline{X(p_{\overline{T}}-|P| \mathinner{\ldotp\ldotp}r_{\overline{T}})}\) is replaced with \(P^{\prime}\), we can interpret this as \(P[d\mathinner{\ldotp\ldotp}p|-e)=X[p_{T}+d\mathinner{\ldotp\ldotp}r_{T}+|P|-e)= \overline{X(p_{\overline{T}}-|P|+e\mathinner{\ldotp\ldotp}r_{\overline{T}}-d]}\) with \(P^{\prime}[d\mathinner{\ldotp\ldotp}|-e)\). Since \(X[p_{T}+d\mathinner{\ldotp\ldotp}r_{T}+|P|-e)\simeq_{\mathcal{M}}\overline{X(p_ {\overline{T}}-|P|+e\mathinner{\ldotp\ldotp}r_{\overline{T}}-d]}\), the alignment \(\mathcal{M}\) can be trivially adapted without modifying its cost, and therefore \(\mathsf{dyck}^{w}(X^{\prime})\leq\mathsf{dyck}^{w}_{\mathcal{M}}(X)=\mathsf{dyck }^{w}(X)\). The converse inequality follows by symmetry between \((X,P)\) and \((X^{\prime},P^{\prime})\).
**Corollary 4.17**.: _Let \(k\in\mathbb{Z}_{+}\). For every string \(P\in T^{*}\), there exists a string of length at most \(156k^{3}\) that is \(\mathsf{dyck}^{w}_{\leq k}\)-equivalent to \(P\) for every skewmetric weight function \(w\)._
Proof.: We proceed by induction on \(|P|\) with the trivial base case of \(|P|\leq 156k^{3}\). If \(|P|\geq 156k^{3}\) and \(P\) avoids \(k\)-periodicity, then Lemma 4.15 implies that \(P\) is equivalent to a string \(P^{\prime}:=P[0\mathinner{\ldotp\ldotp}78k^{3})\cdot P[|P|-78k^{3}\mathinner{ \ldotp\ldotp}|)\) of length \(156k^{3}\). Thus, suppose that \(P\) contains a fragment \(P[i\mathinner{\ldotp\ldotp}j)=Q^{8k+1}\) and \(|Q|\in[1\mathinner{\ldotp\ldotp}4k]\). By Lemma 4.13, \(Q^{8k+1}\) is equivalent to \(Q^{8k}\), and thus \(P\) is equivalent to a string \(P^{\prime}:=P[0\mathinner{\ldotp\ldotp}i)\cdot P[i+|Q|\mathinner{\ldotp\ldotp}|]\). By the inductive assumption, \(P^{\prime}\) is equivalent to some string \(P^{\prime\prime}\) of length at most \(156k^{3}\), and, by transitivity of the considered equivalence, \(P\) is also equivalent to \(P^{\prime\prime}\).
### Algorithm
**Lemma 4.18**.: _There exists a linear-time algorithm that, given a string \(P\) and an integer \(k\in\mathbb{Z}_{+}\), constructs a string \(P^{\prime}\) of length at most \(156k^{3}\) that is \(\mathsf{dyck}^{w}_{\leq k}\)-equivalent to \(P\) for every skewmetric weight function \(w\). Moreover \(P^{\prime}\) avoids \(k\)-periodicity._
Proof.: We apply Algorithm 1 with \(e=8k\) and \(\mathcal{Q}\) consisting of all primitive strings in \(T^{*}\) of length in \([1\,\mathpunct
```
1DyckKernel\((X,k)\):
2if\(|X|\leq 630k^{4}\)thenreturn\((X)\);
3if\(\mathsf{dyck}(X)>k\)thenreturn\((a^{k+1})\) for some \(a\in\Sigma\);
4 Let \(\mathcal{M}\in\mathsf{M}(X)\) be a dyck language alignment satisfying \(\mathsf{dyck}_{\mathcal{M}}(X)\leq k\);
5\(X^{\prime},P,Q\leftarrow\varepsilon\);
6for\(i\gets 0\)to\(n-1\)do
7if\(\mathcal{M}(i)=\bot\)or\(\mathsf{dyck}(X[i]X[\mathcal{M}(i)])\cdot\mathsf{Order}(i,\mathcal{M}(i))=1\)or\(\mathsf{dyck}(X[\mathcal{M}(i)]X[i])\cdot\mathsf{Order}(\mathcal{M}(i),i)=1\)then
8\(X^{\prime}\gets X^{\prime}\cdot X[i]\)
9elseif\(X[i]\in T\)and\(\mathcal{M}(i+1)=\mathcal{M}(i)-1\)and\(\mathsf{dyck}(X[i+1]X[\mathcal{M}(i)-1])=0\)then
10\(P\gets P\cdot X[i]\)
11elseif\(X[i]\in T\)then
12\(P\gets P\cdot X[i]\);
13\(P\leftarrow\mathsf{DyckReduction}(P,k)\);
14\(X^{\prime}\gets X^{\prime}\cdot P\);
15\(P\leftarrow\varepsilon\);
16
17elseif\(X[i]\in\overline{T}\)and\(\mathcal{M}(i+1)=\mathcal{M}(i)-1\)and\(\mathsf{dyck}(X[\mathcal{M}(i)-1]X[i+1])=0\)then
18\(Q\gets Q\cdot X[i]\)
19else
20\(Q\gets Q\cdot X[i]\);
21\(Q\leftarrow\overline{\mathsf{DyckReduction}(\overline{Q},k)}\);
22\(X^{\prime}\gets X^{\prime}\cdot Q\);
23\(Q\leftarrow\varepsilon\);
24return\((X^{\prime})\)
```
**Algorithm 9**Construct strings \(X^{\prime}\) of length at most \(630k^{4}\) such that \(\mathsf{dyck}_{\leq k}^{w}(X)=\mathsf{dyck}_{\leq k}^{w}(X^{\prime})\)
is a starting index of some \(Q_{j}\in\overline{S}\). As otherwise \(\mathcal{M}(b)=\mathcal{M}(b-1)-1\) and \(X[b]\neq\overline{X}[\mathcal{M}(b)]\); this contradicts the maximality of \(P_{i}\). Further by construction for all \(k\in[a\mathinner{\ldot}b)\), \(X[\mathcal{M}(k)]\in Q_{j}\). Finally we argue \(\mathcal{M}(a)\) is a ending index of \(Q_{j}\). As otherwise \(\mathcal{M}(a-1)=\mathcal{M}(a)+1\) and this contradicts the fact that \(a\) is the starting index of some segment from \(S\). Similarly we can show for each \(Q_{j}\in\overline{S}\) there is a corresponding match \(P_{i}\in S\) and this provides an one to one correspondence between a pair of fragments from \(S\) and \(\overline{S}\). Thus for a fragment \(P_{i}\in S\) let \(\mathcal{M}(P_{i})\) represents the corresponding matched fragments from \(\overline{S}\) and we can represent \(S\cup\overline{S}=\cup_{i\in[\ell]}(P_{i},\mathcal{M}(P_{i}))\). Following Fact 4.11, \(P_{i},\mathcal{M}(P_{i})\) are \(k\)-synchronized. Next in the algorithm for each pair \((P_{i},\mathcal{M}(P_{i}))\) we add strings \(\mathtt{DyckReduction}(P_{i})\) representing \(P_{i}\) and \(\overline{\mathtt{DyckReduction}(P_{i})}\) (note \(\overline{\mathcal{M}(P_{i})}=P_{i}\)) representing \(\mathcal{M}(P_{i})\) to \(X^{\prime}\). Following the fact that every character that is not contained in a fragment from \(S\cup\overline{S}\) is edited by \(\mathcal{M}\) and thus copied to \(X^{\prime}\) directly, by applying Lemma 4.18 repeatedly for every pair \((P_{i},\mathcal{M}(P_{i}))\), we claim \(\mathtt{dyck}^{w}_{\leq k}(X)=\mathtt{dyck}^{w}_{\leq k}(X^{\prime})\).
Next, we show that the returned string is of length at most \(630k^{4}\). This is clear when the algorithm terminates at Line 2 or 3. Otherwise, we create a string \(X^{\prime}\), to which we directly copy the characters that are edited by \(\mathcal{M}\). However there are at most \(2k\) characters that \(\mathcal{M}\) deletes or substitutes. Next we identify maximal fragments \(P=X[i\mathinner{\ldot}j)\in T^{*}\) such that there is another fragment \(X(i^{\prime}\mathinner{\ldot}j^{\prime}]\in\overline{T}^{*}\) that is matched with \(P\) by \(\mathcal{M}\). The maximality of \(P\) and the preprocessing of \(X\) ensure that at least one of \(X[j]\) and \(X[i^{\prime}]\) is edited by \(\mathcal{M}\), We call these characters the boundary characters for \(P\). Notice for any two distinct fragments \(P,P^{\prime}\in T^{*}\), the the boundary characters are different and by construction \(P,P^{\prime}\) are disjoint. As there are at most \(2k\) characters that \(\mathcal{M}\) edits, we conclude there can be at most \(2k\) fragments over \(T^{*}\), that our algorithm can construct. For each such fragment following the reduction of Lemma 4.18, we add a substring of length \(156k^{3}\) to \(X^{\prime}\). Thus the total length of all the substrings is \(312k^{4}\). Similarly we can argue for the fragments \(Q\in\overline{T}^{*}\). Thus we can bound the total length of \(X^{\prime}\) by \(2\cdot 312k^{4}+2k<630k^{4}\).
It remains to analyze the complexity of our procedure. We use the algorithm [10] to check whether \(\mathtt{dyck}(X)\leq k\) and, if so, construct the alignment \(\mathcal{M}\). This costs \(\mathcal{O}(n+k^{5})\) time. Next we perform a single left to right scan of \(X\). Throughout, all the conditions in the _if/else_ statements can be checked in \(O(1)\) time. Moreover any character is passed to the \(\mathtt{DyckReduction}()\) routine at most twice. Thus following Lemma 4.18, given \(X\) and \(\mathcal{M}\), \(X^{\prime}\) can be constructed in linear time.
Proof of Theorem 1.3.: We first preprocess \(X\) in linear time following the steps described in Section 4.2.1 to build strings \(X^{\prime}\) such that \(\mathtt{dyck}^{w}_{\leq k}(X^{\prime})=\mathtt{dyck}^{w}_{\leq k}(X)\). Next we apply Theorem 4.19 on \(X^{\prime}\), to build strings \(X^{\prime\prime}\) of length \(\mathcal{O}(k^{4})\) such that \(\mathtt{dyck}^{w}_{\leq k}(X^{\prime\prime})=\mathtt{dyck}^{w}_{\leq k}(X^{ \prime})\). This takes time \(O(n+k^{5})\). Lastly if \(X=a^{k+1}\) (this can be checked in time \(O(k)\)) output the distance is \(>k\). Otherwise we compute \(\mathtt{dyck}^{w}_{\leq k}(X^{\prime\prime})\) using the dynamic program algorithm from [10] in time \(O(k^{12})\). Thus the total running time is \(O(n+k^{12})\).
## Appendix A Deferred Proofs from Section 2
In the following, we give the missing proofs of facts from Section 2.
**Fact 2.5**.: _If \(w\) is a quasimetric on \(\bar{\Sigma}\), then \(\mathtt{ed}^{w}\) is a quasimetric on \(\Sigma^{*}\). In this case, \(\mathtt{ed}^{w}(X,Y)\) can be equivalently defined as the minimum cost of a sequence of edits transforming \(X\) into \(Y\)._
Proof.: Consider arbitrary strings \(X,Y,Z\in\Sigma^{*}\) as well as alignments \(\mathcal{A}=(x_{t},y_{t})_{t=0}^{m}\in\mathsf{A}(X,Y)\) and \(\mathcal{B}=(\hat{y}_{t},\hat{z}_{t})_{t=0}^{\hat{m}}\in\mathsf{A}(Y,Z)\). We construct a _product alignment_\(\mathcal{A}\otimes\mathcal{B}\in\mathsf{A}(X,Z)\) such that \(\mathtt{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)\leq\mathtt{ed}^{w}_{ \mathcal{A}}(X,Y)+\mathtt{ed}^{w}_{\mathcal{B}}(Y,Z)\). Let us denote \(\mathcal{A}^{\prime}=(x_{t},y_{t})_{t=0}^{m-1}\) and \(\mathcal{B}^{\prime}=(\hat{y}_{t},\hat{z}_{t})_{t=0}^{\hat{m}-1}\), as well as \(X^{\prime}=X[0\mathinner{\ldot}|X|-1)\) if \(X\neq\varepsilon\), \(Y^{\prime}=Y[0\mathinner{\ldot}|Y|-1)\) if \(Y\neq\varepsilon\), and \(Z^{\prime}=Z[0\mathinner{\ldot}|2|-1)\) if \(Z\neq\varepsilon\).
We proceed by induction on \(m+\hat{m}\) and consider several cases based on how \(\mathcal{A}\) and \(\mathcal{B}\) handle the trailing characters of \(X\), \(Y\), and \(Z\).
1. \(m=\hat{m}=0\). In this case, \(X=Y=Z=\varepsilon\), and we define \(\mathcal{A}\otimes\mathcal{B}:=(0,0)\). Trivially, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=0=\mathsf{ed}^{w}_{ \mathcal{A}}(X,Y)+\mathsf{ed}^{w}_{\mathcal{B}}(Y,Z)\).
2. \((x_{m-1},y_{m-1})=(|X|-1,|Y|)\), that is, \(\mathcal{A}\) deletes \(X[|X|-1]\). In this case, \(\mathcal{A}^{\prime}\in\mathsf{A}(X^{\prime},Y)\), and we define \(\mathcal{A}\otimes\mathcal{B}:=(\mathcal{A}^{\prime}\otimes\mathcal{B})\odot( |X|,|Z|)\), where \(\odot\) denotes concatenation, so that \(\mathcal{A}\otimes\mathcal{B}\) deletes \(X[|X|-1]\). By the induction hypothesis, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=\mathsf{ed}^{w}_{ \mathcal{A}\otimes\mathcal{B}}(X^{\prime},Z)+w(X[|X|-1],\varepsilon)\leq \mathsf{ed}^{w}_{\mathcal{A}^{\prime}}(X^{\prime},Y)+\mathsf{ed}^{w}_{ \mathcal{B}}(Y,Z)+w(X[|X|-1],\varepsilon)=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)+ \mathsf{ed}^{w}_{\mathcal{B}}(Y,Z)\).
3. \((\hat{y}_{\hat{m}-1},\hat{z}_{\hat{m}-1})=(|Y|-1,|Z|)\), that is, \(\mathcal{B}\) inserts \(Z[|Z|-1]\). In this case, \(\mathcal{B}^{\prime}\in\mathsf{A}(Y,Z^{\prime})\), and we define \(\mathcal{A}\otimes\mathcal{B}:=(\mathcal{A}\otimes\mathcal{B}^{\prime})\odot( |X|,|Z|)\) so that \(\mathcal{A}\otimes\mathcal{B}\) inserts \(Z[|Z|-1]\). By the induction hypothesis, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=\mathsf{ed}^{w}_{ \mathcal{A}\otimes\mathcal{B}^{\prime}}(X,Z^{\prime})+w(\varepsilon,Z[|Z|-1]) \leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)+\mathsf{ed}^{w}_{\mathcal{B}^{\prime} }(Y,Z^{\prime})+w(\varepsilon,Z[|Z|-1])=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)+ \mathsf{ed}^{w}_{\mathcal{B}}(Y,Z)\).
4. \((x_{m-1},y_{m-1})=(|X|,|Y|-1)\) and \((\hat{y}_{\hat{m}-1},\hat{z}_{\hat{m}-1})=(|Y|-1,|Z|)\), that is, \(\mathcal{A}\) inserts \(Y[|Y|-1]\) and \(\mathcal{B}\) deletes \(Y[|Y|-1]\). In this case, \(\mathcal{A}^{\prime}\in\mathsf{A}(X,Y^{\prime})\) and \(\mathcal{B}^{\prime}\in\mathsf{A}(Y^{\prime},Z)\), and we define \(\mathcal{A}\otimes\mathcal{B}:=\mathcal{A}^{\prime}\otimes\mathcal{B}^{\prime}\). By the induction hypothesis, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=\mathsf{ed}^{w}_{ \mathcal{A}^{\prime}\otimes\mathcal{B}^{\prime}}(X,Z)\leq\mathsf{ed}^{w}_{ \mathcal{A}^{\prime}}(X,Y^{\prime})+\mathsf{ed}^{w}_{\mathcal{B}^{\prime}}(X, Y^{\prime})+\mathsf{ed}^{w}_{\mathcal{B}^{\prime}}(Y^{\prime},Z)\leq\mathsf{ed}^{w}_{ \mathcal{A}}(X,Y)+\mathsf{ed}^{w}_{\mathcal{B}}(Y,Z)\).
5. \((x_{m-1},y_{m-1})=(|X|,|Y|-1)\) and \((\hat{y}_{\hat{m}-1},\hat{z}_{\hat{m}-1})=(|Y|-1,|Z|-1)\), that is, \(\mathcal{A}\) inserts \(Y[|Y|-1]\) and \(\mathcal{B}\) aligns \(Y[|Y|-1]\) with \(Z[|Z|-1]\). In this case, \(\mathcal{A}^{\prime}\in\mathsf{A}(X,Y^{\prime})\) and \(\mathcal{B}^{\prime}\in\mathsf{A}(Y^{\prime},Z^{\prime})\), and we define \(\mathcal{A}\otimes\mathcal{B}:=(\mathcal{A}^{\prime}\otimes\mathcal{B}^{ \prime})\odot(|X|,|Z|)\) so that \(\mathcal{A}\otimes\mathcal{B}\) inserts \(Z[|Z|-1]\). By the induction hypothesis, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=\mathsf{ed}^{w}_{ \mathcal{A}^{\prime}\otimes\mathcal{B}^{\prime}}(X,Z^{\prime})+w(\varepsilon,Z[|Z |-1])\leq\mathsf{ed}^{w}_{\mathcal{A}}(X,Y^{\prime})+\mathsf{ed}^{w}_{ \mathcal{B}^{\prime}}(Y^{\prime},Z^{\prime})+w(\varepsilon,Y[|Y|-1])+w(Y[|Y|-1 ],Z[|Z|-1])=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)+\mathsf{ed}^{w}_{\mathcal{B}}(Y,Z)\).
6. \((x_{m-1},y_{m-1})=(|X|-1,|Y|-1)\) and \((\hat{y}_{\hat{m}-1},\hat{z}_{\hat{m}-1})=(|Y|-1,|Z|)\), that is, \(\mathcal{A}\) aligns \(X[|X|-1]\) with \(Y[|Y|-1]\) and \(\mathcal{B}\) deletes \(Y[|Y|-1]\). In this case, \(\mathcal{A}^{\prime}\in\mathsf{A}(X^{\prime},Y^{\prime})\) and \(\mathcal{B}^{\prime}\in\mathsf{A}(Y^{\prime},Z)\), and we define \(\mathcal{A}\otimes\mathcal{B}:=(\mathcal{A}^{\prime}\otimes\mathcal{B}^{ \prime})\odot(|X|,|Z|)\) so that \(\mathcal{A}\otimes\mathcal{B}\) deletes \(X[|X|-1]\). By the induction hypothesis, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=\mathsf{ed}^{w}_{ \mathcal{A}^{\prime}\otimes\mathcal{B}^{\prime}}(X^{\prime},Z)+w(X[|X|-1], \varepsilon)\leq\mathsf{ed}^{w}_{\mathcal{A}}(X^{\prime},Y^{\prime})+\mathsf{ed }^{w}_{\mathcal{B}^{\prime}}(Y^{\prime},Z)+w(X[|X|-1],Y[|Y|-1])+w(Y[|Y|-1], \varepsilon)=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)+\mathsf{ed}^{w}_{\mathcal{B}}(Y,Z)\).
7. \((x_{m-1},y_{m-1})=(|X|-1,|Y|-1)\) and \((\hat{y}_{\hat{m}-1},\hat{z}_{\hat{m}-1})=(|Y|-1,|Z|-1)\), that is, \(\mathcal{A}\) aligns \(X[|X|-1]\) with \(Y[|Y|-1]\) and \(\mathcal{B}\) aligns \(Y[|Y|-1]\) with \(Z[|Z|-1]\). In this case, \(\mathcal{A}^{\prime}\in\mathsf{A}(X^{\prime},Y^{\prime})\) and \(\mathcal{B}^{\prime}\in\mathsf{A}(Y^{\prime},Z^{\prime})\), and we define \(\mathcal{A}\otimes\mathcal{B}:=(\mathcal{A}^{\prime}\otimes\mathcal{B}^{\prime}) \odot(|X|,|Z|)\) so that \(\mathcal{A}\otimes\mathcal{B}\) aligns \(X[|X|-1]\) with \(Z[|Z|-1]\). By the induction hypothesis, \(\mathsf{ed}^{w}_{\mathcal{A}\otimes\mathcal{B}}(X,Z)=\mathsf{ed}^{w}_{\mathcal{A}^{ \prime}\otimes\mathcal{B}^{\prime}}(X^{\prime},Z^{\prime})+w(X[|X|-1],Z[|Z|-1]) \leq\mathsf{ed}^{w}_{\mathcal{A}^{\prime}}(X^{\prime},Y^{\prime})+\mathsf{ed }^{w}_{\mathcal{B}^{\prime}}(Y^{\prime},Z)+w(X[|X|-1],Y[|Y|-1])+w(Y[|-1],\)\(Z[|Z|-1])=\mathsf{ed}^{w}_{\mathcal{A}}(X,Y)+\mathsf{ed}^{w}_{ \mathcal{B}}(Y,Z)\).
It is easy to check that the above cases cover all the possibilities. In particular, Case 2 covers the case of \(\hat{m}=0<m\) whereas Case 3 covers the case of \(m=0<\hat{m}\). We also remark that Cases 2 and 3 are sometimes both applicable; by convention, we then follow Case 2. Finally, we note that Cases 5-7 rely on the assumption that \(w\) satisfies the triangle inequality. This completes the proof of the first part of the fact.
To show that \(\mathsf{ed}^{w}(X,Y)\) can be equivalently defined as the minimum cost of a sequence of edits transforming \(X\) into \(Y\), we first consider each of the operations in a minimum alignment \(\mathcal{A}\) of \(X\) and \(Y\) individually to build a sequence of edits \(S\) from \(\mathcal{A
4. If \((x_{t},y_{t})=(x_{t+1},y_{t+1}-1)\), we add an insertion of \(Y[y_{t}]\) at position \(x_{t}\) in \(X\) to \(S\).
In all cases, we decrement \(t\) by \(1\). Clearly the resulting sequence of edits has the same cost as \(\mathcal{A}\), and by the definition of alignments, \(S\) transforms \(X\) into \(Y\). We now consider a minimum sequence of edits \(S\) that transforms \(X\) to \(Y\) and build an alignment \(\mathcal{A}\in\mathsf{A}(X,Y)\) from \(S\) such that \(\mathsf{ed}_{\mathcal{A}}^{w}(X,Y)\leq cost(S)\) (we let \(cost(S)\) denote the total cost of edits by \(S\)). We use notation \(\mathcal{A}^{\prime},X^{\prime},Y^{\prime}\) as before, and proceed by induction to construct \(\mathcal{A}\):
1. If \(X[|X|-1]\) is deleted by \(S\) and a character is inserted at the end of \(X\), then \(\mathcal{A}^{\prime}\in\mathsf{A}(X^{\prime},Y^{\prime})\) and we set \(\mathcal{A}=\mathcal{A}^{\prime}\odot(|X|,|Y|)\). We note that the inserted character \(c\) may be substituted to \(Y[|Y|-1]\). We let \(S^{\prime}\) be the sequence \(S\) without the insertion, deletion, and if possible substitution on the last character of \(X\). By the induction hypothesis and triangle inequality, \(\mathsf{ed}_{\mathcal{A}}(X,Y)=\mathsf{ed}_{\mathcal{A}^{\prime}}(X^{\prime}, Y^{\prime})+w(X[|X|-1],Y[|Y|-1])\leq\mathsf{ed}_{\mathcal{A}^{\prime}}(X^{ \prime},Y^{\prime})+w(X[|X|-1],\varepsilon)+w(\varepsilon,Y[|Y|-1])\leq\mathsf{ ed}_{\mathcal{A}^{\prime}}(X^{\prime},Y^{\prime})+w(X[|X|-1],\varepsilon)+w( \varepsilon,c),+w(c,Y[|Y|-1])\leq cost(S^{\prime})+w(X[|X|-1],\varepsilon)+w( \varepsilon,c),+w(c,Y[|Y|-1])=cost(S)\).
2. If \(X[|X|-1]\) is deleted by \(S\) and no character is inserted at the end of \(X\), then \(\mathcal{A}^{\prime}\in\mathsf{A}(X^{\prime},Y)\) and we set \(\mathcal{A}=\mathcal{A}^{\prime}\odot(|X|,|Y|)\). We let \(S^{\prime}\) be the sequence \(S\) without the deletion of \(X[|X|-1]\). By the induction hypothesis, \(\mathsf{ed}_{\mathcal{A}}(X,Y)=\mathsf{ed}_{\mathcal{A}^{\prime}}(X^{\prime}, Y)+w(X[|X|-1],\varepsilon)\leq cost(S^{\prime})+w(X[|X|-1],\varepsilon)=cost(S)\).
3. If a character is inserted at the end of \(X\), then \(\mathcal{A}^{\prime}\in\mathsf{A}(X,Y^{\prime})\) and we set \(\mathcal{A}=\mathcal{A}^{\prime}odot(|X|,|Y|)\). We let \(S^{\prime}\) be the sequence \(S\) without this insertion. By the induction hypothesis, \(\mathsf{ed}_{\mathcal{A}}(X,Y)=\mathsf{ed}_{\mathcal{A}^{\prime}}(X,Y^{\prime })+w(\varepsilon,Y[|Y|-1],\varepsilon)\leq cost(S^{\prime})+w(\varepsilon,Y[| Y|-1])=cost(S)\).
4. If \(X[|X|-1]\) is substituted by \(S\), then \(\mathcal{A}^{\prime}\in\mathsf{A}(X^{\prime},Y^{\prime})\) and we set \(\mathcal{A}=\mathcal{A}^{\prime}\odot(|X|,|Y|)\). We let \(S^{\prime}\) be the sequence of \(S\) without any substitutions of \(X[|X|-1]\) and let \(C\) be an ordered list of characters substituted by \(S\) at \(X[|X|-1]\). Then, by the induction hypothesis, \(\mathsf{ed}_{\mathcal{A}}(X,Y)=\mathsf{ed}_{\mathcal{A}^{\prime}}(X^{\prime}, Y^{\prime})+w(X[|X|-1],Y[|Y|-1])\leq\mathsf{ed}_{\mathcal{A}^{\prime}}(X^{ \prime},Y^{\prime})+\sum_{i=0}^{|C|-1}w(c_{i},c_{i+1})\leq cost(S^{\prime})+ \sum_{i=0}^{|C|-1}w(c_{i},c_{i+1})=cost(S)\).
By induction, we can see that there exists an alignment \(\mathcal{A}\) with cost at most that of \(S\).
**Fact 2.6**.: _Consider a string \(X\) and its fragment \(X[i\,\mathpunct
\(\mathcal{B}\) deletes \(X[c_{u,m_{u}}]\) and \(X[c_{u,t}]\sim_{\mathcal{B}}Y[c_{u,t+1}-i]=X[c_{u,t+1}]\) holds for \(t\in[0\mathinner{.}m_{u})\), this yields
\[\operatorname{\mathsf{ed}}_{\mathcal{B}}^{w}(X,Y) \geq\sum_{u\in[0\mathinner{.}i)\mathinner{.}[j\cup|X|)}\left(w(X [c_{u,m_{u}}],\varepsilon)+\sum_{t\in[0\mathinner{.}m_{u})}w(X[c_{u,t}],X[c_{u,t+1}])\right)\] \[\geq\sum_{u\in[0\mathinner{.}i)\mathinner{.}[j\cup|X|)}w(X[u],\varepsilon)\] \[=\operatorname{\mathsf{ed}}^{w}(X[0\mathinner{.}i)\mathinner{.}X [j\mathinner{.}|X|),\varepsilon).\]
Since \(\mathcal{B}\) was chosen arbitrarily, we conclude that \(\operatorname{\mathsf{ed}}^{w}(X,Y)\geq\operatorname{\mathsf{ed}}^{w}(X[0 \mathinner{.}i)\mathinner{.}X[j\mathinner{.}|X|),\varepsilon).\)
|
2303.04100 | Continuous-Time Modeling and Analysis of Particle Beam Metrology | Particle beam microscopy (PBM) performs nanoscale imaging by pixelwise
capture of scalar values representing noisy measurements of the response from
secondary electrons (SEs) integrated over a dwell time. Extended to metrology,
goals include estimating SE yield at each pixel and detecting differences in SE
yield across pixels; obstacles include shot noise in the particle source as
well as lack of knowledge of and variability in the instrument response to
single SEs. A recently introduced time-resolved measurement paradigm promises
mitigation of source shot noise, but its analysis and development have been
largely limited to estimation problems under an idealization in which SE bursts
are directly and perfectly counted. Here, analyses are extended to error
exponents in feature detection problems and to degraded measurements that are
representative of actual instrument behavior for estimation problems. For
estimation from idealized SE counts, insights on existing estimators and a
superior estimator are also provided. For estimation in a realistic PBM imaging
scenario, extensions to the idealized model are introduced, methods for model
parameter extraction are discussed, and large improvements from time-resolved
data are presented. | Akshay Agarwal, Minxu Peng, Vivek K. Goyal | 2023-03-07T18:02:57Z | http://arxiv.org/abs/2303.04100v1 | # Continuous-Time Modeling and Analysis of
###### Abstract
Particle beam microscopy (PBM) performs nanoscale imaging by pixelwise capture of scalar values representing noisy measurements of the response from secondary electrons (SEs) integrated over a dwell time. Extended to metrology, goals include estimating SE yield at each pixel and detecting differences in SE yield across pixels; obstacles include shot noise in the particle source as well as lack of knowledge of and variability in the instrument response to single SEs. A recently introduced time-resolved measurement paradigm promises mitigation of source shot noise, but its analysis and development have been largely limited to estimation problems under an idealization in which SE bursts are directly and perfectly counted. Here, analyses are extended to error exponents in feature detection problems and to degraded measurements that are representative of actual instrument behavior for estimation problems. For estimation from idealized SE counts, insights on existing estimators and a superior estimator are also provided. For estimation in a realistic PBM imaging scenario, extensions to the idealized model are introduced, methods for model parameter extraction are discussed, and large improvements from time-resolved data are presented.
binary hypothesis testing, electron microscopy, Fisher information, helium ion microscopy, Kullback-Leibler divergence, Neyman Type A distribution, Poisson processes, truncated Poisson distribution, zero-inflated Poisson distribution.
## I Introduction
Particle beam microscopy (PBM) techniques such as scanning electron microscopy (SEM) [1, 2] and helium ion microscopy (HIM) [3, 4] are widely used to image and characterize samples at the nanoscale. Images are formed one pixel at a time by raster scanning a focused beam of high-energy charged particles (electrons in SEM and helium ions in HIM) and detecting secondary electrons (SEs) emitted from the sample. The pixel value is a noisy measurement of the intensity of an SE signal integrated over some _dwell time_. The scale is often arbitrary; the micrograph is then an image showing spatial variations without representing a quantified physical property. Calibration of the _beam current_ (expressed as the mean number of incident particles per unit time) and the mean instrument response per SE enables the more ambitious goal of _metrology_, with pixel values representing estimates of _SE yield_ per incident particle.
Randomness of the incidence of primary particles--_source shot noise_--is a key characteristic of PBM that contributes to its noise and hence to the amount of averaging that is needed to produce high-quality images. The achievable image quality in PBM is often limited by the imaging dose (_i.e._, number of incident particles). This limitation is particularly important for radiation-sensitive materials, such as proteins and biomolecules, which are increasingly being imaged by various PBM techniques [5, 6]. Although there has been previous work on the trade-off between dose and image quality [7, 8, 9], as well as attempts to improve PBM image quality through the use of denoising and deconvolution techniques [10, 11, 12, 13, 14], there is a lack of fundamental work around the relationship between information about the sample SE yield and the imaging dose used, as well as a lack of statistically-motivated SE yield estimation techniques based on the signals collected on PBMs.
In this paper, we explore the fundamental limits to particle beam metrology and describe a novel imaging scheme where the integration over a dwell time is replaced with time-resolved (TR) measurement using analog outcoupling of the SE signal. For both detection and estimation of SE yield, we find that TR measurement can lead to large improvements. Starting with an idealized model in which SEs are counted perfectly, we show that the improvements are characterized by differences in Kullback-Leibler divergence and Fisher information between Gaussian and zero-inflated Poisson distributions. We extend our modeling and analysis to include three causes for inaccuracy in SE counts: saturation in SE counting, additive noise from the detection signal chain, and overlap of responses from temporally adjacent incident particles.
The concept of benefiting from time resolution in PBM was introduced in [15]. Continuous-time modeling of PBM was introduced in [16], along with theoretical analyses and Monte Carlo simulations of several estimators for SE yield. Robustness to unknown beam current was shown in [17, 18], and joint estimation of beam current and SE yield was studied in [19, 20]. A recent manuscript develops denoising procedures to apply with time-resolved data based on plug-and-play methods [21]. All these previous works concentrate on a model in which SEs are counted perfectly. Thus, they can give the impression that the benefits of TR measurement are contingent on this idealization. Here, by including various degradations to SE observation, we highlight that the benefits from TR measurement persist with non-ideal SE detection.
### _Contributions_
The main contributions of this paper include:
* _Detection of SE yield._ We provide the first results on hypothesis testing between two SE yield values from
SE-count measurements. The unbounded improvements in error exponents due to TR measurement are analyzed using Kullback-Leibler divergence.
* _Improved estimation of SE yield._ We introduce a new estimator for SE yield from SE-count measurements that improves upon the estimators analyzed and simulated in [16]. We also provide new insights on some estimators and bounds in [16].
* _Estimation from binary SE measurements_. We show that SE yield can be estimated from measurements that saturated at 1 SE, and we characterize the performance limits under this limited form of measurement.
* _Estimation from degraded SE measurements._ We analyze the increases of estimation error lower bounds that result from additive noise in SE measurements. We also provide a procedure to fit model parameters to experimental data.
* _Impact of nonzero pulse width._ We introduce a compensation for the possible undercounting of detection events due to overlapping of pulses.
### _Outline_
Section II introduces an abstract model for PBM and gathers several preliminary computations pertaining to the Neyman Type A observations generated with perfect counting of SEs. Section III is dedicated to feature detection abstracted as a binary hypothesis test. We compute error exponents for conventional and time-resolved measurements, and we find that the increase of error exponents (decrease of error probabilities) with TR measurements is by a large (potentially unbounded) factor. Section IV turns to estimation problems, still assuming perfect counting of SEs. We provide insights into existing estimators and introduce a new estimator that is based on the conditional expectation of an oracle estimator. Section V introduces an estimator for SE yield that does not require SE counts; instead, it uses only the number of detection events, as one would obtain if the SE detector saturates at a single electron. Section VI develops a richer model for noisy SE detection and several estimators to apply in this setting. We show how the Fisher information of the measurements decays with increasing noise. We develop methods to fit model parameters and show results with data collected from a real instrument. Estimation simulations show large improvements from TR measurement. Section VII concludes.
## II Abstract Model with Ideal SE Counting
After briefly describing the operation of a typical instrument in Section II-A, we review an abstraction that assumes ideal counting of SEs [16] in Section II-B. This idealization is used for detection problems in Section III and for estimation problems in Section IV.
The Neyman Type A distribution of an idealized conventional measurement of SEs, developed in Section II-D, can be used for various numerical evaluations but is not conducive to closed-form analytical results. We thus introduce high- and low-dose approximations and a proxy inspired by the concept of a deterministic incident particle beam. The approximations and their asymptotes offer insightful intuitions in understanding the behavior of the distribution of the idealized conventional measurement. To later aid in contrasting with TR measurement, we also provide computations of Fisher information (Section II-E) and Kullback-Leibler divergence (Section II-F) for the conventional measurement.
While the incident particles may be electrons or ions, for simplicity we refer to them as ions.
### _Operation of a Typical Instrument_
In a typical particle-beam imaging setup, shown schematically in Figure 1(a), the SEs emitted from each pixel of the sample are detected by an Everhart-Thornley (ET) detector [22], which consists of a scintillator followed by a photomultiplier tube (PMT). The detection of SEs from a single incident ion typically occurs within a few femtoseconds [23], whereas the mean interarrival time for ions is on the order of 100 ns. After emission from the sample pixel, the SEs are typically accelerated to 10 \(\mathrm{keV}\) and made incident on a scintillator. The scintillator generates a random number of photons, with the mean proportional to the number of incident SEs. These photons are then directed towards the PMT though a light pipe, where they generate a voltage pulse with a mean height proportional to their number. Therefore, the final output signal from the ET detector consists of a series of voltage pulses, as depicted by the experimental data shown in Figure 1(e).
Although the ideal SE image would be a pixel-wise map of the sample SE yield, conventional PBM does not attempt to create such an image due to two factors. First, the gains and loss factors involved in the SE detection chain are usually not available to the microscopist or the imaging software. Second, there can be a large variance in the voltage signal generated by the one SE. Due to the lack of knowledge of the mean instrument response per SE, the count of SEs per pixel is conventionally not evaluated during imaging, preventing estimation of the SE yield. Instead, the voltage signal is sampled at a fixed period (typically 100 ns) and summed for each pixel dwell time to generate a scalar 8-bit pixel brightness.
### _Stochastic Process Abstraction_
Our measurement model and estimation techniques are separable across the pixels, so we omit any pixel indexing. Denote the pixel dwell time by \(t\). For each pixel, the incident ion arrivals are modeled as a Poisson process with known rate \(\Lambda\) per unit time, as illustrated in Figure 1(b) for \(t=10\)\(\mathrm{\SIUnitSymbolMicro s}\). Incident ion \(i\) interacts with the sample, causing \(X_{i}\) number of SEs to be detected, as illustrated in Figure 1(c). Each \(X_{i}\) can be described as a Poisson random variable with mean \(\eta\)[24]. This \(\eta\) is called the _SE yield_ and is the parameter we wish to measure for the pixel. Note that detection efficiency [25, 26] is incorporated within the definition of \(\eta\).
The model can be described as a marked Poisson process \(\{(T_{1},X_{1}),\,(T_{2},X_{2}),\,\ldots\}\), where \((T_{1},\,T_{2},\,\ldots)\) is the arrival time sequence of the ions. The number of incident ions \(M\) is the largest \(i\) such that \(T_{i}\leq t\) (with \(M=0\) when \(T_{1}>t\)). This \(M\) is a Poisson random variable with mean \(\lambda=\Lambda t\), which we
call the dose.1 Ions are observed only indirectly through the detection of SEs. There is no observed event when \(X_{i}=0\). Hence what is observable is a marked thinned Poisson process \(\{(\widetilde{T}_{1},\widetilde{X}_{1}),\,(\widetilde{T}_{2},\widetilde{X}_{2}), \,\ldots\}\), where \(\widetilde{T}_{i}\) is the arrival time of the \(i\)th ion that produces a _positive_ number of detected SEs and \(\widetilde{X}_{i}\) is the corresponding number of detected SEs, as illustrated in Figure 1(d). Define \(\widetilde{M}\) to be the largest \(i\) such that \(\widetilde{T}_{i}\leq t\) (with \(\widetilde{M}=0\) when \(\widetilde{T}_{1}>t\)).
Footnote 1: More commonly, dose is the mean number of incident particles per unit area; here, we are not considering absolute spatial scale.
### _Time-Resolved Measurement Model_
Observation of
\[\left\{\widetilde{M},\widetilde{T},\widetilde{X}\right\}=\left\{\widetilde{M},(\widetilde{T}_{1},\widetilde{T}_{2},...,\widetilde{T}_{\widetilde{M}}),( \widetilde{X}_{1},\widetilde{X}_{2},...,\widetilde{X}_{\widetilde{M}})\right\} \tag{1}\]
was introduced in [16] as _continuous-time time-resolved_ measurement, contrasting with a discrete-time model introduced earlier in [15]. Here we will consider only the continuous-time setting, which facilitates simpler and more easily interpretable results.
Since the thinning is independent of the ion incidence process and \(\mathrm{P}(X_{i}=0)=e^{-\eta}\),
\[\widetilde{M}\sim\mathrm{Poisson}(\lambda(1-e^{-\eta})). \tag{2}\]
Each \(\widetilde{X}_{i}\) has the zero-truncated Poisson distribution with parameter \(\eta\):
\[\mathrm{P}_{\widetilde{X}_{i}}(j;\eta)=\frac{e^{-\eta}}{1-e^{-\eta}}\cdot \frac{\eta^{j}}{j!},\qquad j=1,\,2,\,\ldots. \tag{3}\]
The mean of this distribution is
\[\mathbb{E}\!\left[\,\widetilde{X}_{i}\,\right]=\frac{\eta}{1-e^{-\eta}}. \tag{4}\]
Given \(\widetilde{M}\), \((\widetilde{X}_{1},\,\ldots,\,\widetilde{X}_{\widetilde{M}})\) are independent and identically distributed. Conditioned on \(\widetilde{M}=\widetilde{m}>0\), the normalized time \(T_{i}/t\) has the \(\mathrm{Beta}(i,\widetilde{m}+1-i)\) distribution (with no dependence on \(\eta\)).
### _Conventional Measurement Distribution_
As discussed in Section II-A, a typical instrument generates a single scalar value for each raster scan location, and this value has many sources of noise. An idealized scalar measurement is for the instrument to give the cumulative SE counts within dwell time \(t\) (_i.e._, the measurement results in a _scalar_ value for every pixel, as opposed to vector-valued TR measurement):
\[Y=\sum_{i=i}^{M}X_{i}. \tag{5}\]
This \(Y\) is a Neyman Type A random variable with parameters \(\lambda\) and \(\eta\), which we will denote \(\mathrm{Neyman}(\lambda,\eta)\). Its probability mass function (PMF) is
\[\mathrm{P}_{Y}(y\,;\,\eta,\lambda)=\frac{e^{-\lambda}\eta^{y}}{y!}\sum_{m=0}^{ \infty}\frac{(\lambda e^{-\eta})^{m}m^{y}}{m!},\quad y=0,\,1,\,\ldots, \tag{6}\]
its mean is
\[\mathbb{E}[\,Y\,]=\lambda\eta, \tag{7}\]
and its variance is
\[\mathrm{var}(Y)=\lambda\eta(\eta+1). \tag{8}\]
Like for a Poisson distribution, the variance increases with the mean; unlike a Poisson distribution, the variance exceeds the mean, and this is increasingly true as \(\eta\) increases. This excess variance is consistent with experimental observations, and compound Poisson distributions have been previously used to model the distribution of SEs in PBM [27, 28, 29, 30].
The series appearing within the PMF (6) makes the Neyman Type A distribution difficult to work with both analytically and computationally. While we will sometimes use (6) directly, it simplifies some computations and makes certain comparisons more intuitive to use approximations that hold for high or low \(\lambda\). A purely hypothetical situation of a deterministic incident beam also provides valuable context.
Fig. 1: A possible realization of the random processes involved in a generative model of SE imaging in PBM. (a) Schematic for SE imaging in PBM. (b) Generation of \(M\) incident ions. (c) The underlying marked Poisson process \(\{(T_{1},X_{1}),\,(T_{2},X_{2}),\,\ldots\}\) with ions incident at times \(T_{1},\,T_{2},\,\ldots\) generating detected SE counts \(X_{1},\,X_{2},\,\ldots\). (d) The marked Poisson process \(\{(\widetilde{T}_{1},\widetilde{X}_{1}),\,(\widetilde{T}_{2},\widetilde{X}_{2 }),\,\ldots\}\) produced by discarding the ions for which no SEs are detected. (e) SE detector voltage response. Panel is a real snapshot of voltage output from an HIM.
#### Iv-B1 Deterministic beam (Poisson approximation)
If \(\lambda\) is a positive integer, we may imagine a situation in which exactly \(\lambda\) ions are incident. Since the sum of independent Poisson random variables is a Poisson random variable, we obtain a simple model of
\[Y_{\mathrm{det}}\sim\mathrm{Poisson}(\lambda\eta). \tag{9}\]
Notice by comparison to (8) that \(Y\) has higher variance than \(Y_{\mathrm{det}}\) by a factor of \(\eta+1\). This \(\eta+1\) factor is attributed to the randomness of the incident ion counts, _i.e._, source shot noise.
#### Iv-B2 High \(\lambda\) (Gaussian approximation)
As \(\lambda\to\infty\), a Gaussian approximation with matching moments in (7) and (8) holds in the sense of pointwise convergence of moment generating functions [31, SS2a]:
\[Y_{\mathrm{high}}\sim\mathcal{N}(\lambda\eta,\lambda\eta(\eta+1)). \tag{10}\]
One may use (10) to form an approximate PMF by integrating over intervals \(\{[y-\frac{1}{2},\,y+\frac{1}{2}]\}_{y=0}^{\infty}\). For \(\lambda>10\) and \(\eta>1\), the squared \(\ell^{2}\) error of this approximation is less than 0.002 [31, Fig. 1]. Dose exceeding 10 ions per pixel is typical for useful micrograph quality.
#### Iv-B3 Low \(\lambda\) (zero-inflated Poisson approximation)
As \(\lambda\to 0\), the PMF (6) converges pointwise to a Poisson distribution with extra mass at zero [31, SS2b]:
\[\mathrm{P}_{Y_{\mathrm{low}}}(y\,;\,\eta,\lambda)=\left\{\begin{array}{ll}e^ {-\lambda}+(1-e^{-\lambda})e^{-\eta},&y=0;\\ (1-e^{-\lambda})e^{-\eta}\eta^{y}/y!,&y=1,\,2,\,\ldots.\end{array}\right. \tag{11}\]
We will denote this \(\mathrm{ZIP}\mathrm{Poisson}(\lambda,\eta)\). For \(\lambda<0.3\) and \(\eta<10\), the squared \(\ell^{2}\) error of this approximation is less than 0.001 [31, Fig. 1]. Our use of this approximation is to understand continuous-time behavior, where \(\lambda\) is effectively infinitesimal.
### _Fisher Information_
Fisher information (FI) is a basic tool for lower bounding the mean-squared errors of estimators. Here we gather computations of FI that will be used to contextualize the FI of TR measurements.
#### Iv-E1 Deterministic beam (Poisson approximation)
The Fisher information about mean \(\nu\) in a \(\mathrm{Poisson}(\nu)\) observation is
\[\mathcal{I}(\nu)=\frac{1}{\nu}.\]
Thus, we have \(\mathcal{I}_{Y_{\mathrm{det}}}(\lambda\eta)=1/(\lambda\eta)\). With \(\lambda\) known, this translates by simple rescaling to
\[\frac{1}{\lambda}\mathcal{I}_{Y_{\mathrm{det}}}(\eta;\lambda)=\frac{1}{\eta} \tag{12}\]
in a normalized form we will use below.
#### Iv-E2 High \(\lambda\) (Gaussian approximation)
The Fisher information about mean \(\mu\) in a Gaussian \(\mathcal{N}(\mu,\sigma^{2})\) observation is
\[\mathcal{I}(\mu)=\frac{1}{\sigma^{2}}.\]
Using the Gaussian approximation (10) to the \(\mathrm{Neyman}(\lambda,\eta)\) distribution suggests heuristically that the Fisher information about \(\lambda\eta\) in \(Y_{\mathrm{high}}\) is
\[\mathcal{I}_{Y_{\mathrm{high}}}(\lambda\eta)=\frac{1}{\lambda\eta(\eta+1)}. \tag{13}\]
With \(\lambda\) known, this translates by simple rescaling to
\[\frac{1}{\lambda}\mathcal{I}_{Y_{\mathrm{high}}}(\eta;\lambda)=\frac{1}{\eta( \eta+1)}=\left(\frac{1}{\eta}-\frac{1}{\eta+1}\right). \tag{14}\]
Indeed a detailed argument for
\[\lim_{\lambda\to\infty}\frac{1}{\lambda}\mathcal{I}_{Y}(\eta;\lambda)=\frac{1 }{\eta}-\frac{1}{\eta+1} \tag{15}\]
is given in [16, App. B]. Comparing to (12), the Fisher information is reduced by a factor of \(\eta+1\).
#### Iv-E3 Low \(\lambda\) (zero-inflated Poisson approximation)
From the PMF (11),
\[\log\mathrm{P}_{Y_{\mathrm{low}}}(y\,;\,\eta,\lambda) \tag{16}\] \[=\left\{\begin{array}{ll}\log(e^{-\lambda}+(1-e^{-\lambda})e^{ -\eta}),&y=0;\\ \log(1-e^{-\lambda})-\eta+y\log\eta-\log(y!),&y=1,\,2,\,\ldots.\end{array}\right.\]
Differentiating gives
\[\frac{\partial\log\mathrm{P}_{Y_{\mathrm{low}}}(y\,;\,\eta,\lambda)}{ \partial\eta} \tag{17}\] \[=\left\{\begin{array}{ll}\frac{(1-e^{-\lambda})e^{-\eta}}{e^{- \lambda}+(1-e^{-\lambda})e^{-\eta}},&y=0;\\ -1+y/\eta,&y=1,\,2,\,\ldots.\end{array}\right.\]
Now computing the expected value of the square of this quantity under the PMF (11) gives
\[\mathcal{I}_{Y_{\mathrm{low}}}(\eta;\lambda)=\frac{(1-e^{-\lambda})^{2}e^{-2 \eta}}{e^{-\lambda}+(1-e^{-\lambda})e^{-\eta}}+(1-e^{-\lambda})\left(\frac{1}{ \eta}-e^{-\eta}\right). \tag{18}\]
In the limit of low \(\lambda\), the first term approaches zero and the first factor of the second term approaches \(\lambda\), so
\[\lim_{\lambda\to 0}\frac{1}{\lambda}\mathcal{I}_{Y_{\mathrm{low}}}(\eta;\lambda)= \frac{1}{\eta}-e^{-\eta}. \tag{19}\]
This matches a more tedious derivation of
\[\lim_{\lambda\to 0}\frac{1}{\lambda}\mathcal{I}_{Y}(\eta;\lambda)=\frac{1}{\eta}-e ^{-\eta} \tag{20}\]
in [16, App. B].
The low-\(\lambda\) limit of Fisher information in (19) exceeds the high-\(\lambda\) limit in (15) by a factor of \((\eta+1)(1-\eta e^{-\eta})\). This factor varies from 1 when \(\eta=0\) to \(\approx\eta+1\) when \(\eta\) is high. This gain in Fisher information can be attributed to increasing certainty in the number of incident ions at low \(\lambda\) and consequent reduction in source shot noise [15].
### _Kullback-Leibler Divergence_
Kullback-Leibler divergence (KLD) is a basic tool for quantifying distances between distributions and in particular determining error exponents for hypothesis testing. Here we gather computations of KLD applicable to distinguishing \(\mathrm{Neyman}(\lambda,\eta_{0})\) and \(\mathrm{Neyman}(\lambda,\eta_{1})\) distributions. This will be used to contextualize the KLD of TR measurements.
For distributions \(p\) and \(q\) on the same alphabet, the Kullback-Leibler divergence is
\[D_{\mathrm{KL}}(p\,\|\,q)=\mathrm{E}_{P}[\,\log(p(Y)/q(Y))\,]\,, \tag{21}\]
which is a shorthand for the expected value of the random variable \(\log(p(Y)/q(Y))\) when \(Y\) has the \(p\) distribution.
#### Ii-B1 Deterministic beam (Poisson approximation)
For generic Poisson distributions, the KLD is given by
\[D_{\mathrm{KL}}(\mathrm{Poisson}(\nu_{0})\,\|\,\mathrm{Poisson}(\nu_{1}))=\nu_{1} \!-\!\nu_{0}\!+\!\nu_{0}\log\frac{\nu_{0}}{\nu_{1}}. \tag{21}\]
Thus, we have
\[\frac{1}{\lambda}D_{\mathrm{KL}}(\mathrm{Poisson}(\lambda\eta_{0} )\,\|\,\mathrm{Poisson}(\lambda\eta_{1}))\] \[=\frac{1}{\lambda}\left[\lambda\eta_{1}-\lambda\eta_{0}+\lambda \eta_{0}\log\frac{\lambda\eta_{0}}{\lambda\eta_{1}}\right]\] \[=\eta_{1}-\eta_{0}+\eta_{0}\log\frac{\eta_{0}}{\eta_{1}}. \tag{22}\]
#### Ii-B2 High \(\lambda\) (Gaussian approximation)
For generic univariate Gaussian distributions, the KLD is given by
\[D_{\mathrm{KL}}(\mathcal{N}(\mu_{0},\sigma_{0}^{2})\,\|\,\mathcal{ N}(\mu_{1},\sigma_{1}^{2}))\] \[=\frac{1}{2}\log\frac{\sigma_{1}^{2}}{\sigma_{0}^{2}}+\frac{ \sigma_{0}^{2}+(\mu_{0}-\mu_{1})^{2}}{2\sigma_{1}^{2}}-\frac{1}{2}. \tag{23}\]
Thus, we have
\[D_{\mathrm{KL}}(\mathcal{N}(\lambda\eta_{0},\lambda\eta_{0}( \eta_{0}+1))\,\|\,\mathcal{N}(\lambda\eta_{1},\lambda\eta_{1}(\eta_{1}+1)))\] \[=\frac{1}{2}\log\frac{\lambda\eta_{1}(\eta_{1}+1)}{\lambda\eta_{ 0}(\eta_{0}+1)}+\frac{\lambda\eta_{0}(\eta_{0}+1)+(\lambda\eta_{0}-\lambda \eta_{1})^{2}}{2\lambda\eta_{1}(\eta_{1}+1)}-\frac{1}{2}\] \[=\frac{1}{2}\log\frac{\eta_{1}(\eta_{1}+1)}{\eta_{0}(\eta_{0}+1) }+\frac{\eta_{0}(\eta_{0}+1)+\lambda(\eta_{0}-\eta_{1})^{2}}{2\eta_{1}(\eta_{ 1}+1)}-\frac{1}{2}. \tag{24}\]
Furthermore,
\[\lim_{\lambda\to\infty}\frac{D_{\mathrm{KL}}(\mathcal{N}( \lambda\eta_{0},\lambda\eta_{0}(\eta_{0}+1))\,\|\,\mathcal{N}(\lambda\eta_{1 },\lambda\eta_{1}(\eta_{1}+1)))}{\lambda}\\ =\frac{(\eta_{0}-\eta_{1})^{2}}{2\eta_{1}(\eta_{1}+1)}. \tag{25}\]
#### Ii-B3 Low \(\lambda\) (zero-inflated Poisson approximation)
For zero-inflated Poisson distributions following (11), we can compute the KLD \(D_{\mathrm{KL}}(\mathrm{ZIPPoisson}(\lambda,\eta_{0})\,\|\,\mathrm{ZIPPoisson}( \lambda,\eta_{1}))\) directly. For \(y=0\) we have
\[\log\frac{p(0)}{q(0)}=\log\frac{e^{-\lambda}+(1-e^{-\lambda})e^{-\eta_{0}}}{e ^{-\lambda}+(1-e^{-\lambda})e^{-\eta_{1}}}; \tag{26}\]
for \(y=1,\,2,\,\ldots\), we have
\[\frac{p(y)}{q(y)}=\frac{(1-e^{-\lambda})e^{-\eta_{0}}\eta_{0}^{y}/y!}{(1-e^{- \lambda})e^{-\eta_{1}}\eta_{1}^{y}/y!}=e^{-(\eta_{0}-\eta_{1})}(\eta_{0}/\eta _{1})^{y},\]
so
\[\log\frac{p(y)}{q(y)}=\eta_{1}-\eta_{0}+y\log\frac{\eta_{0}}{\eta_{1}}. \tag{27}\]
For the KLD, we would like to average (26) and (27) under the \(p\) distribution:
\[D_{\mathrm{KL}}(\mathrm{ZIPPoisson}(\lambda,\eta_{0})\,\|\, \mathrm{ZIPPoisson}(\lambda,\eta_{1}))\\ =g(\eta_{0})\log\frac{g(\eta_{0})}{g(\eta_{1})}\\ +(1-g(\eta_{0}))(\eta_{1}-\eta_{0})+(1-e^{-\lambda})\eta_{0}\log \frac{\eta_{0}}{\eta_{1}},\] (28a) where \[g(s)=e^{-\lambda}+(1-e^{-\lambda})e^{-s}. \tag{28b}\]
Furthermore,
\[\lim_{\lambda\to 0}\frac{D_{\mathrm{KL}}(\mathrm{ZIP Poisson}(\lambda,\eta_{0})\,\|\,\mathrm{ZIP Poisson}(\lambda,\eta_{1}))}{\lambda}\\ =e^{-\eta_{0}}-e^{-\eta_{1}}+(1-e^{-\eta_{0}})(\eta_{1}-\eta_{0} )+\eta_{0}\log\frac{\eta_{0}}{\eta_{1}}. \tag{29}\]
Comparing (22), (25), and (29) is more subtle than the analogous comparison of FI expressions. We defer this to the following section in the context of specific numerical examples of error exponents.
## III Feature Detection with SE Counting
A common goal in PBM is to decide on the presence or absence of a feature that is revealed by deviation of SE yield \(\eta\) from the value of surrounding pixels. For instance, detecting feature positions is a crucial step in improving the accuracy of line-edge roughness measurement [32], which can be helpful in assessing semiconductor manufacturing accuracy. Here we consider feature detection when SE count data is available as described in Section II.
### _A Binary Hypothesis Test_
To illustrate the fundamental advantage of TR measurements for feature detection, we consider a binary hypothesis testing problem between SE yield values of \(\eta_{0}\) ("no alarm") and \(\eta_{1}\) ("alarm"), with dose \(\lambda\) known. With the (idealized) conventional measurement \(Y\), the decision must be made based on whether the observation more plausibly came from the \(\mathrm{Neyman}(\lambda,\eta_{0})\) or \(\mathrm{Neyman}(\lambda,\eta_{1})\) distribution. Observation of \(\{\widetilde{M},\,(\widetilde{T}_{1},\widetilde{X}_{1}),\,(\widetilde{T}_{2}, \widetilde{X}_{2}),\,\ldots,\,(\widetilde{T}_{\widetilde{M}},\widetilde{X}_{ \widetilde{M}})\}\) is at least as informative, and we wish to characterize how much the decision making accuracy is improved.
In this section, we imagine that a PBM experiment with dose \(\lambda\) is repeated many times. We study the performance through the rate of exponential decay of the missed detection rate for a sequence of Neyman-Pearson hypothesis tests that minimize the missed detection rate while satisfying a fixed false alarm rate criterion. For \(n\) repetitions, the probability of missed detection \(\mathrm{P}_{\mathrm{MD}}(n)\) satisfies
\[\lim_{n\to\infty}-\frac{1}{n}\log\mathrm{P}_{\mathrm{MD}}(n)=D_{\mathrm{KL}}(p _{0}\,\|\,p_{1}), \tag{30}\]
where \(p_{0}\) and \(p_{1}\) represent the relevant observation distributions [33, SS8.3.2]. Thus, we concentrate on KLD computations and comparisons.
### _KLD Between Time-Resolved Measurement Distributions_
Let \(p\) and \(q\) denote the distributions under SE yields \(\eta_{0}\) and \(\eta_{1}\), with the random variables allowed to be implicit. In
anticipation of using the law of iterated expectation, we make the following simplification:
\[\mathrm{E}_{P}\Big{[}\log(p(\widetilde{M},\widetilde{T},\widetilde{ X})/q(\widetilde{M},\widetilde{T},\widetilde{X}))\,\big{|}\,\widetilde{M}= \widetilde{m}\,\Big{]}\] \[\stackrel{{(a)}}{{=}}\mathrm{E}_{P}\Big{[}\log(p( \widetilde{M},\widetilde{X})/q(\widetilde{M},\widetilde{X}))\,\big{|}\, \widetilde{M}=\widetilde{m}\,\Big{]}\] \[\stackrel{{(b)}}{{=}}\mathrm{E}_{P}\Bigg{[}\log \frac{p(\widetilde{M})}{q(\widetilde{M})}\,\Big{|}\,\widetilde{M}=\widetilde {m}\,\Bigg{]}+\sum_{i=1}^{\widetilde{m}}\mathrm{E}_{P}\Bigg{[}\log\frac{p( \widetilde{X}_{i})}{q(\widetilde{X}_{i})}\,\Bigg{]}\] \[\stackrel{{(c)}}{{=}}\log\frac{p(\widetilde{m})}{q( \widetilde{m})}+\widetilde{m}\,\mathrm{E}_{P}\Bigg{[}\log\frac{p(\widetilde{ X}_{i})}{q(\widetilde{X}_{i})}\Bigg{]}\,, \tag{31}\]
where (a) follows from the conditional distribution of each \(\widetilde{T}_{i}\) given \(\widetilde{M}\) being identical under \(p\) and \(q\); (b) from the conditional independence of \(\{\widetilde{X}_{1},\,\ldots,\,\widetilde{X}_{\widetilde{M}}\}\) given \(\widetilde{M}=\widetilde{m}\); and (c) from the \(\widetilde{X}_{i}\) distributions being identical. Now by taking the expected value of (31) and using the law of iterated expectation, we obtain
\[D_{\mathrm{KL}}(p(\widetilde{M},\widetilde{T},\widetilde{X})\, \|\,q(\widetilde{M},\widetilde{T},\widetilde{X}))\] \[=\mathrm{E}_{P}\Big{[}\log(p(\widetilde{M},\widetilde{T}, \widetilde{X})/q(\widetilde{M},\widetilde{T},\widetilde{X}))\,\Big{]}\] \[=D_{\mathrm{KL}}(p(\widetilde{M})\,\|\,q(\widetilde{M}))+ \mathrm{E}[\,\widetilde{M}\,|D_{\mathrm{KL}}(p(\widetilde{X}_{i})\,\|\,q( \widetilde{X}_{i})). \tag{32}\]
Recall the \(\widetilde{M}\) distribution is given in (2). Thus, we can apply (21) with \(\nu_{i}=\lambda(1-e^{-\eta_{i}})\) to obtain
\[D_{\mathrm{KL}}(p(\widetilde{M})\,\|\,q(\widetilde{M}))\] \[=\lambda(e^{-\eta_{0}}-e^{-\eta_{1}})+\lambda(1-e^{-\eta_{0}}) \log\frac{1-e^{-\eta_{0}}}{1-e^{-\eta_{1}}}. \tag{33}\]
Recall also the \(\widetilde{X}_{i}\) distribution is given in (3). Thus, we have the log PMF ratio
\[\log\frac{p(x)}{q(x)} =\log\frac{e^{-\eta_{0}}\eta_{0}^{x}/[(1-e^{-\eta_{0}})x!]}{e^{- \eta_{1}}\eta_{1}^{x}/[(1-e^{-\eta_{1}})x!]}\] \[=\eta_{1}-\eta_{0}+\log\frac{1-e^{-\eta_{1}}}{1-e^{-\eta_{0}}}+x \log\!\left(\frac{\eta_{0}}{\eta_{1}}\right). \tag{34}\]
Taking the expectation under distribution \(p\) and using (4) gives
\[D_{\mathrm{KL}}(p(\widetilde{X}_{i})\,\|\,q(\widetilde{X}_{i}))\] \[=\eta_{1}-\eta_{0}+\log\frac{1-e^{-\eta_{1}}}{1-e^{-\eta_{0}}}+ \frac{\eta_{0}}{1-e^{-\eta_{0}}}\log\!\left(\frac{\eta_{0}}{\eta_{1}}\right). \tag{35}\]
Finally, substituting \(\mathrm{E}_{P}[\,\widetilde{M}\,]=\lambda(1-e^{-\eta_{0}})\), (33), and (35) into (32) gives
\[D_{\mathrm{KL}}(p(\widetilde{M},\widetilde{T},\widetilde{X})\, \|\,q(\widetilde{M},\widetilde{T},\widetilde{X}))\] \[=\lambda(e^{-\eta_{0}}-e^{-\eta_{1}})+\lambda(1-e^{-\eta_{0}})( \eta_{1}-\eta_{0})\] \[\qquad+\lambda\eta_{0}\log(\eta_{0}/\eta_{1}). \tag{36}\]
Notice that the KLD (36) matches the asymptote (29) of the low-\(\lambda\) approximation to \(Y\). This is analogous to a previous known result for FI about \(\eta\) normalized by \(\lambda\)[16, Sect. III-V]: the normalized FI of a continuous-time TR measurement matches the low-\(\lambda\) asymptote for a conventional measurement. The KLD and FI results have an intuitive rationale. When the dose \(\lambda\) is very small, the probability of more than one incident ion is negligible, and with zero or one incident ion, the conventional and TR measurements are identical. A single low-\(\lambda\) conventional measurement is not informative enough for useful detection or estimation. The analyses of KLD and FI indicate that TR measurements achieve the best possible informativeness per incident particle, but uniformly over \(\lambda\).
### _Comparisons of Error Exponents_
The Neyman Type A PMF (6) is not amenable to meaningful expressions for KLD. If series truncation is handled with care and overflow and underflow are avoided, one can numerically evaluate \(D_{\mathrm{KL}}(\mathrm{Neyman}(\lambda,\eta_{0})\,\|\,\mathrm{Neyman}( \lambda,\eta_{1}))\).
Figure 2 provides a few examples. In black is the normalized KLD \(D_{\mathrm{KL}}(\mathrm{Neyman}(\lambda,\eta_{0})\,\|\,\mathrm{Neyman}( \lambda,\eta_{1}))/\lambda\). The normalized KLD is also shown for the deterministic beam (Poisson approximation) (22); the high-\(\lambda\) (Gaussian approximation) (24) and its asymptote (25); and the low-\(\lambda\) (zero-inflated Poisson approximation) (28) and its asymptote (29). The normalized KLD with TR measurement _equals_ the low-\(\lambda\) asymptote.
The most important observation is that the KLD with TR measurement is always greater than with conventional measurement. This translates to a larger error exponent and hence a lower missed detection rate when false alarm rate is held constant.
The performance gap can be arbitrarily large. Figure 3 compares error exponents over ranges of \(\eta_{1}\) values for \(\eta_{0}=2\) and \(\eta_{0}=4\). The gap is increasing with \(|\eta_{1}-\eta_{0}|\). This can also be predicted by comparing (36) with (25).
Having established that the error exponent (36) (matching (29)) is achievable, it is interesting to make additional comparisons. By comparing (22) and (36), we see that TR measurement approaches the performance with a deterministic beam when \(\eta_{0}\) and \(\eta_{1}\) both grow without bound. In Figures 2 and 3, the gap is smaller when \(\eta_{0}\) and \(\eta_{1}\) are large. An interpretation is that when the SE yield is large, CT measurement allows almost perfect knowledge of the number of incident ions \(M\); evidently, the fact that \(M\) is nevertheless random has no impact on the decision between \(\eta_{0}\) and \(\eta_{1}\).
## IV SE Yield Estimation with SE Counting
In this section, we consider the estimation of SE yield \(\eta\) under the idealized model of PBM in which SE counts are available as described in Section II. This was also a central problem of [16]. Here, we provide a new interpretation of a decomposition of the Fisher information, which we will relate to estimation with degraded observations in Section V. We also introduce a new estimator and provide insight on the estimators analyzed in [16].
### _Fisher Information in Time-Resolved Measurements_
The Fisher information about \(\eta\) in the time resolved measurement (1) with \(\lambda\) as a known parameter was derived in [16]. Normalized by \(\lambda\), it is
\[\frac{1}{\lambda}\mathcal{I}_{\widetilde{M},\widetilde{T},\widetilde{X}}(\eta; \lambda)=\frac{1}{\eta}-e^{-\eta}, \tag{37}\]
which is the same as the low-\(\lambda\) limit of the Fisher information in (18), showing that the time-resolved measurement achieves the gain in Fisher information at low \(\lambda\).
We can gain further insight into the Fisher information by considering the contributions to it. As was in intermediate step in deriving (37) in [16],
\[\mathcal{I}_{\widetilde{M},\widetilde{T},\widetilde{X}}(\eta;\lambda)=\mathcal{ I}_{\widetilde{M}}(\eta;\lambda)+\mathcal{I}_{\widetilde{X}|\widetilde{M}}( \eta;\lambda). \tag{38}\]
The first term is the information in the number of detection events \(\widetilde{M}\), and the second term is the information in the collection of SE counts \(\widetilde{X}\). Normalized by \(\lambda\), the component Fisher informations are given by
\[\frac{1}{\lambda}\mathcal{I}_{\widetilde{M}}(\eta;\lambda)=\frac{e^{-\eta}}{e ^{\eta}-1} \tag{39}\]
and
\[\frac{1}{\lambda}\mathcal{I}_{\widetilde{X}|\widetilde{M}}(\eta;\lambda)= \frac{\eta+1}{\eta}-\frac{1}{1-e^{-\eta}}. \tag{40}\]
Figure 4 compares these two contributions. We see that at low values of \(\eta\), \(\mathcal{I}_{\widetilde{M}}\) dominates \(\mathcal{I}_{\widetilde{X}|\widetilde{M}}\), and vice-versa at higher values of \(\eta\). At low \(\eta\) the probability of there being more than 1 SE in a single detection event is low, which results in most \(\widetilde{X}_{i}\)'s being 1. Therefore, most of the information about \(\eta\) is carried by \(\widetilde{M}\). Conversely, at higher \(\eta\), \(\widetilde{M}\approx M\) since almost all incident particles lead to at least one detected SE. Therefore, information about \(\eta\) is carried almost entirely by \(\widetilde{X}\).
The observation that information about \(\eta\) is available in just the number of detection events means that we can hope to estimate \(\eta\) even when the measurement is saturated, _i.e._, no distinction is made between different numbers of SEs per detection. This ability to estimate \(\eta\)
Fig. 3: Comparison of error exponents for hypothesis test between \(\eta_{0}\) and \(\eta_{1}\) where \(\eta_{0}\) is fixed and \(\eta_{1}\) is varied. With conventional measurement, the error exponent is \(D_{\mathrm{KL}}(\mathrm{Neyman}(\lambda,\eta_{0})\,\|\,\mathrm{Neyman}( \lambda,\eta_{1}))\). With time-resolved measurement, the error exponent (36) is significantly larger (superior) and close to the error exponent (22) one would obtained with a deterministic source beam.
Fig. 2: Computations of normalized KL divergence \(D_{\mathrm{KL}}(\mathrm{Neyman}(\lambda,\eta_{0})\,\|\,\mathrm{Neyman}( \lambda,\eta_{1}))/\lambda\) and approximations and asymptotes derived herein. Magenta (dashed): deterministic beam (22). Red: high-\(\lambda\) approximation (24) and its asymptote (25). Blue: low-\(\lambda\) approximation (28) and its asymptote (29). The normalized KL divergence with TR measurement matches the low-\(\lambda\) asymptote for all values of \(\lambda\).
number of SEs in each event becomes important when the number of SEs is uncertain. In Section V, we will introduce an \(\widetilde{M}\)-based estimator for \(\eta\), and in Section VI, we will use it to estimate model parameters in a realistic PBM scenario, where noise from the SE detection chain prevents a clear distinction between the signal produced by different numbers of SEs.
### _SE Yield Estimation_
Prior work [16] introduces the following estimators for \(\eta\) that can be computed from idealized TR observations:
* _Conventional estimator_: The conventional measurement (5), scaled by the dose \(\lambda\): \[\widehat{\eta}_{\mathrm{conv}}=\frac{Y}{\lambda}.\] (41)
* _Oracle estimator_: This estimator uses the count of incident ions \(M\) to improve upon \(\widehat{\eta}_{\mathrm{conv}}\): \[\widehat{\eta}_{\mathrm{oracle}}=\frac{Y}{M}.\] (42) Although this estimator is clearly the best possible estimate of \(\eta\), it cannot be implemented in practice since \(M\) is unobservable. However, \(\widehat{\eta}_{\mathrm{oracle}}\) gives useful lower bounds on the performance of TR estimators.
* _Quotient mode (QM) estimator_: \[\widehat{\eta}_{\mathrm{QM}}(\widetilde{M},Y)=\left\{\begin{array}{cc}0,& \widetilde{M}=0;\\ Y/\widetilde{M},&\widetilde{M}>0.\end{array}\right.\] (43)
* _Lambert quotient mode (LQM) estimator_: The unique root of \[\widehat{\eta}=\frac{Y}{(1-e^{-\widehat{\eta}})^{-1}\widetilde{M}},\] (44a) which is \[\widehat{\eta}_{\mathrm{LQM}}=W(-\widehat{\eta}_{\mathrm{QM}}e^{-\widehat{ \eta}_{\mathrm{QM}}})+\widehat{\eta}_{\mathrm{QM}},\] (44b) where \(W(\cdot)\) is the Lambert W function [34].
* _Maximum likelihood (ML) estimator_: The unique root of \[\widehat{\eta}_{\mathrm{ML}}=\frac{Y}{\widetilde{M}+\lambda e^{-\widehat{ \eta}_{\mathrm{ML}}}}.\] (45)
In [16], the superior performance of the TR estimators compared to the conventional estimator was demonstrated. Before introducing a new estimator, we reinterpret these estimators, which provides some new insight into their relative performances.
#### Iv-B1 LQM estimator as a mismatched ML estimate
In [16], the LQM estimator has a heuristic justification detailed below in Section IV-B2. We show in this subsection that it also arises as an ML-like estimate of \(\eta\) using a likelihood expression that omits the distribution of \(\widetilde{M}\).
Suppose that \(\widetilde{M}=\widetilde{m}\) and \(\widetilde{X}=(\widetilde{x}_{1},\widetilde{x}_{2},\ldots,\widetilde{x}_{ \widetilde{m}})\) are observed. Using the distribution of \(\widetilde{X}_{i}\) from (3), the conditional likelihood of the observation given \(\widetilde{M}=\widetilde{m}\) is
\[\prod_{i=1}^{\widetilde{m}}\mathrm{P}_{\widetilde{X}_{i}}(\widetilde{x}_{i}\, ;\,\eta)=\left(\frac{e^{-\eta}}{1-e^{-\eta}}\right)^{\widetilde{m}}\frac{ \eta_{i+\widetilde{x}_{2}+\cdots+\widetilde{x}_{\widetilde{m}}}^{\widetilde{ m}}}{\widetilde{x}_{1}!\,\widetilde{x}_{2}!\,\cdots\,j_{\widetilde{m}}!}. \tag{46}\]
By dropping factors that do not depend on \(\eta\), the ML-like estimate based on (46) is
\[\arg\max_{\eta}\left(\frac{e^{-\eta}}{1-e^{-\eta}}\right)^{\widetilde{m}}\eta ^{y}, \tag{47}\]
where \(y=\widetilde{x}_{1}+\widetilde{x}_{2}+\cdots+\widetilde{x}_{\widetilde{m}}\). The unique maximizer satisfies
\[\eta=\frac{Y}{\widetilde{M}(1-e^{-\eta})^{-1}}, \tag{48}\]
which is the LQM estimator.
Note that (46) is not the likelihood of the observation \((\widetilde{m},\widetilde{x})\) because it omits the factor \(\mathrm{P}_{\widetilde{M}}(\widetilde{m}\,;\,\eta)\). Viewing the LQM estimator as one that ignores the information about \(\eta\) present in \(\widetilde{M}\) is consistent with its generally worse performance than the ML estimator. It is also consistent with relatively poor performance for low \(\eta\), since Figure 4 shows that \(\widetilde{M}\) contains much more information than \(\widetilde{X}\) for low \(\eta\).
#### Iv-B2 Estimation of \(M\)
Suppose that the number of incident ions is some known positive number \(m\). Then \(Y\sim\mathrm{Poisson}(m\eta)\), and \(\widehat{\eta}=Y/m\) is plainly the good estimator. It is unbiased, efficient, and the ML estimate. The number of incident ions \(M\) is not directly observed, and any information about \(M\) is contained in \(\widetilde{M}\); conditioned on \(\widetilde{M}\), the distributions of \(\widetilde{T}\) and \(\widetilde{X}\) are unrelated to \(M\).
In [16], the QM estimator is introduced based on plugging in \(\widetilde{M}\) for \(M\), and the LQM estimator is introduced based on \((1-e^{-\widehat{\eta}})^{-1}\widetilde{M}\) being an ad hoc improved estimate of \(M\). Specifically, since \(\widetilde{M}\sim\mathrm{binomial}(M,1-e^{-\eta})\), it follows that \(\mathrm{E}\big{[}\,\widetilde{M}\,|\,M\,\big{]}=(1-e^{-\eta})M\). However, this does not imply that \(\mathrm{E}\big{[}\,M\,|\,\widetilde{M}\,\big{]}=(1-e^{-\eta})^{-1}\widetilde{M}\).
In fact, there is a simple expression for \(\mathrm{E}\big{[}\,M\,|\,\widetilde{M}\,\big{]}\). Using that the conditional distribution of \(\widetilde{M}\) given \(M\) is binomial and the distribution of \(M\) is \(\mathrm{Poisson}(\lambda)\), the conditional distribution of \(M\) given \(\widetilde{M}\) can be determined with Bayes's rule to be
\[\mathrm{P}_{M|\widetilde{M}}(m\,|\,\widetilde{m}\,;\,\eta,\lambda)=\frac{\exp (-\lambda e^{-\eta})(\lambda e^{-\eta})^{m-\widetilde{m}}}{(m-\widetilde{m})!}, \tag{49}\]
\(m=\widetilde{m},\,\widetilde{m}+1,\,\ldots\). This is a \(\mathrm{Poisson}(\lambda e^{-\eta})\) distribution shifted by \(\widetilde{m}\), so
\[\mathrm{E}[\,M\,|\,\widetilde{M}\,]=\widetilde{M}+\lambda e^{-\eta}. \tag{50}\]
Using this as a proxy for \(M\) gives the ML estimator (45), which in [16] is derived from maximization of the likelihood. Putting the estimators (43)-(45) in a single family with different proxies for \(M\) explains the generally (but not uniformly) best performance of \(\widehat{\eta}_{\mathrm{ML}}\) and worst performance of \(\widehat{\eta}_{\mathrm{QM}}\).
#### Iv-B3 Conditional expectation estimator for \(\eta\)
We can also use the conditional distribution of \(M\) given \(\widetilde{M}\) to develop a new estimator for \(\eta\). We have asserted that the oracle estimator \(Y/M\) is a good estimate of \(\eta\). Upon observing \(Y\) and \(\widetilde{M}\), we can compute
\[\widehat{\eta}_{\mathrm{CE}}(y,\widetilde{m}) =\mathbb{E}\Big{[}\,Y/M\,|\,Y=y,\,\widetilde{M}=\widetilde{m}\, \Big{]}\] \[=y\,\mathbb{E}\bigg{[}\,\frac{1}{M}\,\Big{|}\,\widetilde{M}= \widetilde{m}\,\bigg{]}\,. \tag{51}\]
The conditional expectation is under the conditional PMF (49), yielding
\[\widehat{\eta}_{\mathrm{CE}}(y,\widetilde{m})=y\exp(-\lambda e^{-\widehat{ \eta}_{\mathrm{CE}}})\sum_{\ell=0}^{\infty}\frac{1}{\ell+\widetilde{m}}\frac{( \lambda e^{-\widehat{\eta}_{\mathrm{CE}}})^{\ell}}{\ell!}, \tag{52}\]
which can be solved through a suitable root-finding algorithm.
Figure 5(a) compares the bias of \(\widehat{\eta}_{\mathrm{CE}}\) with that of \(\widehat{\eta}_{\mathrm{ML}}\), obtained using a Monte Carlo simulation for \(\lambda=20\). We see that the magnitude of the bias of \(\widehat{\eta}_{\mathrm{CE}}\) is lower than that of \(\widehat{\eta}_{\mathrm{ML}}\) over a wide range of \(\eta\). The variances of the two estimators are almost identical so they are not plotted. In Figure 5(b), we plot the ratio of the root mean-squared error (RMSE) of \(\widehat{\eta}_{\mathrm{ML}}\) to that of \(\widehat{\eta}_{\mathrm{CE}}\).
## V SE Yield Estimation from Saturated SE counts
As discussed in Section IV-A, \(\widetilde{M}\) contains information about \(\eta\), and we can form an estimator for \(\eta\) from just \(\widetilde{M}\). Such an estimator would treat detections as binary or saturated, since it would only consider their presence (\(\widetilde{X}_{i}\gtrsim 1\)) or absence without reference to the exact number of SEs \(\widetilde{X}_{i}\) in a detection event.
Recall from (2) that the number of incident particles that result in at least one detected SE is given by \(\widetilde{M}\sim\mathrm{Poisson}(\lambda(1-e^{-\eta}))\). Since \(\lambda\) is known, an estimator for \(\eta\) would be equivalent to estimating the mean of this Poisson distribution, the ML estimate of which is the observation \(\widetilde{m}\). When \(\widetilde{M}<\lambda\), we get
\[\widehat{\eta}=-\log\!\left(1-\frac{\widetilde{M}}{\lambda}\right)\]
as the ML estimate of \(\eta\) from \(\widetilde{M}\); when \(\widetilde{M}\geq\lambda\), the likelihood is an increasing function of \(\eta\), suggesting \(\widehat{\eta}=\infty\). Therefore, we define the estimator
\[\widehat{\eta}_{\widetilde{M}}=\left\{\begin{aligned} &-\log\!\left(1- \widetilde{M}/\lambda\right),&\widetilde{M}<\lambda;\\ &\eta_{\max},&\widetilde{M}\geq\lambda,\end{aligned}\right. \tag{53}\]
where \(\eta_{\max}\) is some fixed value such as the largest plausible SE yield. In the absence of an _a priori_ range for \(\eta\), one could set \(\eta_{\max}\) to be the largest possible value returned when \(\widetilde{M}<\lambda\):
\[\eta_{\max}=-\log\!\left(1-\frac{\left\lceil\lambda\right\rceil-1}{\lambda} \right). \tag{54}\]
However, this expression has large jumps at integer values of \(\lambda\). The choice of
\[\eta_{\max}=-\log\!\left(1-\frac{\left\lceil\lambda\right\rceil-1}{\left\lceil \lambda\right\rceil}\right) \tag{55}\]
is more conservative.
For any fixed \(\eta\), the probability of \(\widetilde{M}\geq\lambda\) decreases with increasing \(\lambda\); for any fixed \(\lambda\), the probability of \(\widetilde{M}\geq\lambda\) decreases with decreasing \(\eta\). These trends are illustrated in Figure 6. Typical values of \(\lambda\) for imaging range from \(\sim 10\) to \(\sim 100\). However, as described in Section VI-C, much larger values may arise in calibration.
The performance of the estimator is shown in Figure 7 for \(\lambda=100\) and \(\lambda=100\,000\), where \(\eta_{\max}\) is set using (55) and \(\eta\in[\frac{1}{10},\,10]\). Bias and variance are separated, and we can see that the RMSE is dominated by variance at low \(\eta\) and by bias
Fig. 5: Numerical comparison of the \(\widehat{\eta}_{\mathrm{CE}}\) estimator (51) with the ML estimator \(\widehat{\eta}_{\mathrm{ML}}\) (45). \(\widehat{\eta}_{\mathrm{CE}}\) has a lower magnitude of bias than \(\widehat{\eta}_{\mathrm{ML}}\). The RMSE ratio is close to 1 for the whole range of \(\eta\).
at high \(\eta\). The variance levels off around half of \(\eta_{\max}\). Kinks in the absolute bias curves are due to the bias changing sign from positive for low \(\eta\) to negative for high \(\eta\).
From our analysis of the Fisher information about \(\eta\) in \(\widehat{M}\) in Figure 4, we expect that this estimator gets worse as \(\eta\) increases, which is indeed what we observe. In the next section, we will use \(\widehat{\eta}_{\widetilde{M}}\) to estimate the parameters for a PBM model that includes uncertainty in SE number introduced by the SE detection chain.
## VI SE Yield Estimation from SE Detector Voltages
As described in Section II-A, in a real particle beam microscope, direct counts of secondary electrons are usually not available. Instead, as depicted in Figure 1(e), the output of the SE detector is a series of voltage pulses. Assuming that the conversion of SE number to voltage pulse is linear, we expect that the average height of each pulse is proportional to the count of SEs incident on the detector. When \(\eta\) is low, the probability that an incident particle generated multiple SEs is low, and a count of pulses can be used to estimate the true SE count. This scenario is true in SEM, and pulse counting has been used to implement SE count imaging in SEM [35, 36, 37, 38, 39, 40]. However, if \(\eta\) is higher, as in HIM, excitation of multiple SEs by a single incident particle becomes more likely. Therefore, more sophisticated modelling is needed to estimate \(\eta\).
In this section, we will describe a probabilistic model for the observed SE voltage signal and analyze how the Fisher information about \(\eta\) varies with model parameters. We will also discuss how the model parameters may be estimated. Finally, we will discuss the performance of \(\eta\) estimators based on this model.
### _Pulse Height Model_
As in [15], we model the conversion of SE number to voltages with a _Poisson-Poisson-Gaussian_ (PPG) model. Each SE is assumed to produce a voltage described by a \(\mathcal{N}(c_{1},c_{2})\) random variable, where \(c_{1}\) is the mean voltage and \(c_{2}\) the variance. These contributions are assumed to be independent and additive, so \(j\) SEs produce a voltage with the \(\mathcal{N}(jc_{1},jc_{2})\) distribution. Thus, the probability density function for the voltage \(\widetilde{U}_{i}\) produced in the \(i\)th detection event is given by
\[f_{\widetilde{U}_{i}}(u\,;\,\eta,c_{1},c_{2}) =\sum_{j=1}^{\infty}\mathrm{P}_{\widetilde{X}}(j;\eta)f_{Z}(u\, ;\,j,c_{1},c_{2})\] \[=\sum_{j=1}^{\infty}\frac{e^{-\eta}}{1-e^{-\eta}}\frac{\eta^{j}} {j!}f_{Z}(u\,;\,j,c_{1},c_{2}), \tag{56}\]
where \(f_{Z}(u\,;\,j,c_{1},c_{2})\) is the PDF of a \(\mathcal{N}(jc_{1},jc_{2})\) random variable.
The heights of the detected SE pulses, along with the total number of pulses, form the time-resolved observation
\[\left\{\widetilde{M},\widetilde{T},\widetilde{U}\right\}=\left\{\widetilde{M },(\widetilde{T}_{1},\widetilde{T}_{2},...,\widetilde{T}_{\widetilde{M}}),( \widetilde{U}_{1},\widetilde{U}_{2},...,\widetilde{U}_{\widetilde{M}})\right\}. \tag{57}\]
Under this model, a conventional observation without time resolution is
\[V=\sum_{i=1}^{\widetilde{M}}\widetilde{U}_{i}. \tag{58}\]
Analogously to the discussion in Section II-C, conditioned on \(\widehat{M}\), there is no information about \(\eta\) in \(\widetilde{T}\).
### _Fisher Information_
We can evaluate the Fisher information numerically. Figure 8 is a plot of the Fisher information (normalized by \(\lambda\)) for the PPG model, for both conventional and time-resolved measurements, as a function of \(\sqrt{c_{2}}/c_{1}\), at \(\eta=3\). For the time-resolved case, when \(\sqrt{c_{2}}/c_{1}\leq 0.1\), the FI is nearly constant. At such low values of \(\sqrt{c_{2}}/c_{1}\), there is little overlap between the peaks in the probability density of \(\widetilde{U}_{i}\) produced by different numbers of SEs, resulting in near-perfect discrimination of SE counts. Thus, the FI reaches the marked asymptote, which is the FI with SE counting (37). Similarly, the FI for conventional measurement reaches the asymptote
Fig. 6: \(\widetilde{M}\)-based estimator for \(\eta\) fails when \(\mathrm{P}(\widetilde{M}\geq\lambda)\). This occurs with vanishing probability as (a) \(\lambda\) increases or as (b) \(\eta\) decreases.
(15). As \(\sqrt{c_{2}}/c_{1}\) increases, the FI degrades, reflecting the ambiguity in resolving the number of SEs due to overlap in the densities of different numbers of SEs. We note that although the FI from time-resolved measurements remains higher than that for conventional measurements for the whole range of \(\sqrt{c_{2}}/c_{1}\), the relative advantage of time-resolved measurements diminishes at higher values of \(\sqrt{c_{2}}/c_{1}\).
### _Estimating \(c_{1}\) and \(c_{2}\)_
The PPG model parameters, \(c_{1}\) and \(c_{2}\), are generally unknown for a given particle-beam microscope. The values of these parameters depend on the SE detector hardware settings, such as the gain in the dynode stages of the photomultiplier tube, the specifics of the pre-amplifier circuit, etc. The values of these settings are unavailable to the user. Therefore the model parameters cannot be directly computed.
Instead, we must estimate the parameters \(c_{1}\) and \(c_{2}\). We could conveniently do so if we image a sample with a well-characterized \(\eta\). In this case, we could find an ML estimate for \(c_{1}\) and \(c_{2}\) by maximizing the likelihood of the observed pulse heights under the PPG model in (56). Although standard values of \(\eta\) for different materials are widely available [26, 41], the precise value of \(\eta\) for a given sample depends on several factors such as the level of carbon contamination in the microscope vacuum chamber and surface oxidation, making estimation of \(c_{1}\) and \(c_{2}\) difficult. However, we can use \(\widehat{\eta}_{\widetilde{M}}\) given in (53) to estimate \(\eta\), since this estimator does not rely on the number of SEs (_i.e.,_ the heights of the detected voltage pulses), but only on the number of detected pulses. Therefore, it is not affected by the loss in Fisher information due to variance in pulse heights depicted in Figure 8. With this \(\eta\) estimate, we can construct ML estimates for \(c_{1}\) and \(c_{2}\).
To demonstrate this process, we imaged a uniform, featureless silicon chip on an HIM (Zeiss Orion) at a resolution of \(10^{5}\) pixels with a beam current of 0.1 \(\mathrm{pA}\) and a pixel dwell time of 10 \(\mathrm{\SIUnitSymbolMicro s}\), which corresponds to \(\lambda=6.25\). Figure 1(e) shows a snapshot of the voltage pulses detected from one pixel in the image. The average \(\widetilde{M}\) (per pixel) observed for
Fig. 7: Performance of \(\widehat{\eta}_{\widetilde{M}}\), the estimator (53) that uses the number of SE detection events without SE count information. Conditional curves give the indicated quantity conditioned on \(\widetilde{M}<\lambda\), which is when the ML estimate is finite. Unconditional curves give the indicated quantity including the effect of choosing \(\eta_{\max}\) according to (55).
Fig. 9: Estimation of PPG model parameters \(c_{1}\) and \(c_{2}\). The probability distribution with the ML estimates of the model parameters is plotted along with the observed pulse height histogram.
this sample was 4.95. Since the entire sample was treated as uniform, we used the sum of \(\widetilde{M}\) over all the pixels and the total \(\lambda\) over all the pixels in (53) to obtain \(\widehat{\eta}_{\widetilde{M}}=1.58\). Next, we used this estimate of \(\eta\) to construct ML estimates of \(c_{1}\) and \(c_{2}\). The resulting probability density function is shown in Figure 9, along with the distribution of pulse heights in the experimental data. The ML estimates of \(c_{1}\) and \(c_{2}\) obtained from this technique were \(c_{1}=0.19\)\(\mathrm{V}\) and \(c_{2}=0.0040\)\(\mathrm{V}^{2}\).
We end this section with a couple of observations about the estimation of \(c_{1}\) and \(c_{2}\) using \(\widehat{\eta}_{\widetilde{M}}\). First, although we can use \(\widehat{\eta}_{\widetilde{M}}\) in the estimation of \(c_{1}\) and \(c_{2}\), we will not use it to estimate \(\eta\) during imaging. As discussed in Section V and shown in Figure 7, for good accuracy this estimator requires the dose to be very high. Second, even though the model PDF shows a good fit with the experimental pulse height histogram, it predicts a significant density of pulses with heights near and below zero volts. This is clearly unphysical, and it points to a mismatch between the model and the experiment.
### _Accounting for Nonzero Pulse Widths_
An additional feature of the voltage pulses is a nonzero width in the time domain. Empirically, from experimental pulse sequences such as that in Figure 1(e), we found the pulses to be approximately Gaussian in shape with mean width of \(\tau=$160\,\mathrm{ns}$\). The nonzero widths raise the possibility of undercounting SE detection events due to overlap between detections from successive incident particles. To compensate for this undercounting, we introduce a correction factor \(\gamma_{\tau}(\lambda,\eta)\) such that
\[\widetilde{M}_{\mathrm{corrected}}=\frac{\widetilde{M}}{\gamma_{\tau}(\lambda, \eta)}. \tag{59}\]
This correction factor is obtained by integrating the exponential probability distribution of the SE interarrival times up to the mean pulse width \(\tau\). This gives us
\[\gamma_{\tau}(\lambda,\eta)=\exp\bigl{(}-\lambda(1-e^{-\eta})\,\tau\bigr{)}. \tag{60}\]
Some of the estimators described below use \(\widetilde{M}_{\mathrm{corrected}}\).
### _Estimators_
We now introduce \(\eta\) estimators suitable for the PPG model. We construct these estimators to be analogous to estimators using SE counts from Section IV-B. For comparison, we include a model for a typical instrument, a somewhat idealized conventional estimator, and two oracles.
#### Vi-E1 Conventional instrument
As described in Section II-A, a typical instrument forms an image by sampling the voltage output from the SE detector, adding up the samples for each pixel, and quantizing these summed values to obtain an 8-bit image. For the purpose of comparing with other \(\eta\) estimators, we define \(\widehat{\eta}_{\mathrm{CI}}\) based on emulating this process. For each incident particle, a pulse with height following (56) and width \(\tau\) is generated. The resulting waveform is sampled with period \(100\,\mathrm{ns}\) and summed to obtain an estimate. An additional factor is needed to be on the correct scale; we determine this scaling factor by matching the mean to the mean of the improved conventional estimate below.
#### Vi-E2 Improved conventional
Within the PPG model, \(V\) in (58) contains all the information acquirable without time resolution. Dividing by \(c_{1}\) puts \(V\) on the scale of SE yield \(\eta\), so analogously to (41) we define
\[\widehat{\eta}_{\mathrm{IC}}=\frac{V/c_{1}}{\lambda}. \tag{61}\]
#### Vi-E3 Ion count oracle
To contextualize the performance of the implementable estimators, we use the oracle from (42) along with an ion count oracle that assumes the true count of ions \(M\) is known:
\[\widehat{\eta}_{\mathrm{ICO}}=\frac{V/c_{1}}{M}. \tag{62}\]
#### Vi-E4 QM estimator
Analogous to the QM estimator in Section IV-B, we can use \(\widetilde{M}_{\mathrm{corrected}}\) as a proxy for \(M\). Then our estimate is the unique root of
\[\widehat{\eta}_{\mathrm{QM}}=\frac{V/c_{1}}{\widetilde{M}/\gamma_{\tau}( \lambda,\widehat{\eta}_{\mathrm{QM}})}. \tag{63}\]
#### Vi-E5 ML-inspired estimator
We can correct the bias in \(\widetilde{M}_{\mathrm{corrected}}\) analogously to the ML estimator in Section IV-B, noting that this is not a true ML estimator under our current model. The estimate is the unique root of
\[\widehat{\eta}_{\mathrm{MLI}}=\frac{V/c_{1}}{\widetilde{M}/\gamma_{\tau}( \lambda,\widehat{\eta}_{\mathrm{MLI}})+\lambda e^{-\widehat{\eta}_{\mathrm{MLI }}}}. \tag{64}\]
Figure 10 plots the bias, standard deviation, and RMSE of these estimators as functions of \(\eta\), using the values of \(c_{1}\) and \(c_{2}\) from Section VI-C and \(\lambda=20\). These values were calculated using a Monte Carlo simulation. The most important observation from this figure is that the TR estimators continue to outperform the conventional estimator over almost the entire range of \(\eta\) considered here. Similar to the results in [16], \(\widehat{\eta}_{\mathrm{QM}}\) has a high bias at low \(\eta\) due to significant underestimation of \(M\), but its RMSE is still lower than the conventional estimators for \(\eta>1.9\). The two conventional estimators, \(\widehat{\eta}_{\mathrm{CI}}\) and \(\widehat{\eta}_{\mathrm{IC}}\), have almost identical performance; however, without calculation of \(\widehat{\eta}_{\mathrm{IC}}\), the factor \(k\) in \(\widehat{\eta}_{\mathrm{CI}}\) would be unknown and the RMSE potentially larger. The ion count oracle \(\widehat{\eta}_{\mathrm{ICO}}\) forms an effective lower bound on the RMSE of the non-oracle estimators, and \(\widehat{\eta}_{\mathrm{QM}}\) and \(\widehat{\eta}_{\mathrm{MLI}}\) achieve performance close to \(\widehat{\eta}_{\mathrm{ICO}}\) for high \(\eta\). The RMSE of \(\widehat{\eta}_{\mathrm{oracle}}\) is lower than that of \(\widehat{\eta}_{\mathrm{ICO}}\) by a factor of \(\sim 1.4\), reflecting the loss in information about \(\eta\) from increased uncertainty in the SE counts.
## VII Conclusion
In this work, we have shown that TR measurements, where we measure the full vector of SE detections for every pixel, outperforms conventional scalar-valued PBM for detecting changes in \(\eta\) or estimating \(\eta\). We motivated TR measurements by quantifying a gain in Fisher information for estimation of \(\eta\) at low dose \(\lambda\), as well as increased error exponents for discrimination between two values of \(\eta\) using KLD. We also demonstrated that TR estimators outperform the conventional estimator for \(\eta\) both in the idealized scenario where direct counts of detected SEs are available (yielding the measurement vector \(\{\widetilde{M},\widetilde{T},\widetilde{X}\}=\{\widetilde{M},(\widetilde{T }_{1},\widetilde{T}_{2},\ldots,\widetilde{T}_{\widetilde{M}}),(\widetilde{X}_ {1},\widetilde{X}_{2},\ldots,\widetilde{X}_{\widetilde{M}})\}\)), as well as the more realistic scenario where noise from
the SE detection process makes direct SE counts inaccessible (making the measurement vector \(\{\widetilde{M},\widetilde{T},\widetilde{U}\}=\{\widetilde{M},(\widetilde{T}_{1}, \widetilde{T}_{2},\ldots,\widetilde{T}_{\widetilde{M}}),(\widetilde{U}_{1}, \widetilde{U}_{2},\ldots,\widetilde{U}_{\widetilde{M}})\}\)).
Our re-analysis of previously derived estimators for \(\eta\) led to new insights into their relative performances. We also developed two new estimators: the conditional expectation estimator using the conditional distribution of \(M\) given \(\widetilde{M}\); and an \(\widetilde{M}\)-based estimator that only uses the count of SE pulses. The latter estimator was particularly useful for deriving values for the PPG model parameters \(c_{1}\) and \(c_{2}\).
The estimator \(\widehat{\eta}_{\widetilde{M}}\) could also be used to calculate the PBM detector efficiency [25, 26]. For this application, a bulk sample with \(\eta\) known under the imaging conditions being used would need to be imaged (or, alternatively, \(\eta\) of the bulk sample could be measured in the PBM using a standard technique [41]). Then, the ratio of \(\widehat{\eta}_{\widetilde{M}}\) and the true sample \(\eta\) would be the instrument's detective efficiency. A similar technique was used in [39] to calculate detector efficiency; that work was further simplified by the low value of \(\eta\), which allows detection efficiency to be estimated as the ratio of \(\widetilde{M}\) and \(\lambda\) times the known \(\eta\).
Successful implementation of the \(\eta\) estimators described in Section VI-E depends on the accuracy of the PPG model in representing the instrument response, as well as accurate estimation of model parameters. As discussed previously, the PPG model leaves open the unphysical possibility of negative SE voltage pulse heights. This issue could be resolved with a heuristic approach, such as zero-truncation or folding of the probability density function. Alternatively, a pulse height histogram such as the one in Figure 9 could be acquired at a low \(\eta\), such that the probability of more than one SE being detected from a given pixel is sufficiently small. Such a histogram could then be used as the empirical single-SE instrument response.
The loss in Fisher information with increasing uncertainty in the count of SEs, as demonstrated in Figure 8, points to the potential benefits of hardware SE counting in PBM. Although direct counting of SEs is available in transmission-based PBM techniques such as transmission electron microscopy [42, 43, 44], it has not been explored in SE-based techniques. Its implementation in SEM and HIM could lead to large improvements in achievable image quality.
## Acknowledgements
The authors acknowledge Dr. Leila Kasaei, Dr. Hussein Hijazi, and Prof. Leonard Feldman from the Department of Physics, Rutgers University for enabling the acquisition of the experimental data used in this work, as well as fruitful discussions on its interpretation.
|
2305.16852 | Model-Based Simulation for Optimising Smart Reply | Smart Reply (SR) systems present a user with a set of replies, of which one
can be selected in place of having to type out a response. To perform well at
this task, a system should be able to effectively present the user with a
diverse set of options, to maximise the chance that at least one of them
conveys the user's desired response. This is a significant challenge, due to
the lack of datasets containing sets of responses to learn from. Resultantly,
previous work has focused largely on post-hoc diversification, rather than
explicitly learning to predict sets of responses. Motivated by this problem, we
present a novel method SimSR, that employs model-based simulation to discover
high-value response sets, through simulating possible user responses with a
learned world model. Unlike previous approaches, this allows our method to
directly optimise the end-goal of SR--maximising the relevance of at least one
of the predicted replies. Empirically on two public datasets, when compared to
SoTA baselines, our method achieves up to 21% and 18% improvement in ROUGE
score and Self-ROUGE score respectively. | Benjamin Towle, Ke Zhou | 2023-05-26T12:04:33Z | http://arxiv.org/abs/2305.16852v1 | # Model-Based Simulation for Optimising Smart Reply
###### Abstract
Smart Reply (SR) systems present a user with a set of replies, of which one can be selected in place of having to type out a response. To perform well at this task, a system should be able to effectively present the user with a diverse set of options, to maximise the chance that at least one of them conveys the user's desired response. This is a significant challenge, due to the lack of datasets containing sets of responses to learn from. Resultantly, previous work has focused largely on post-hoc diversification, rather than explicitly learning to predict sets of responses. Motivated by this problem, we present a novel method SimSR, that employs model-based simulation to discover high-value response sets, through simulating possible user responses with a learned world model. Unlike previous approaches, this allows our method to directly optimise the end-goal of SR-maximising the relevance of at least one of the predicted replies. Empirically on two public datasets, when compared to SoTA baselines, our method achieves up to 21% and 18% improvement in ROUGE score and Self-ROUGE score respectively. 1
Footnote 1: This paper has been accepted to appear at ACL 2023.
## 1 Introduction
Automated response suggestion, or Smart Reply (SR), is rapidly becoming a staple feature of many email and chat systems such as Gmail, Skype, Outlook, Microsoft Teams, LinkedIn and Facebook Messenger. Given a message, SR systems present the user with a selection of possible responses, e.g. How are you? \(\rightarrow\) {I'm good; I'm ok; Not great}, which they can click in place of having to type out a reply. With the growth of communication over smaller devices that are poorly suited for manual typing (Varcholik et al., 2012; Palin et al., 2019), such as smartphones and smart watches, SR is becoming an increasingly more important feature.
While early methods in SR incorporated sequence-to-sequence models (Kannan et al., 2016), the current mainstream approach favours _Matching models_ which separately encode the message and reply into a shared latent space and retrieve the nearest neighbour response (Deb et al., 2019; Zhang et al., 2021; Deb et al., 2021). This has advantages in a production context, as it enables the model to retrieve replies from a fixed response set, maintaining greater controllability of model outputs; further, the latent representations for the response set can be pre-computed prior to inference, enabling faster latency.
However, the naive approach of simply retrieving top-\(K\) highest-scoring candidates from the Matching model often fails to produce a sufficiently diverse set of reply options. For instance, in response to the message How are you?, if the first predicted response is I'm good, predicting I'm doing well as the second response provides limited incremental value, as it carries equivalent semantic meaning. By contrast, Not great would be more useful, as it captures an alternative semantic meaning a user might wish to convey. In summary, one must account for the _interdependencies_ between replies. Previous methods have sought to implicitly account for these interdependencies such as through clustering by intent/topic, learning latent variables or re-scoring replies to include inter-reply similarity (Kannan et al., 2016; Deb et al., 2019, 2021). However, these techniques face two limitations: (1) they require hard-coded trade-offs between message-reply relevance and inter-reply diversity; (2) jointly optimising these two metrics is only partially correlated with the end goal of SR-maximising the relevance _at least one_ of the predictions. Ideally, it would be more principled if the model could simply optimise over this end goal. In so doing, we hypothesise performance would improve, while a good amount of diversity should also naturally emerge, insofar as it
is correlated with performance on the task.
However, directly optimising this metric presents two problems: (1) the probability distribution over replies given messages is initially unknown; (2) we only have access to a _single_ reply for each message sampled from this distribution-i.e. the dataset of \(\langle\)message, reply\(\rangle\) pairs-which prevents simply learning to predict reply sets via supervised learning. To circumvent these problems, we introduce model-based simulation (MBS) to the SR setting as a possible avenue forward. MBS is a technique from reinforcement learning (Sutton and Barto, 2005) that allows an agent to choose what action to take by simulating the potential consequences of an action using a learned world model. We observe that the Matching model, given it is trained on a dataset of \(\langle\)message, reply\(\rangle\) pairs, can also operate as a world model. This allows us to estimate the expected relevance of any reply set, by running repeated simulations with the world model. Crucially, relevance here can be defined as the maximum similarity between the reply set and a response sampled from the world model, which replaces the reliance on hard-coded trade-offs between message-reply relevance and inter-reply similarity.
Concretely, our method-SimSR (Figure 1)-comprises an initial retrieval stage, followed by an iterative simulation stage. We first retrieve a shortlist of replies from a larger candidate pool, using a learned neural Matching model, conditioned on a given message. In parallel, we also retrieve a number of simulated replies using the same method. Next, for the simulation stage, we use a search module to select a reply set comprising three responses from the shortlist. Then, we use a valuation module, which computes the expected similarity between the simulated replies and the most similar response from the reply set. This can be computed through a simple marginalisation process, using the probabilities and corresponding simulated replies provided by the world model. This process of search and valuation is iterated until the search algorithm terminates, and finally returns the highest scoring reply set. Quantitatively, our experiments show consistent out-performance against existing SoTA methods across two relevant datasets-Reddit and PERSONA-CHAT-achieving up to 21% and 18% improvement in ROUGE score and Self-ROUGE score respectively. SimSR also runs at a comparable speed to other methods, because the simulation is highly parallelisable and the Matching model only needs to encode the message once for both its initial retrieval and world model roles. In summary, our key contributions are:
* We present model-based simulation as a novel paradigm for the Smart Reply task.
* We present SimSR, a novel method that employs model-based simulation with a learned world model.
* We demonstrate empirically the importance of taking into account reply interdependencies, achieving SoTA performance across the Reddit and PERSONA-CHAT datasets.
We make our code available for reproducibility.2
Figure 1: Overview of our approach. We combine a retrieval stage, which obtains the initial reply shortlist \(Y_{N}\), followed by a simulation stage, which iteratively searches for reply sets \(Y_{K}\) from that shortlist, and evaluates their relevance against a set of simulated replies \(Y_{M}\).
Related Work
Smart Reply.In industry, SR has a range of applications from email systems to instant messaging. Naturally, the data from these is not publicly available to train on. Instead, recent work has made use of publicly available dialogue datasets such as Reddit (Deb et al., 2021; Zhang et al., 2021), which is sufficiently similar given SR applications are principally concerned with dialogue. While the earliest SR systems used sequence-to-sequence models (Kannan et al., 2016), nowadays retrieval methods prevail which select a response from a pre-defined pool of candidates (Henderson et al., 2017), i.e. Matching models. By itself however, the Matching model has no way to ensure that the chosen reply set is sufficiently diverse. One approach to this is to ensure that no two responses in the reply set share the same topic/intent (Kannan et al., 2016; Chakravarthi and Pasternack, 2017; Weng et al., 2019). However, this becomes more difficult in an open-domain setting, where the range of topics/intents is difficult to pre-define. As a result, other approaches have focused on more fine-grained diversification through conditional variational autoencoder techniques, which learn topics/intents across a continuous latent space during training (Zhao et al., 2017; Deb et al., 2019). Maximum marginal relevance, which re-weights responses according to how similar they are with one another, has also been shown to work well (Carbonell and Goldstein-Stewart, 1998; Deb et al., 2019). Our method differs from these approaches in that they employ diversity in a post-hoc manner which does not directly optimise the end goal of SR-maximising the relevance of at least one of the predicted replies.
Simulation in NLP.In board games such as Go and chess, a model can have access to a perfect simulator, allowing it to explore various counterfactual trajectories before deciding what action to take next (Silver et al., 2017). In user-facing NLP applications, this is rarely possible. Therefore, much work has focused on settings such as self-play, in which a model learns to become better at a task such as negotiating (Lewis et al., 2017) or even open-domain dialogue (Li et al., 2016) through interacting with another copy of itself (or a version with frozen weights). User simulators are especially prevalent in task-oriented dialogue, where the domain is narrower and it is therefore easier to anticipate user behaviour (Li et al., 2016). A notable exception to the above cases is text-based games-scripted games involving interacting in a wholly text-based environment-which are typically trained with access to a perfect simulator, as the game engine allows for previous states to be restored (Jang et al., 2021). Our work is closest in spirit to those works that perform dialogue rollouts to select the next utterance using a reply prediction model (Lewis et al., 2017; Li et al., 2016)-i.e. the Matching model. However, in our case the rollouts only involve a single step look-ahead, while our action space is the set of possible reply sets, rather than individual utterances. Further, our method can be used out-of-the-box during inference, without any further retraining of the Matching model. So far as we are aware, our work is the first to apply this concept of simulation to the SR setting.
## 3 Framework
### Task Definition
Our task is to predict a set of \(K\) replies \(Y_{K}=\{y_{k}\}_{k=1}^{K}\) from a candidate pool \(Y_{R}\) of size \(R\), conditioned on a message \(x\). While in an online setting, the aim might be to maximise click-through rate (Deb et al., 2019), in an offline setting this can be approximated as maximising the similarity function \(f(y)\), given as the maximum similarity between \(Y_{K}\) and the ground truth response \(y\)(Zhang et al., 2021):
\[f(y)=\max_{k}[\{\mathrm{sim}(y,y_{k})\}_{k=1}^{K}] \tag{1}\]
### Matching Model
Following previous approaches, we use a Matching model as the backbone of our method (Henderson et al., 2017; Zhang et al., 2021). This comprises two parallel pre-trained transformer encoders \(\Phi\) (with shared weights) that _separately_ encode \(x\) and \(y\) into a shared latent space. This is obtained by taking the output hidden-state corresponding to the [CLS] token which is pre-pended to each of the inputs. We refer to the vector representations of the message and reply as \(\Phi(x)\) and \(\Phi(y)\) respectively, and their score \(g(x,y)=\Phi(x)\cdot\Phi(y)\). The model is trained using negative log-likelihood to maximise the joint probability of the context and reply:
\[p(x_{i},y_{i}){=}\frac{e^{g(x_{i},y_{i})}}{\sum_{y_{j}}e^{g(x_{i},y_{j})}+\sum _{x_{j}}e^{g(x_{j},y_{i})}-e^{g(x_{i},y_{i})}} \tag{2}\]
This is referred to as _symmetric loss_(Deb et al., 2019), and is known to impose tighter constraints on the relation between the message and reply, compared to having only a one-way classification loss function.
## 4 SimSR
For any given message \(x\), there is uncertainty about the response \(y\), which we assume to be sampled from some distribution \(Y\). This is commonly referred to as the one-to-many problem (Zhao et al., 2017; Towle and Zhou, 2022) and is due to several reasons, such as unknown facts about the user and their intent. For example, the reply to Can you meet for coffee at 2pm? is likely to be conditioned on factors such as the user's schedule or their interest in meeting, which is unknown to a vanilla SR system. As a result, Matching models that simply select the most likely individual replies only achieve a lower bound of potential performance. This can be represented by the following inequality:
\[E_{y\sim Y}[f(Y)]>=f(E_{y\sim Y}[Y]) \tag{3}\]
where \(f(Y)\) refers to the similarity function from Equation 1. The right hand side of Equation 3 represents what a Matching model approximates, while the left hand side is what we would like to obtain. Intuitively, this means that a good model should make predictions that capture the range of possible responses that could be sampled from \(Y\), rather than simply the single most likely response. To do this, we hypothesise it is important to develop a method that accounts for the interdependencies between replies, i.e. which can evaluate sets of replies, rather than only individually scoring replies.
Algorithm 1 and Figure 1 overview our method, which can be applied directly during inference. The Matching model first retrieves a shortlist of \(N\) replies from a pool of pre-computed candidates \(Y_{R}\) (Section 4.1). Then we combine a search module which selects and constructs reply tuples from this shortlist to evaluate (Section 4.4) and a valuation module (Section 4.3) which computes an expected score between a given reply set and a list of simulated replies (Section 4.2). Note that as our method does not require learning any new parameters, it can be applied to reply sets of arbitrary sizes during inference.
### Reply Shortlist
Given an overall candidate pool of size \(R\), the corollary action space of \(K\)-tuples is intractably large: \(\frac{R!}{K!(R-K)!}\). To mitigate this, we follow previous work (Deb et al., 2019) and first retrieve the top-\(N\) ranking replies conditioned on the message \(x\), using the Matching model, where \(N<<R\). We refer to this set as \(Y_{N}=\{y_{n}\}_{n=1}^{N}\). This defines the building blocks with which we can construct the action space of \(K\)-tuples of replies to perform our simulation on.
### Simulated Replies
We do not have access to the ground-truth data-generating distribution-i.e. \(p_{human}(y|x)\)-which would be required for planning in the actual environment. However, the Matching model can serve as an effective approximator of this distribution-henceforth, \(p_{model}(y|x)\)-since it was trained on \(\langle\)message,reply\(\rangle\) pairs sampled from the ground-truth distribution. Thus, using the same Matching model as above, we retrieve the top-\(M\) replies, also conditioned on the message \(x\), to obtain \(Y_{M}=\{y_{m}\}_{m=1}^{M}\). In practice, as we use the same model to retrieve both \(Y_{N}\) and \(Y_{M}\), this can be achieved with a single query of the response set-therefore, the impact on latency is kept to a minimum.
### Valuation
We define similarity between a \(K\)-tuple and the \(m\)-th simulated response \(y_{m}\in Y_{M}\) as:
\[h(y_{m},Y_{K})=\max_{k}\{\mathrm{sim}(y_{m},y_{k})\}_{k=1}^{K} \tag{4}\]
where \(\mathrm{sim}(\cdot,\cdot)\) is a similarity score. Intuitively, this rewards the model if at least one of the predictions is relevant to the user. We use term-level F1-score to represent similarity for simplicity, and leave alternative measures for future work. We obtain the expected similarity for a given \(K\)-tuple by marginalising over the scores for all \(y_{m}\in Y_{M}\):
\[E[h(y,Y_{k})]=\sum_{m}^{M}h(y_{m},Y_{K})\!\cdot\!p_{model}(y_{m}|x) \tag{5}\]
In practice, we found dividing the scores by a high temperature (\(\tau=10\)) (Hinton et al., 2015) before applying a softmax normalisation improved performance, as it encouraged the model to take into account a larger range of possible simulated responses.
### Search
Given our method for estimating the value of any given \(K\)-tuple, it is necessary to employ a search algorithm, to decide which tuples should be evaluated. In this work, we consider a selection of out-of-the-box and bespoke methods:
Exhaustive Search.A straightforward approach is to simply enumerate and evaluate all possible tuples. This is feasible because (a) \(N\) is typically a relatively small number (\(15\), in our experiments), (b) the computational cost for evaluating any given tuple is low, given it involves simply computing Equation 5 where the similarity function \(\text{sim}(\cdot,\cdot)\) only needs to be computed once for each \(y_{n}\),\(y_{m}\) pair.
Ablative Search.For larger values of \(N\), it is necessary to employ a more selective search strategy. We observe that the task of finding \(K\) replies from a shortlist of \(N\) replies can be treated partially as a clustering problem, where each reply in the \(K\)-tuple represents a cluster nucleoid, and the objective is to minimise some distance matrix. To this extent, we design a method that incrementally builds the reply set by iteratively removing (hence, _ablative_) the least useful reply from the shortlist \(N\), until only \(K\) replies remain. In detail, for each of the \((N-1)\)-tuples of \(Y_{N}\) we compute \(E[h(y,Y_{N-1})]\), such that \(Y_{N-1}^{*}\) is the \((N-1)\)-tuple that obtained the highest score. We then remove the sole reply \(y*\) from \(Y_{N}\) that is not present in \(Y_{N-1}^{*}\). Finally, we repeat this process for all of the \((N-2)\)-tuples of \(Y_{N-1}\) etc. until we are left with \(Y_{N-(N-K)}=Y_{K}\).
Greedy Search.A limitation of ablative search is that it requires a lot of non-parallelisable compute due to the iterative nature of the algorithm. We therefore consider a greedy alternative. In brief, instead of obtaining \(Y_{K}\) by whittling down \(Y_{N}\), we instead incrementally build up \(Y_{K}\) starting from the empty set. This thus requires only \(K\) non-parallelisable steps, rather than \(N-K\). In detail, let \(Y_{G}\) be the set of currently chosen replies, such that initially \(Y_{G}=\varnothing\). Then, for each reply \(y_{n}\in Y_{N}\) we compute the expected similarity for the union of \(Y_{G}\) and \(y_{n}\), i.e. \(E[h(y,Y_{G}\cup y_{n})]\). Next, we append the highest scoring \(y_{n}\) to \(Y_{G}\), and repeat until \(|Y_{G}|=K\).
Sample and Rank.Finally, we consider a simple sample and rank approach, which has been shown to work well in other NLP tasks such as dialogue Freitas et al. (2020). This involves randomly selecting a subset of all possible tuples, and evaluating them. Then, we return the tuple with the highest score according to Equation 5.
## 5 Experiments
We now turn our attention towards empirical testing of SimSR, addressing the following research questions:
* **RQ1:** How does the choice of search strategy impact relevance and diversity in SimSR? (Section 5.5)
* **RQ2:** How does SimSR compare to existing SoTA SR methods? (Section 5.6, 5.8)
* **RQ3:** How much does SimSR benefit from accounting for interdependencies between replies when selecting a reply set? (Section 5.7)
### Baselines
We identify four types of diversification strategies which serve as baselines against our model. The original implementations of these methods are typically proprietary and unavailable for direct comparison. Therefore, in the list below we summarise our re-implementations as well as key changes that were made versus the original.
Matchingis the base retrieval model discussed earlier (Section 3.2) Henderson et al. (2017); Zhang et al. (2021). It simply selects the top-\(K\) responses according to their individual scores without any additional components. Our version uses the DistilBERT model as a base Sanh et al. (2019), whereas previous methods used a variety of transformers Zhang et al. (2021) and recurrent neural networks Deb et al. (2019)-we follow this for all baselines.
Matching-Topicuses topic classification to ensure none of the top-\(K\) responses share the same topic Kannan et al. (2016); Chakravarthi and Pasternack (2017); Weng et al. (2019). We replace the classifier with an out-of-the-box classifier trained on Twitter Antypas et al. (2022), which features similarly short-form messages to those used in SR.
Maximum Marginal Relevance (MMR)re-weights responses according to how similar they are with one another, which is combined in a linear combination with their message-response score Deb et al. (2019). Our re-implementation is closer to the original algorithm Carbonell and Goldstein-Stewart (1998) in that we incrementally build the reply set, rather than in a single step-we found this performed better during early testing.
Mcvae(Deb et al., 2019) is a conditional variational autoencoder Zhao et al. (2017) built on top of the Matching model, allowing for multiple query vectors to be generated from a single message embedding. Candidates are scored using a voting process whereby each query vector selects the nearest reply, and the \(K\) most-selected replies are chosen. We re-implement this without any major changes from the original to the best of our knowledge, and use the original paper's hyperparameters, such as size of the latent variable, where possible.
### Datasets
We evaluate our methods across two datasets, summarised in Table 2. While most prior work has used proprietary datasets Kannan et al. (2016); Deb et al. (2019), we identify a single publicly available SR dataset-Reddit/MRS Zhang et al. (2021). We supplement this by also evaluating on PERSONACHAT Zhang et al. (2018), which similarly falls under the broader umbrella of open-domain dialogue. Below we provide further elaboration:
Redditor MRS Zhang et al. (2021) is, to the best of our knowledge, the only publicly available dataset created specifically for the SR setting. The dataset is multilingual, covering \(10\) languages and over \(50M\) message-reply pairs extracted from the social-media site Reddit. As our focus is only on the monolingual setting, we use only the English portion of the corpus. Further, due to limited computational resources we train and evaluate on only a small subset of the data (randomly selected).
Persona-Chat[Zhang et al., 2018] is a crowdworker-sourced dialogue dataset between pairs of speakers in which each speaker is assigned a brief persona comprising a few sentences, e.g. I have a dog. We simply concatenate this information to the message, following previous approaches Humeau et al. (2020). As it is an open-domain dialogue dataset, it covers a broad range of possible conversations, and therefore provides
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Search} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{PERSONA-CHAT} & \# Tuples \\ \cline{2-5} & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & Evaluated \(\downarrow\) \\ \hline Exhaustive & 2.47 & 2.49 & **7.85** & 8.60 & 455 \\ Ablative & 2.40 & **2.36** & 7.71 & **8.39** & 114 \\ Greedy & **2.49** & 2.77 & 7.82 & 9.76 & 42 \\ Sample-and-Rank & 2.39 & 2.79 & 7.39 & 12.27 & **25** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on the Reddit and PERSONA-CHAT Test sets under different search strategies for SimSR.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{PERSONA-CHAT} \\ \cline{2-7} & Train & Valid & Test & Train & Valid & Test \\ \hline \# Samples & 50k & 5k & 5k & 66k & 8k & 8k \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics for the datasets.
another useful benchmark of performance for an SR system, which are often deployed in similarly open-domain environments.
### Metrics
We use a weighted ROUGE Lin (2004) ensemble metric to evaluate performance, which is known to be well correlated with click-through rate in the SR setting Zhang et al. (2021). This consists of a mixture of 1/2/3-grams for ROUGE-F1:
\[\frac{\text{\sc{ROUGE-1}}}{6}+\frac{\text{\sc{ROUGE-2}}}{3}+\frac{\text{\sc{ROUGE -3}}}{2} \tag{6}\]
### Hyperparameters
We train our models using the Adam optimizer Kingma and Ba (2014) for \(3\) epochs, with an initial learning rate of \(5e-5\) and linear decay, and a batch size of \(8\). We truncate the message and response to the last \(64\) tokens each. We initialise our models from the DistilBERT checkpoint Sanh et al. (2019),3 which is a \(66M\) parameter transformer trained via knowledge distillation on BERT. During inference, we set \(K=3\) which is a standard number for SR Zhang et al. (2021). We also set the number of candidates initially retrieved by the Matching model \(N=15\), which previous work has shown provides a good trade-off between accuracy and latency Deb et al. (2019). For SimSR, we set the number of simulations \(M=25\). For both PERSONA-CHAT and Reddit we use the entire training set to retrieve from (i.e. \(Y_{R}\)). In early testing, we explored using heuristic techniques to create a more deduplicated candidate pool, but found limited benefit, and therefore opted for this simpler approach.
Footnote 3: [https://huggingface.co/distillbert-base-uncased](https://huggingface.co/distillbert-base-uncased)
During deployment, although SR systems produce multiple replies, only _one_ of them needs to be relevant. To replicate this, we only record the maximum ROUGE across the \(K=3\) replies outputted. We also report Self-ROUGE Celikyilmaz et al. (2020), which is an unreferenced metric that measures the diversity of the predicted replies. For each reply \(y_{k}\in Y_{K}\), we treat \(y_{k}\) as the prediction and the other two replies as the references, using the same ROUGE metric as above. Note that a lower Self-ROUGE indicates _more_ diversity.
### Choosing a Search Strategy
Table 1 shows the performance of SimSR under different search strategies. This is motivated by two sub-questions: (1) how robust is SimSR to the choice of search strategy? (2) What trade-offs are involved between relevance, diversity and efficiency?
Exhaustive search unsurprisingly performs the best both in terms of relevance and diversity, but is the least efficient and would not scale to larger values of \(N\). More interesting is the trade-off between relevance and diversity that occurs between the Ablative and Greedy methods. Greedy performs slightly better in relevance, perhaps suggesting that the longer sequences involved in the Ablative method leave more opportunity for errors to be propagated. However, Greedy performs significantly worse in diversity. While a high diversity is not always a good thing (e.g. random guessing would also have a high diversity), Ablative's diversity is much closer to that obtained by Exhaustive search. Sample and Rank consistently gave the worst results, suggesting randomly constructing tuples is insufficient for finding high-value tuples.
Overall, these results show that SimSR is reasonably robust to the choice of search strategy. Going forward, we opt to use Ablative search for subsequent experiments which provided arguably the best trade-off in terms of relevance, diversity and efficiency by a small margin.
### Main Results
Table 3A-B summarises our main results. Across both tasks, we find that additional filtering/diversification measures improve the diversity of the suggested replies, but provide only limited improvement to relevancy. We argue this reflects the fact the these methods often involve trading off relevance for diversity, such as MMR, which explicitly scores replies as a linear combination of their relevancy to the message and their similarity to other replies in the reply set. Similarly, whilst the out-of-the-box Topic classifier sometimes produced outputs that were more diverse than the other baselines, this came at the cost of reduced relevance, due to it being too coarse-grained-i.e. often a given message required multiple replies from the _same_ topic.
Contrastingly, we show our method is able to consistently improve on both relevancy and diversity for both tasks. On Reddit, relevancy im
proves by up to 14% and diversity by up to 21%; on PERSONA-CHAT, relevancy improves by 18% and diversity improves by 6%. All results are statistically significant on a t-test with _p_-value < 0.01. The main difference between the datasets is that PERSONA-CHAT is a less noisy dataset, being made by crowdworkers, and therefore both metrics are comparatively higher.
### Ablations
We consider the question of whether SimSR is simply learning to predict individual replies that have a high expected score, rather than learning to take advantage of interdependencies between replies. To this end, in Table 3C we present an ablation ('- Multi-Reply') that selects the top-\(K\) replies according to their _individual_ scores in simulation, without considering their scores at the _tuple_-level, i.e. \(\mathrm{TopK}(\{E[h(y,y_{n})]\}_{n=1}^{N})\). We also present a version without simulation at all as a baseline comparison, which is equivalent to the Matching model in Table 3A.
Results show that removing multi-reply significantly harms performance. Versus the baseline, there is no improvement on Reddit, while there are only limited gains on PERSONA-CHAT, suggesting most of the performance gains from SimSR are due to the ability to account for interdependencies within the reply set. We hypothesise the reason for the difference between the two datasets is because PERSONA-CHAT is a less noisy dataset, and therefore selecting individual replies with a high expected similarity may provide some benefit. Diversity is especially harmed, and even is significantly less diverse than the baseline. This is unsurprising, given maximising the similarity of each reply to the same set of simulated replies implicitly encourages responses to be similar.
### Case Study
Table 4 presents two case studies comparing the qualitative performance of SimSR versus a selection of baseline methods. In both case studies we see SimSR is able to represent three diverse intents across its predictions versus only one or two intents for the Matching and MMR models. In the left example, SimSR is crucially able to capture including both a positive and a negative intent, unlike the baselines. In the right example, SimSR successfully avoids duplicating the I'm glad intent. Note that in both cases it would be impractical to use heuristic measures to deduplicate the intents (e.g. removing replies with only 1 word edit distance) as there is often only partial term-level overlap between the utterances.
### Latency
Table 5 validates the limited latency impact of SimSR compared to the baseline methods. We used an NVIDIA GeForce RTX 3060 Ti GPU and CPU operations were conducted by an AMD Ryzen 7 5700G with Radeon Graphics. For the initial retrieval, we pre-compute the reply embeddings and store them in a FAISS index Johnson et al. (2017). Overall, we find SimSR is able to maintain comparable latency to other methods which incorporate post-hoc diversification methods such as MCVAE and MMR. The small latency difference for SimSR is mainly due to the iterative search and evaluation process not using any low-level optimisation in the code or multiprocessing. Topic is the slowest due to the additional inference cost of the Topic classifier.
## 6 Conclusion
In this work, we have presented a method for generating sets of replies for Smart Reply systems, using model-based simulation and a range
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multirow{2}{*}{Section} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Reddit} & \multicolumn{2}{c}{PERSONA-CHAT} \\ \cline{3-6} & & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) \\ \hline \multirow{4}{*}{(A) Baselines} & Matching & 2.04 & 6.92 & 6.61 & 12.44 \\ & Matching + Topic & 2.01 & 3.17 & 6.42 & 11.77 \\ & Matching + MMR & 2.17 & 5.19 & 6.66 & 10.76 \\ & MCVAE & 2.12 & 3.99 & 6.52 & 8.93 \\ \hline \hline \multicolumn{2}{l}{(B) Our Method} & SimSR & **2.40** & **2.36** & **7.71** & **8.39** \\ \hline \multirow{2}{*}{(C) Ablations} & - Multi-reply & 2.02 & 19.77 & 7.03 & 35.24 \\ & - Simulation & 2.04 & 6.92 & 6.61 & 12.44 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of SimSR (B) compared to baseline approaches (A) and ablations (C) on the Reddit and PERSONA-CHAT Test sets. All results are statistically significant on t-test with _p_-value < 0.01.
of search strategies to discover high-value reply sets, without the need for any additional training. Our method outperforms existing SoTA methods on both datasets tested, and we have supported our results by detailed analysis of the effect of different search strategies, demonstration of the importance of accounting for interdependencies between replies, and a detailed case study. Future work could consider whether it is possible to improve the quality of the initial retrieval (e.g. by training on sets of replies), or other methods for scoring response similarity during simulation.
## Acknowledgements
We thank the reviewers for their helpful feedback and suggestions during the reviewing process. This work is partly supported by the EPSRC DTP Studentship program. The opinions expressed in this paper are those of the authors, and are not necessarily shared or endorsed by their employers and/or sponsors.
## Limitations
While our approach is able to optimise over the retrieved shortlist of replies, it does not improve the initial retrieval from the candidate pool, which still scores individual candidates, rather than reply sets, using the Matching model. This is a limitation that is shared with prior baseline methods. A further limitation is that we only consider the monolingual setting, whereas many deployed SR applications have an international footprint. Learning a multilingual Matching model in SR is known to have additional challenges Deb et al. (2021). Another limitation is that our model is only tested on public dialogue datasets, due to actual conversations on platforms using SR being proprietary. Therefore, while our techniques should work well in the instant messaging setting, our methods have not been directly tested in the email setting.
## Ethical Considerations
As neural dialogue models have grown in expressive capabilities and fluency, ethical considerations are an increasingly prominent issue. Key considerations typically centre around model's tendencies (1) to produce information that is factually inaccurate Shuster et al. (2021) or (2) to repeat toxic/biased behaviour from the training data Xu et al. (2020). Compared to vanilla dialogue models, these risks are mitigated in SR: (1) SR is usually limited to short-form replies that express simple information, and is therefore less likely to lead to the kinds of hallucination seen in longer-form answers; (2) SR typically does not generate tokens
\begin{table}
\begin{tabular}{l|l} \hline \hline \multicolumn{1}{c|}{**PERSONA-CHAT**} & **Reddit** \\ \hline
**Message:** _So do you have any pets?_ & **Message:** _where? i’ve always wanted to be in one!_ \\ \hline \multicolumn{3}{c}{**Matching**} \\ \hline No, no pets. Do you have any & I’m so glad I’m not the only one. \\ No, no pets. You? & glad i’m not the only one.... \\ No, I do not have any pets. What are some things you like & Wait... They said I’ll be the the first... \\ \hline \multicolumn{3}{c}{**MMR**} \\ \hline I do not have any but I do want a dog & I will have one of everything, please. \\ No, no pets. You? & I’m so glad I’m not the only one. \\ No, no pets. Do you have any? & glad i’m not the only one.... \\ \hline \multicolumn{3}{c}{**SimSR**} \\ \hline No, I do not have any pets. & I’ll be there, too. Also my first time seeing them. Can’t wait. \\ Nope no pets at the moment. How are you? & Glad I wasn’t the only one \\ Yes I have 2 dogs. & ME TOO. We need to go find one. \\ \hline \hline \end{tabular}
\end{table}
Table 4: Examples of model outputs on the PERSONA-CHAT (left) and Reddit (right) Test sets. SimSR produces replies that capture multiple possible user intents, while the other approaches capture a more limited range of intents.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Latency (ms) \\ \hline Matching & 23.3 \\ Matching + Topic & 45.5 \\ Matching + MMR & 24.5 \\ MCVAE & 25.9 \\ \hline SimSR & 29.9 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Latency of SimSR compared to baseline approaches on the Reddit Validation set.
sequentially, but retrieves responses from a pool of candidates, which can be vetted in advance. Note however, this does not prevent replies that are contextually inappropriate when paired with a particular message, e.g. Do you hate people? \(\rightarrow\) Yes, I do. The human-in-the-loop, who must ultimately choose and be accountable for whether or not to select one of the suggested replies, can be seen as a risk mitigant compared to vanilla chatbots. Conversely however, Wenker (2023) identify risks pertaining to a loss of human agency, such as due to a user selecting a sub-optimal reply to save time or being primed by the replies. This could lead to people being more trusting of an SR-generated reply versus receiving a reply from a chatbot, due to the belief that a human ultimately is behind it. We also only experimented with datasets that were released by previous studies, which are publicly available. These datasets (especially Reddit) often contain toxic/biased behaviour which developers should bear in mind if using this system in a deployment context.
|
2310.12882 | Sequential Gibbs Posteriors with Applications to Principal Component
Analysis | Gibbs posteriors are proportional to a prior distribution multiplied by an
exponentiated loss function, with a key tuning parameter weighting information
in the loss relative to the prior and providing a control of posterior
uncertainty. Gibbs posteriors provide a principled framework for
likelihood-free Bayesian inference, but in many situations, including a single
tuning parameter inevitably leads to poor uncertainty quantification. In
particular, regardless of the value of the parameter, credible regions have far
from the nominal frequentist coverage even in large samples. We propose a
sequential extension to Gibbs posteriors to address this problem. We prove the
proposed sequential posterior exhibits concentration and a Bernstein-von Mises
theorem, which holds under easy to verify conditions in Euclidean space and on
manifolds. As a byproduct, we obtain the first Bernstein-von Mises theorem for
traditional likelihood-based Bayesian posteriors on manifolds. All methods are
illustrated with an application to principal component analysis. | Steven Winter, Omar Melikechi, David B. Dunson | 2023-10-19T16:36:18Z | http://arxiv.org/abs/2310.12882v1 | # Sequential Gibbs posteriors with applications to principal component analysis
###### Abstract.
Gibbs posteriors are proportional to a prior distribution multiplied by an exponentiated loss function, with a key tuning parameter weighting information in the loss relative to the prior and providing a control of posterior uncertainty. Gibbs posteriors provide a principled framework for likelihood-free Bayesian inference, but in many situations, including a single tuning parameter inevitably leads to poor uncertainty quantification. In particular, regardless of the value of the parameter, credible regions have far from the nominal frequentist coverage even in large samples. We propose a sequential extension to Gibbs posteriors to address this problem. We prove the proposed sequential posterior exhibits concentration and a Bernstein-von Mises theorem, which holds under easy to verify conditions in Euclidean space and on manifolds. As a byproduct, we obtain the first Bernstein-von Mises theorem for traditional likelihood-based Bayesian posteriors on manifolds. All methods are illustrated with an application to principal component analysis.
1
Footnote 1: Department of Statistical Science, Duke University, Durham, North Carolina
2
Footnote 2: Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
3
Footnote 3: Department of Mathematics, Duke University, Durham, North Carolina
## 1. Introduction
The standard Bayesian approach to data analysis involves specifying a generative model for the data via the likelihood, defining priors for all parameters, and computing parameter summaries using the posterior distribution defined by Bayes' rule. This paradigm has a number of advantages, allowing rich hierarchical models for complicated data generating processes, inclusion of expert information, and a full characterization of uncertainty in inference. One practical challenge arises in specifying realistic likelihoods for complex, high-dimensional data such as images or spatiotemporal processes. Realistic likelihoods from highly flexible parametric families may depend on more parameters than can be estimated from the data, introducing both theoretical and practical challenges for Bayesian analysis. Conversely, tractable likelihoods may miss important aspects of the data generating mechanism, leading to bias in posterior estimates, under-representation of parameter uncertainty, and poor predictive performance. The goal of this article is to extend likelihood-free Bayesian inference by leveraging loss-based learning.
Loss-based learning is an alternative approach which typically defines a loss measuring how well a parameter describes the data, estimates parameters by minimizing the loss, and occasionally quantifies estimation uncertainty relying on distributional assumptions, large-sample asymptotics, or nonparametric methods such as the bootstrap. This paradigm
## 1. Introduction
The _Bayesian inference_ of the data is a natural problem in which the data are modeled as a data-generating process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process. The Bayesian inference is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, and the data-generating process is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process process, which is a stochastic process process, which is a stochastic process process, which is a stochastic process process, which is a stochastic process process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process process, which is a stochastic process process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process, which is a stochastic process process, which is a
violations of assumptions for loss-based uncertainty quantification and practical challenges in choosing a realistic likelihood. Conceptually, these factors make principal component analysis an excellent use-case for Gibbs posteriors. However, in practice we find (1.1) cannot produce credible intervals for components with correct or near correct coverage. Similar coverage problems arise broadly when using Gibbs posteriors to study multiple quantities of interest.
We propose a generalization of Gibbs posteriors which overcomes these shortcomings by allowing a different tuning parameter controlling uncertainty for each quantity of interest. Our framework assumes each quantity is connected to the data only through a loss function and allows each loss to depend on previously estimated quantities. This encompasses many estimation problems, including principal component analysis. We extend existing Gibbs posterior theory, establishing concentration and a Bernstein-von Mises theorem under weak assumptions for losses defined on manifolds. Taking the loss to be a negative log-likelihood, we obtain what we believe is the first Bernstein-von Mises for arbitrary traditional likelihood-based Bayesian posteriors supported on manifolds. Our conditions can be verified with calculus in any chart, and do not require any advanced differential geometry machinery. The utility of our approach is highlighted through a fully developed example to principal component analysis, including simulations and an application of principal component regression to violent crime data.
## 2. Sequential Gibbs Posteriors
### Motivation
We begin with a simple example illustrating the failure of Gibbs posteriors and motivating our proposed solution. Consider estimating the mean \(\mu=E(X)\in\mathbb{R}\) by minimizing the risk
\[R(\mu)=\frac{1}{2}\int(x-\mu)^{2}P(dx).\]
Since one does not know the true data generating measure \(P\), it is standard to minimize the empirical risk based on independent and identically distributed samples \(x=(x_{1},\ldots,x_{n})\),
\[\ell(\mu\mid x)=\frac{1}{2n}\sum_{i=1}^{n}(x_{i}-\mu)^{2}.\]
Inference for \(\mu\) can be performed without assumptions about \(P\) by defining a Gibbs posterior using the empirical risk [36]. Adopting a uniform prior, (1.1) becomes
\[\pi_{\eta_{\mu}}(\mu\mid x)\propto\exp\bigg{\{}-\frac{n\eta_{\mu}}{2n}\sum_{i= 1}^{n}(x_{i}-\mu)^{2}\bigg{\}}\propto N\bigg{(}\mu;\frac{1}{n}\sum_{i=1}^{n}x _{i},\frac{1}{n\eta_{\mu}}\bigg{)}.\]
The Gibbs posterior is a normal distribution centered at the sample mean and \(n\eta_{\mu}\) is the posterior precision. Equal tailed credible intervals for \(\mu\) will be centered at the sample mean and can be made larger or smaller by decreasing or increasing \(\eta_{\mu}\). Now consider
estimating the variance \(\sigma^{2}=\operatorname{var}(X)\) conditional on \(\mu\) by minimizing
\[R(\sigma^{2}\mid\mu)=\frac{1}{2}\int\{\sigma^{2}-(x-\mu)^{2}\}^{2}P(dx).\]
This risk is minimized by \(E(X^{2})+\mu^{2}-2E(X)\mu\), which is equal to the variance if \(\mu=E(X)\). The Gibbs posterior defined by the empirical risk is
\[\pi_{\eta_{\sigma^{2}}}(\sigma^{2}\mid x,\mu)\propto\exp\bigg{[}-\frac{n\eta_ {\sigma^{2}}}{2n}\sum_{i=1}^{n}\{\sigma^{2}-(x_{i}-\mu)^{2}\}^{2}\bigg{]} \propto N_{(0,\infty)}\bigg{\{}\sigma^{2};\frac{1}{n}\sum_{i=1}^{n}(x_{i}- \mu)^{2},\frac{1}{n\eta_{\sigma^{2}}}\bigg{\}}.\]
If \(\mu\) is the sample mean, then the mode of this distribution is the sample variance. As before, \(\eta_{\sigma^{2}}\) acts as a precision parameter that can be used to control the width of credible intervals. These two Gibbs posteriors can be used separately for coherent Bayesian inference on the mean and variance, and can be tuned so credible intervals have correct coverage for a wide array of distributions. However, problems arise in performing joint inference on both parameters with a single Gibbs posterior. Inducing a joint posterior over \((\mu,\sigma^{2})\) with (1.1) requires defining a combined loss, which fixes the scale of one parameter relative to the other, resulting in poor coverage for at least one parameter in many situations. For example, summing the two losses leads to the Gibbs posterior
\[\pi_{\eta}(\mu,\sigma^{2}\mid x)\propto\exp\bigg{(}-\frac{n\eta}{2n}\sum_{i=1 }^{n}\left[(x_{i}-\mu)^{2}+\{\sigma^{2}-(x_{i}-\mu)^{2}\}^{2}\right]\bigg{)},\]
which is not a recognizable distribution, but can be sampled via Metropolis-Hastings. From (1.1), \(\eta\) controls dispersion for both \(\mu\) and \(\sigma^{2}\). Table 1 highlights the catastrophically poor coverage of credible intervals for \(\mu\) after tuning \(\eta\) so 95% credible intervals for \(\sigma^{2}\) have 95% coverage. Details on tuning these posteriors are in the appendix.
Motivated by this shortcoming, we propose to avoid combining the risks into a single loss function and instead base inference on the unique joint distribution defined by conditional Gibbs posteriors for each loss:
\[\pi_{\eta_{\mu},\eta_{\sigma^{2}}}(\mu,\sigma^{2}\mid x) =\pi_{\eta_{\mu}}(\mu\mid x)\pi_{\eta_{\sigma^{2}}}(\sigma^{2} \mid x,\mu)\] \[=N\bigg{(}\mu;\frac{1}{n}\sum_{i=1}^{n}x_{i},\frac{1}{n\eta_{ \mu}}\bigg{)}N_{(0,\infty)}\bigg{\{}\sigma^{2};\frac{1}{n}\sum_{i=1}^{n}(x_{i }-\mu)^{2},\frac{1}{n\eta_{\sigma^{2}}}\bigg{\}}.\]
\begin{table}
\begin{tabular}{l c c c c} \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Joint Gibbs & 60 & 54 & 35 & 0 \\ Sequential Gibbs & 95 & 95 & 95 & 95 \\ \end{tabular}
\end{table}
Table 1. _Estimated coverage of 95% credible intervals for \(\mu\). Coverage of credible intervals for \(\mu\) after tuning \(\eta\) so 95% credible intervals for \(\sigma^{2}\) had 95% coverage. S-N denotes the Skew-Normal distribution._
Importantly, the hyperparameters \(\eta_{\mu}\) and \(\eta_{\sigma^{2}}\) can be tuned to ensure good coverage for both parameters across a variety of distributions for \(x\) (Table 1). In the next section we formalize our sequential Gibbs posterior construction.
### The Sequential Posterior
Our goal is to perform inference on \(J\) parameters \(\theta_{j}\in\mathcal{M}_{j}\), \(j\in[J]=1,\ldots,J\), connected to observed \(\mathcal{X}\)-valued data \(x=(x_{1},\ldots,x_{n})\in\mathcal{X}^{n}\) by a sequence of real-valued loss functions,
\[\ell_{j}^{(n)}:\mathcal{M}_{j}\times\mathcal{X}^{n}\times\mathcal{M}_{<j} \rightarrow\mathbb{R},\]
where \(\mathcal{X}\) is an arbitrary set, \(\mathcal{M}_{j}\) is a manifold corresponding to the parameter space for \(\theta_{j}\), and \(\mathcal{M}_{<j}=\otimes_{k=1}^{j-1}\mathcal{M}_{k}\). All manifolds in this work are assumed smooth and orientable. Orientability ensures the existence of a volume form which serves as our default reference measure1 and is equivalent to Lebesgue measure in the Euclidean setting. The \(j\)th loss measures congruence between \(\theta_{j}\) and the data conditional on \(\theta_{<j}=(\theta_{1},\ldots,\theta_{j-1})\). By allowing parameters restricted to manifolds, we encompass both unrestricted real-valued parameters and more complex settings, such as in principal component analysis when orthogonality constraints are included. This setup is broad and includes supervised and unsupervised loss functions. Our general results require neither independent, identically distributed data, nor assumptions of model correctness.
Footnote 1: In particular, all densities are implicitly with respect to the volume form.
**Example 2.1** (Multi-scale inference).: _It is often useful to study data at different levels of granularity, such as decomposing a temperature distribution into a global component, a regional component, and local variation. Practical problems often occur when fitting these models jointly, as it is possible for the fine-scale component to explain the data arbitrarily well. To resolve this, a sequence of losses can be defined estimating first the coarse scale, then the medium scale conditional on the coarse scale, and so on [11, 41, 26, 44]. For example, let \(h_{1}>\cdots>h_{J}>0\) be a set of decreasing bandwidths and \(f_{j}\) be mean zero Gaussian processes with kernels \(K_{j}(x,x^{\prime})=\exp\{-(x-x^{\prime})/h_{j}\}\), \(j\in[J]\). At the coarsest scale, we may model \(y=f_{1}(x)+\varepsilon_{1}\) where \(y\) is a response, \(x\) is a feature, and \(\varepsilon_{1}\sim N(0,\sigma_{1}^{2})\) are errors. The negative log-likelihood defines a loss for \(f_{1}\). Conditional on \(f_{1}\), we model \(y-f_{1}(x)=f_{2}(x)+\varepsilon_{2}\) with errors \(\varepsilon_{2}\sim N(0,\sigma_{2}^{2})\); again the negative log-likelihood defines a loss for \(f_{2}\). Proceeding sequentially, we obtain losses for \(f_{j}\mid f_{1},...f_{j-1}\). Similar decompositions occur broadly within spatial statistics, time series analysis, image analysis, tree-based models, and hierarchical clustering._
**Example 2.2** (Matrix/tensor factorization).: _It is routine to decompose matrices and tensors as a sum of low-rank components, as in principal component analysis. These models are often fit by recursively finding and then subtracting the best rank \(1\) approximation, defining a sequence of losses depending on previously estimated parameters [58, 31, 40, 29, 21]. For example, let \(X\) be a \(k\)-tensor of dimension \(d_{1}\times\cdots\times d_{k}\) and consider fitting a rank \(J\) approximation by iteratively finding and subtracting \(J\) rank \(1\) approximations. The best rank \(1\) approximation minimizes_
\[\ell_{1}^{(n)}(x^{(1)}\mid X)=\|X-\lambda^{(1)}x_{1}^{(1)}\otimes\cdots\otimes x _{k}^{(1)}\|^{2}\]
_where \(\lambda^{(1)}\in\mathbb{R}\), \(x_{i}^{(1)}\in\mathbb{R}^{d_{i}}\), \(i=1,...,k\), and \(x^{(1)}=\{\lambda^{(1)},x_{1}^{(1)},...,x_{k}^{(1)}\}\). Letting \(\hat{X}_{1}=\lambda_{1}x_{1}^{(1)}\otimes\cdots\otimes x_{k}^{(1)}\) be the reconstructed tensor, the next best rank \(1\) approximation minimizes_
\[\ell_{2}^{(n)}(x^{(2)}\mid X,x^{(1)})=\|X-\hat{X}_{1}-\lambda^{(2)}x_{1}^{(2)} \otimes\cdots\otimes x_{k}^{(2)}\|^{2},\]
_and so on. Characterizing uncertainty can be difficult in these settings due to high dimensionality and manifold constraints such as orthogonality._
We now define the sequential posterior.
**Definition 1** (The sequential Gibbs posterior).: _Given losses \(\ell_{j}^{(n)}\), priors \(\Pi_{j}^{(0)}\) on \(\mathcal{M}_{j}\), and precision hyperparameters \(\eta_{j}>0\), \(j\in[J]\), the sequential Gibbs posterior is_
\[\Pi_{\eta}^{(n)}(d\theta_{1},\ldots,d\theta_{J}\mid x) =\prod_{j=1}^{J}\frac{1}{z_{j}^{(n)}(x,\theta_{<j})}\exp\{-\eta_{ j}n\ell_{j}^{(n)}(\theta_{j}\mid x,\theta_{<j})\}\Pi_{j}^{(0)}(d\theta_{j}), \tag{2.1}\] \[z_{j}^{(n)}(x,\theta_{<j}) =\int_{\mathcal{M}_{j}}\exp\{-\eta_{j}n\ell_{j}^{(n)}(\theta_{j} \mid x,\theta_{<j})\}\Pi_{j}^{(0)}(d\theta_{j}).\]
_All results in this work assume \(z_{j}^{(n)}(x,\theta_{<j})<\infty\) for every \(\theta_{<j}\in\mathcal{M}_{<j}\)._
[3] consider all coherent generalizations of Bayes' rule for updating a prior based on a loss and derive (1.1) as the unique optimal decision-theoretic update. Our sequential Gibbs posterior is the unique joint distribution with Gibbs posteriors for each conditional \(\theta_{j}\mid x,\theta_{<j}\), and hence trivially retains the coherence, uniqueness, and optimality properties of [3]. Therefore (2.1) can be used for valid Bayesian loss-based inference. The sequential Gibbs posterior is not equivalent to using (1.1) with combined loss \(\eta_{1}\ell_{1}^{(n)}+\cdots+\eta_{J}\ell_{J}^{(n)}\), as the normalizing constants have considerable influence on the joint distribution.
### Large Sample Asymptotics
We now study frequentist asymptotic properties of the Gibbs posterior (1.1) and sequential Gibbs posterior (2.1). Current theory for (1.1) with Euclidean parameters provides sufficient conditions under which the posterior contracts and has a limiting Gaussian distribution [37, 36]. Theorem 2.3 establishes concentration for (2.1) over general metric spaces. Theorems 2.4 and 2.5 provide sufficient conditions for (1.1) and (2.1) to converge to Gaussian distributions as \(n\to\infty\). In particular, Theorem 2.4 extends existing Gibbs posterior asymptotics to allow parameters supported on manifolds. Taking the loss as a negative log-likelihood, this provides new asymptotic results for traditional Bayesian posteriors on non-Euclidean manifolds. Formalizing these notions requires several assumptions on the losses, their minima, and their limits. Proofs and additional assumptions for Theorems 2.3, 2.4, and 2.5 are in Appendix A of the appendix. The additional assumptions are deferred to the appendix to minimize notation in the main text; all are manifold and/or sequential analogues of standard assumptions for Euclidean, non-sequential Gibbs posteriors [37] and are often straightforward to verify in practice.
**Assumption 1**.: _For all \(j\in[J]\) there exist \(\ell_{j}:\mathcal{M}_{j}\times\mathcal{M}_{<j}\to\mathbb{R}\) such that \(\ell_{j}^{(n)}(\cdot\mid x,\theta_{<j})\to\ell_{j}(\cdot\mid x,\theta_{<j})\) almost surely for every \(\theta_{<j}\)._
**Assumption 2**.: _For all \(j\in[j]\) there exist \(\theta^{(n)}_{j}:\mathcal{M}_{<j}\to\mathcal{M}_{j}\) and \(\theta^{\star}_{j}:\mathcal{M}_{<j}\to\mathcal{M}_{j}\) satisfying \(\ell^{(n)^{\prime}}_{j}\{\theta^{(n)}_{j}(\theta_{<j})\mid x,\theta_{<j}\}=0\) almost surely and \(\ell^{\prime}_{j}\{\theta^{\star}_{j}(\theta_{<j})\mid\theta_{<j}\}=0\), where \(\ell^{(n)^{\prime}}_{j}\) is the derivative with respect to \(\theta_{j}\). Derivatives on manifolds are discussed in Appendix B of the appendix._
**Definition 2**.: _Let \(\phi^{\star}_{j}\in\mathcal{M}_{j}\) be the point obtained by sequentially minimizing the first \(j\) losses \(\ell_{1},\ldots,\ell_{j}\). For example, \(\phi^{\star}_{3}=\theta^{\star}_{3}\{\theta^{\star}_{1},\theta^{\star}_{2}( \theta^{\star}_{1})\}\). Define \(\phi^{\star}=(\phi^{\star}_{1},\ldots,\phi^{\star}_{j})\in\mathcal{M}\)._
Assumption 1 guarantees losses have non-degenerate limits and is satisfied, for example, if the losses are empirical risk functions, as the strong law of large numbers guarantees almost sure convergence to the true risk function. Assumption 2 and Definition 2 introduce optimizers of the conditional losses; these are naturally functions of previously estimated parameters.
**Theorem 2.3**.: _Fix metrics \(d_{j}\) on \(\mathcal{M}_{j}\) and let \(d\) be the metric on \(\mathcal{M}\) given by \(d^{2}=d_{1}^{2}+\cdots+d_{j}^{2}\). Set \(N_{j,\epsilon}=\{\theta_{j}:d_{j}(\theta_{j},\phi^{\star}_{j})<\epsilon\}\) and \(N_{\epsilon}=\{\theta:d(\theta,\phi^{\star})<\epsilon\}\). If \(\Pi^{(0)}_{j}(N_{j,\epsilon})>0\) for all \(\epsilon>0\) and Assumptions 1, A.1, and A.2 hold, then \(\Pi^{(n)}_{n}(N_{\epsilon})\to 1\) almost surely for all \(\eta\) and \(\epsilon>0\)._
Theorem 2.3 ensures samples from the sequential Gibbs posterior concentrate around the point \(\phi^{\star}\) obtained by sequentially minimizing each loss. The proof relies on additional regularity conditions, namely continuity (Assumption A.1) and well-separated minimizers (Assumption A.2). Theorem 2.3 generalizes to other metrics, including \(d^{p}=d_{1}^{p}+\cdots+d_{j}^{p}\) for \(p\geqslant 1\) and \(d=\max\{d_{1},...,d_{j}\}\).
We present two Bernstein-von Mises theorems: one for Gibbs posteriors on manifolds (Theorem 2.4), and its sequential extension (Theorem 2.5). In the following \(f_{\#}u\) is the pushforward of a measure \(u\) by a measurable function \(f\) defined by \(f_{\#}u(A)=u\{f^{-1}(A)\}\) for measurable sets \(A\). The total variation between measures \(P\) and \(Q\) is denoted \(d_{TV}(P,Q)\). A chart \((U,\varphi)\) on a \(p\)-dimensional manifold \(\mathcal{M}\) is an open \(U\subseteq\mathcal{M}\) and a diffeomorphism \(\varphi:U\to\varphi(U)\subseteq\mathbb{R}^{p}\).
**Theorem 2.4**.: _Let \(\mathcal{M}\) be a manifold and \(\ell^{(n)}:\mathcal{M}\times\mathcal{X}^{n}\to\mathbb{R}\) a sequence of functions converging almost surely to \(\ell:\mathcal{M}\to\mathbb{R}\). Let \(\Pi^{(n)}_{n}(d\theta\mid x)\) be the Gibbs posterior (1.1) associated to \(\ell^{(n)}\). Fix \(\phi^{\star}\in\mathcal{M}\) and assume the prior \(\Pi^{(0)}\) has a density \(\pi^{(0)}\) that is continuous and strictly positive at \(\phi^{\star}\). If there is a sequence \(\theta^{(n)}\to\phi^{\star}\) such that \((\ell^{(n)})^{\prime}(\theta^{(n)}\mid x)=0\) and Assumptions 1 and A.3-A.5 hold, then \(\ell^{\prime}(\phi^{\star})=0\) and for every chart \((U,\varphi)\) containing \(\phi^{\star}\),_
\[d_{TV}[(\tau^{(n)}\circ\varphi)_{\#}\Pi^{(n)}_{n},N(0,\eta^{-1}H^{-1})\}\to 0\]
_almost surely, where \(\tau^{(n)}(\tilde{\theta})=\sqrt{n}(\tilde{\theta}-\tilde{\theta}^{(n)})\), \(\tilde{\theta}^{(n)}=\varphi(\theta^{(n)})\), and \(H=\ell^{\prime\prime}(\phi^{\star})\)._
In the above \(\theta^{(n)}\) minimizes the finite sample loss and is mapped to Euclidean space via \(\varphi\) to obtain \(\tilde{\theta}^{(n)}=\varphi(\theta^{(n)})\). Samples from \(\theta\sim\pi^{(n)}_{n}\) are mapped to Euclidean space to produce \(\tilde{\theta}=\varphi(\theta)\), centered by subtracting \(\tilde{\theta}^{(n)}\), and then scaled by \(\sqrt{n}\). Asymptotically this results in samples from a Gaussian distribution \(\sqrt{n}(\tilde{\theta}-\tilde{\theta}^{(n)})\approx N(0,\eta^{-1}H^{-1})\). The total variation distance between these distributions vanishes almost surely [37], which is stronger than
the usual guarantees in probability [36]. The covariance of the limiting Gaussian is \(\eta^{-1}H^{-1}\) where \(H\) is the Hessian of \(\ell\) evaluated at \(\phi^{\star}\). Importantly, Lemma B.1 says \(H\) does not depend on the chart \((U,\varphi)\), hence Theorem 2.4 specifies a well-defined limiting distribution on \(\mathcal{M}\). Assumptions A.3-A.5 are standard and amount to control over the third derivatives, continuity, and well-separated minimizers, respectively. Each assumption can be verified using basic calculus in any chart; no additional differential geometry is required.
We emphasize that Theorem 2.4 applies to any density that can be written as a Gibbs posterior, including likelihood-based posteriors. Posteriors over manifolds arise in a diverse array of applications, including covariance modelling (positive semidefinite matrices), linear dimensionality reduction (Grassmann manifold), directional statistics (spheres and Stiefel manifolds), and shape analysis (Kendall's shape space) [51, 19, 10, 17, 33, 46, 56]. Despite this interest, to our knowledge, there is no Bernstein-von Mises theorem on manifolds, even in the simple case of parameters on spheres. Existing asymptotic literature focuses on specific estimates such as the Frechet mean or M-estimators, and provides much weaker guarantees than total variation convergence of the entire posterior to a Gaussian [27, 2, 9, 43, 7]. We believe Theorem 2.4 is the first result providing intuition and frequentist justification for the limiting behaviour of this broad class of Bayesian models.
We now present the sequential analogue of Theorem 2.4.
**Theorem 2.5**.: _Let \(\Pi_{\eta}^{(n)}\) be the the sequential Gibbs posterior (2.1). For each \(j\in[J]\) let \((U_{j},\varphi_{j})\) be a chart on \(\mathcal{M}_{j}\) containing \(\phi_{j}^{\star}\) and assume \(\Pi_{j}^{(\phi)}\) has a density \(\pi_{j}^{(\phi)}\) that is continuous and strictly positive at \(\phi_{j}^{\star}\). If Assumptions 1, 2, and A.6-A.10 hold then,_
\[(\tau^{(n)}\circ\varphi)_{\sharp}\Pi_{\eta}^{(n)}\to\prod_{j=1}^{J}N(0,\eta_{ j}^{-1}H_{j}^{-1})\]
_setwise, where \(H_{j}=\ell_{j}^{\prime\prime}(\phi_{j}^{\star}\mid\phi_{<j}^{\star})\). The map \(\varphi:\otimes_{j=1}^{J}U_{j}\to\otimes_{j=1}^{J}\mathbb{R}^{Pj}\) applies \(\varphi_{j}\) coordinate-wise and, setting \(\tilde{\theta}_{j}^{(n)}(\theta_{<j})=\varphi_{j}\{\theta_{j}^{(n)}(\theta_{ <j})\}\), \(\tau^{(n)}:\otimes_{j=1}^{J}\mathbb{R}^{Pj}\to\otimes_{j=1}^{J}\mathbb{R}^{Pj}\) is defined by_
\[\tau^{(n)}(\tilde{\theta})=\sqrt{n}\left\{\tilde{\theta}_{1}-\tilde{\theta}_ {1}^{(n)},\tilde{\theta}_{2}-\tilde{\theta}_{2}^{(n)}(\theta_{1}),\ldots, \tilde{\theta}_{J}-\tilde{\theta}_{J}^{(n)}(\theta_{<J})\right\}.\]
Theorem 2.5 extends Theorem 2.4 to the sequential setting. When \(J=2\), one samples \((\theta_{1},\theta_{2})\) by drawing \(\theta_{1}\sim\pi_{n_{1}}^{(n)}\) and \(\theta_{2}\mid\theta_{1}\sim\pi_{n_{2}}^{(n)}(\cdot\mid\theta_{1})\). These are mapped to Euclidean space to obtain \(\tilde{\theta}_{1}=\varphi_{1}(\theta_{1})\) and \(\tilde{\theta}_{2}=\varphi_{2}(\theta_{2})\). The finite sample minimizers \(\theta_{1}^{(n)}\) and \(\theta_{2}^{(n)}(\theta_{1})\) are then computed and mapped to Euclidean space to obtain \(\tilde{\theta}_{1}^{(n)}=\varphi_{1}(\theta_{1}^{(n)})\) and \(\tilde{\theta}_{2}^{(n)}(\theta_{1})=\varphi_{2}\{\theta_{2}^{(n)}(\theta_{1})\}\). Centering and scaling as before gives \(\sqrt{n}(\tilde{\theta}_{1}-\tilde{\theta}_{1}^{(n)})\approx N(0,\eta_{1}^{-1}H _{1}^{-1})\) and \(\sqrt{n}(\tilde{\theta}_{2}-\tilde{\theta}_{2}^{(n)}(\theta_{1})\}\approx N(0, \eta_{2}^{-1}H_{2}^{-1})\). Asymptotically \(\theta_{1}\) and \(\theta_{2}\) are independent; intuitively this happens because \(\theta_{1}\) concentrates at \(\theta_{1}^{\star}\), so for large \(n\) we have \(\theta_{2}\mid\theta_{1}\approx\theta_{2}\mid\theta_{1}^{\star}\). As before, the limiting covariances are inverse Hessians of the losses evaluated at critical points. Assumptions A.6-A.10 are natural extensions of those in Theorem 2.4; see Appendix A.
Theorems 2.4 and 2.5 highlight the role of \(\eta\) as a precision parameter. The sequential Gibbs posterior has individual tuning parameters for each \(\theta_{j}\) and hence has greater flexibility. In the following subsection we develop a practical algorithm for leveraging this flexibility to tune the sequential posterior so credible intervals for \(\theta_{j}\) are approximately valid confidence intervals.
### Calibration
We propose a bootstrap-based calibration algorithm for tuning the sequential Gibbs posterior so credible intervals have approximately valid frequentist coverage, without reliance on asymptotic results or strong parametric assumptions. Our algorithm is inspired by the general posterior calibration algorithm in [53], which uses Monte Carlo within the bootstrap to estimate coverage of credible regions and iteratively updates \(\eta\) to drive coverage to a desired value. Sampling the posterior over each bootstrap replicate at each iteration of the algorithm is computationally intensive, rendering this approach impractical for principal component analysis in moderate-to-high dimensions. In the sequential setting, the computational burden is compounded by the need to calibrate \(J\) different hyperparameters. Motivated by this, we propose a new general calibration algorithm which matches the volume of credible regions to the volume of pre-calculated bootstrap confidence regions. Pre-calculating the volume of a confidence region avoids the need to sample within the bootstrap and dramatically reduces the computational burden of calibration. Calculating volumes on manifolds can be difficult; we avoid this by restricting credible/confidence regions to be balls, which reduces matching volumes to matching radii.
We now outline the procedure for a Gibbs posterior with a single loss, dropping redundant subscripts for readability. Fix a distance \(d\) on \(\mathcal{M}=\mathcal{M}_{1}\) and let \(N_{r}(\xi)=\{\theta\in\mathcal{M}\mid d(\theta,\xi)<r\}\) be the ball of radius \(r\) around \(\xi\in\mathcal{M}\). Let \(\hat{\phi}(x)\) be the minimizer of \(\ell^{(n)}(\cdot\mid x)\); we use this as a finite sample estimator of \(\phi^{\star}\). The frequentist coverage of the ball \(N_{r}(\phi(x))\) is
\[c(r)=E_{x-P_{x}}(1[\phi^{\star}\in N_{r}\{\phi(x)\}])\]
where \(P_{x}\) is the sampling distribution of \(n\) data points. Fix \(\alpha\in(0,1)\). The radius \(r^{\star}\) of a \(100(1-\alpha)\%\) confidence ball satisfies
\[r^{\star}=\inf\{r>0\mid c(r)\geqslant 1-\alpha\}.\]
We propose to choose \(\eta\) so the Gibbs posterior assigns \(100(1-\alpha)\%\) of its mass to \(N_{r^{\star}}\{\phi(x)\}\), which would imply that the credible interval \(N_{r^{\star}}\{\phi(x)\}\) has valid frequentist coverage. The probability mass the Gibbs posterior assigns to the confidence ball is
\[m(\eta)=E_{\theta\sim\pi_{\eta}}(1[\theta\in N_{r^{\star}}\{\phi(x)\}]),\]
so calibrating the Gibbs posterior is equivalent to solving \(m(\eta)=1-\alpha\). A solution \(\eta_{\alpha}\) exists if \(\Pi^{(0)}[N_{r^{\star}}\{\phi(x)\}]<1-\alpha<1\): this follows from the limits
\[\lim_{\eta\to 0^{+}}m(\eta)=\Pi^{(0)}[N_{r^{\star}}\{\phi(x)\}],\quad\lim_{ \eta\to\infty}m(\eta)=1\]
and the intermediate value theorem. One can calculate \(\eta_{\alpha}\) with any suitable root finding method. In our experiments we estimate \(m(\eta)\) via Monte Carlo and then use stochastic approximation [49] to find \(\eta_{\alpha}\). Additional details are in the appendix.
In practice we do not know \(\phi^{*}\) or the sampling distribution \(P_{x}\), so \(r^{*}\) is unavailable. We overcome this by estimating the coverage function via the bootstrap,
\[c(r)\approx\frac{1}{B}\sum_{b=1}^{B}1[\phi(x)\in N_{r}\{\phi(x_{b})\}],\]
and then solving for \(r^{*}\) using this approximation. Here \(x_{b}\) is a bootstrap replicate of \(x\) and \(B>0\) is an integer. Euclidean bootstrap confidence regions are known to have asymptotically correct coverage up to error terms of \(O_{p}(1/n)\) under weak conditions [16], but these results are difficult to generalize to the case of balls on manifolds. In simulations we find this approximation produces well-calibrated Gibbs posteriors.
We calibrate the sequential posterior by applying the above procedure sequentially. Let \(\hat{\phi}_{j}(x)\in\mathcal{M}_{j}\) be the point obtained by sequentially minimizing \(\ell_{1}^{(n)}(\cdot\mid x),...,\ell_{j}^{(n)}(\cdot\mid x,\theta_{<j})\). The bootstrap is used to estimate the radii \(\hat{\tau}_{j}\) of \(100(1-\alpha)\%\) credible balls around \(\hat{\phi}_{j}(x)\), \(j\in[j]\). We tune \(\eta_{1}\) so \(\theta_{1}\) lies inside \(N_{\hat{\tau}_{1}}\{\hat{\phi}_{1}(x)\}\) with probability \(1-\alpha\); this parameter is then fixed and \(\eta_{2}\) is tuned so \(\theta_{2}\) lies inside \(N_{\hat{\tau}_{2}}\{\hat{\phi}_{2}(x)\}\) with probability \(1-\alpha\), and so on. In the next section we synthesize the above work on sequential posteriors, including asymptotic theory and finite sample tuning, to obtain a generalized posterior for principal component analysis.
## 3. Application to Principal Component Analysis
### The Sequential Bingham Distribution
Our sequential and manifold extensions to Gibbs posteriors are of broad interest, but were concretely motivated by principal component analysis. Recall principal component analysis projects high dimensional features \(x_{i}\in\mathbb{R}^{p}\) to low dimensional scores \(z_{i}\in\mathbb{R}^{l}\), \(J<p\), contained in a plane \(\mathcal{P}\). This defines \(J\) new features, called components, which are linear combinations of the original \(p\) features. Failure to characterize uncertainty in components and scores under represents uncertainty in downstream analysis.
Let \(X\in\mathbb{R}^{n\times p}\) be a matrix of \(n\) samples, centered so \(x_{1}+\cdots+x_{n}=0\). The optimal plane \(\hat{\mathcal{P}}\) minimizes the squared reconstruction error \(\|X-\mathcal{P}(X)\|^{2}\), where \(\mathcal{P}(X)\) is the projection of \(X\) onto \(\mathcal{P}\). It is well known that the leading unit eigenvectors \(\{v_{j}^{(n)}\}_{j=1}^{J}\) of the empirical covariance \(\hat{\Sigma}=X^{T}X/n\) form an orthonormal basis for \(\hat{\mathcal{P}}\). These can be found by sequentially solving
\[v_{j}^{(n)}(v_{<j})=\operatorname*{arg\,max}_{v_{j}\in\mathbb{R}^{p-1}\cap \operatorname{Null}\{v_{1},...,v_{j-1}\}}v_{j}^{T}\hat{\Sigma}v_{j},\quad j\in [J], \tag{3.1}\]
where \(\operatorname{Null}\{v_{1},...,v_{j-1}\}\) is the null space of the span of \(\{v_{1},\ldots,v_{j-1}\}\). The null space condition ensures eigenvectors are orthogonal; hence the matrix \(\hat{V}\in\mathbb{R}^{p\times J}\) containing solutions of (3.1) as columns is an element of the Stiefel manifold \(\mathcal{V}(J,p)=\{V\in\mathbb{R}^{p\times J}\mid V^{T}V=I\}\). Computing charts on \(\mathcal{V}(J,p)\) and sampling densities over \(\mathcal{V}(J,p)\) is difficult. We instead
use an equivalent formulation defined over spheres,
\[w_{j}^{(n)}(v_{<j})=\operatorname*{arg\,max}_{w_{j}\in\mathbb{S}^{p-j}}w_{j}^{T}N _{<j}^{T}\hat{\Sigma}N_{<j}w_{j},\quad j\in[J], \tag{3.2}\]
where \(N_{<j}\in\mathbb{R}^{p\times p-j+1}\) is an orthonormal basis for \(\operatorname{Null}\{v_{1},...,v_{j-1}\}\). The optimizer \(w_{j}^{(n)}(v_{<j})\) is the leading eigenvector of \(N_{<j}^{T}\hat{\Sigma}N_{<j}\) and is related to (3.1) by \(v_{j}^{(n)}(v_{<j})=N_{<j}w_{j}^{(n)}(v_{<j})\).
Assuming uniform priors, our sequential posterior is
\[\kappa_{n}^{(n)}(w\mid x) =\prod_{j=1}^{J}\frac{1}{z_{j}^{(n)}(w_{<j}\mid x)}\exp(\eta_{j} nw_{j}^{T}N_{<j}^{T}\hat{\Sigma}N_{<j}w_{j}), \tag{3.3}\] \[\text{with}\quad z_{j}^{(n)}(w_{<j}\mid x) ={}_{1}F_{1}\{1/2,(p-j)/2,\eta_{j}nN_{<j}^{T}\hat{\Sigma}N_{<j}\},\]
where \({}_{1}F_{1}\) is the confluent hypergeometric function of matrix argument. This posterior is a product of Bingham distributions with concentration matrices \(n\eta_{j}N_{<j}^{T}\hat{\Sigma}N_{<j}\). In (3.3), \(N_{<j}\) is computed using the samples \(v_{1},...,v_{j-1}\) which are found sequentially via the relations \(v_{j}=N_{<j}w_{j}\). We write \(\iota:\otimes_{j=1}^{J}\mathbb{S}^{p-j}\to\mathcal{V}(k,p)\) for the corresponding embedding \([w_{1},...,w_{J}]\mapsto[v_{1},...,v_{J}]\). The sequential Bingham distribution (3.3) can be used to sample posterior eigenvectors, providing a full characterization of uncertainty in components, scores, and any downstream inference involving these quantities. This can be done in isolation or jointly within a larger Bayesian model, which we illustrate in Section 4.2.
Theorems 2.3 and 2.5 apply to (3.3). To simplify presentation, we assume the data are centered with full-rank diagonal covariance; in this case the true components are \(v_{j}^{\star}=e_{j}\), \(j\in[J]\), where \(e_{j}\) is the jth standard basis vector in \(\mathbb{R}^{p}\). Samples from the sequential Gibbs posterior concentrate around the true eigenvectors, and centered/scaled samples converge to a Gaussian distribution with covariance proportional to the inverse eigengaps. A small technical detail is that (3.3) is antipodally symmetric, assigning equal mass to \(\pm B\) for any measurable \(B\subseteq\otimes_{j=1}^{J}\mathbb{S}^{p-j}\). We resolve this ambiguity by implicitly restricting the priors so \(w\sim\pi_{j}\) implies \(w_{1}>0\) almost surely.
**Proposition 3.1**.: _Assume \(E(x)=0\) and \(var(x)=diag(\lambda_{1},...,\lambda_{p})\) with \(\lambda_{1}>\cdots>\lambda_{p}>0\). Fix charts \((U_{j},\varphi_{j})\) on \(\mathbb{S}^{p-j}\) with \((1,0,...,0)\in U_{j}\), \(j\in[J]\). Then \(\iota(W)\to I_{p\times k}\) in probability where \(W\sim\kappa_{n}^{(n)}\) and_
\[(\tau^{(n)}\circ\varphi)_{\sharp}\kappa_{n}^{(n)}\to\prod_{j=1}^{J}N\Big{\{} 0,(2\eta_{j})^{-1}H_{j}^{-1}\Big{\}}\]
_setwise, where \(H_{j}^{-1}=diag\{(\lambda_{j}-\lambda_{j+1})^{-1},\ldots,(\lambda_{j}-\lambda _{p})^{-1}\}\) and \(\tau^{(n)}\), \(\varphi\) are as in Theorem 2.5._
Proposition 3.1 views \(\kappa_{n}^{(n)}\) as a density on \(\otimes_{j=1}^{J}\mathbb{S}^{p-j}\). The charts \(U_{j}\subseteq\mathbb{S}^{p-j}\) can be embedded in \(\mathbb{S}^{p-1}\) via \(w\to N_{<j}w\). For example, if \(v_{k}=e_{k}\) for \(k=1,...,j-1\), then \(N_{<j}=[e_{j},...,e_{J}]\) and \(N_{<j}(1,0,...,0)^{T}=e_{j}\); hence the assumption \((1,0,...,0)\in U_{j}\) ensures that the jth eigenvector of \(var(x)\) is in \(N_{<j}U_{j}\). Any charts \((U_{j},\varphi_{j})\) with \((1,0,...,0)\in U_{j}\) can be used for
Proposition 3.1, such as \(U_{j}=\{w\in\mathbb{S}^{p-j}\mid w_{1}>0\}\) and \(\varphi_{j}(w)=w_{-1}\) where \(w_{-1}\in\mathbb{R}^{p-j}\) is \(w\) with the first entry removed. Other viable charts include the Riemannian logarithm, stereographic projection, or projective coordinates. See Appendix A in the appendix for the proof.
### Posterior Computation
Any algorithm which produces exact samples from a Bingham distribution, such as rejection sampling [18, 28], can be combined with Algorithm 1 to produce exact samples from (3.3). Priors of the form \(\pi_{j}^{(0)}(w_{j})\propto\exp(Aw_{j}+b)\) can be accommodated by replacing the Bingham sampling step with a Fisher-Bingham sampling step [18]. The main bottleneck is computing \(N_{<j}\). In high dimensions, \(N_{<j}\) can be computed approximately [38], resulting in nearly orthogonal samples.
### Simulations
Uncertainty in eigenvectors depends on the marginal distributions of \(X\) and \(p/n\). We sample the rows of \(X\) independently from a mean zero multivariate Gaussian or mean zero multivariate \(t_{5}\) for each of the relative dimensions \(p/n\in\{1/4,1/2,1\}\). All simulations fix \(n=100\) and use a diagonal covariance where the first \(k=5\) eigenvectors explain \(90\%\) of the variance in the data. The first five eigenvalues are \((\lambda_{1},\lambda_{2},...,\lambda_{5})=(10,9,...,6)\) and the remaining \(p-5\) eigenvalues are linearly spaced and scaled to explain the remaining \(10\%\) of the variance. We evaluate the coverage of multiple methods for estimating the first five eigenvectors. All credible/confidence balls are computed using the geodesic distance of samples to the mean or mode. Sampled eigenvectors are identifiable up to right multiplication by an orthogonal matrix; we resolve this ambiguity by Procrustes aligning all samples to mean or mode prior to computing intervals.
The original Gibbs posterior and the Bayesian spiked covariance model [22] are the primary alternatives to the proposed method. The original Gibbs posterior uses \(\|X-X\mathcal{V}V^{\mathrm{T}}\|^{2}\) as a loss function. We compute credible intervals around the mode and tune \(\eta\) so the average radius of \(95\%\) credible balls around each component matches the average bootstrapped radius. The Bayesian spiked covariance model assumes the likelihood \(x_{i}\mid V,\Lambda,\sigma^{2}\sim N[0,\sigma^{2}(V\Lambda V^{\mathrm{T}}+1)]\) with \(V\in\mathcal{V}(k,p)\) the eigenvectors, \(\Lambda\) a diagonal matrix of positive strictly decreasing eigenvalues, and \(\sigma^{2}>0\) residual noise variance. Priors are chosen as \(V\sim 1,\lambda_{j}\sim N(0,5^{2})\), \(j=1,...,p\), \(\sigma^{2}\sim N(0,5^{2})\). Samples are obtained using polar augmentation [23] and Hamiltonian Monte Carlo in Stan [5]. Credible balls are computed around the mode (estimated with the sample that maximizes the log posterior density) and Frechet mean [6]. Coverage was estimated using \(500\) data replicates in all cases except the Joint Gibbs and
Bayesian spiked covariance models when \(p/n=1\), which use only 100 replicates due to the extreme computational cost of sampling.
Table 2 shows the results. Credible regions around the mode for the Bayesian spiked covariance model have poor coverage in all cases. All other methods perform well when \(X\) has Gaussian marginals, with the largest fault being over-coverage of components 4 and 5. When \(X\) has \(t_{5}\) marginals, the joint Gibbs model significantly under-covers the first two components, and the Bayesian spiked covariance model fails entirely. Both the sequential Gibbs posterior and the bootstrap provide excellent coverage independent of the marginals of \(X\) and relative dimension.
## 4. Applications to Crime Data
### Visualizing Uncertainty
We analyze the publicly available communities and crime dataset [47], which contains socio-economic, law enforcement, and crime data for communities from the 1990 United States Census, the 1990 United States Law Enforcement Management and Administrative Statistics survey, and the 1995 Federal Bureau of Investigation Uniform Crime Report. We focus on \(p=99\) numeric features including median family income, divorce rates, unemployment rates, vacancy rates, number of police officers per capita, and violent crime rate, all normalized to have unit variance. The goal is to identify groups of features predictive of higher violent crime. We applied principal component analysis to the centered/scaled data. Roughly, the first five components capture (1) income and family stability, (2) recent immigration and language barriers, (3) housing availability and occupancy, (4) youth prevalence and neighbourhood age, and (5) homelessness and
\begin{table}
\begin{tabular}{l c c c} & & \(N(0,1)\) & \\ \(p/n\) & 1/4 & 1/2 & 1 \\ Sequential Gibbs & (92, 93, 96, 99, 99) & (91, 94, 97, 98, 99) & (89, 95, 97, 99, 99) \\ Joint Gibbs & (90, 90, 97, 96, 100) & (88, 93, 96, 96, 100) & (87, 94, 92, 95, 100) \\ Bootstrap & (93, 93, 97, 100, 98) & (93, 96, 97, 99, 99) & (91, 96, 98, 99, 100) \\ BPCA (mode) & (13, 8, 8, 9, 5) & (14, 9, 12, 9, 4) & (16, 13, 14, 9, 3) \\ BPCA (mean) & (92, 95, 96, 96, 97) & (90, 95, 96, 98, 99) & (89, 94, 98, 99, 100) \\ & & \(t_{5}(0,1)\) & \\ \(p/n\) & 1/4 & 1/2 & 1 \\ Sequential Gibbs & (94, 89, 91, 94, 97) & (97, 92, 91, 90, 95) & (96, 90, 90, 90, 96) \\ Joint Gibbs & (71, 84, 88, 94, 99) & (64, 81, 91, 96, 100) & (63, 82, 79, 97, 100) \\ Bootstrap & (95, 89, 92, 94, 96) & (97, 93, 92, 90, 96) & (97, 89, 91, 91, 94) \\ BPCA (mode) & (56, 42, 32, 27, 26) & (75, 56, 48, 38, 33) & (86, 67, 62, 45, 40) \\ BPCA (mean) & (36, 49, 58, 61, 64) & (19, 31, 39, 46, 57) & (9, 20, 20, 38, 44) \\ \end{tabular}
\end{table}
Table 2. _Coverage of 95% intervals by component_. Coverage of confidence/credible balls for the first five eigenvectors under different marginal distributions and relative dimensions. BPCA denotes the Bayesian spiked covariance model.
poverty. These components explain 65% of the variance in the data; the first 21 components explain 90% of the variance. Additional information is in the appendix.
We subsample \(n=100\) communities to illustrate key aspects of uncertainty characterization from (3.3). Figure 1 shows posterior scores colored by violent crime rate after calibrating with the bootstrap matching algorithm. The variance of the jth score vector \([z_{1j},...,z_{nj}]\) increases with j. This happens for two reasons. First, uncertainty from previously estimated components accumulates, resulting in higher uncertainty for later components. Second, the eigenvalues of later components are poorly separated compared to the eigenvalues of the first components; this makes it harder to disambiguate directions and results in larger variance, as expected from Proposition 3.1. The appendix contains further details on calibration.
### Principal Component Regression
Principal component regression fits a linear model to scores, with \(Y=X\nabla\beta+\varepsilon\) where \(Y\in\mathbb{R}^{n}\) is a centered response vector for \(n\) individuals, \(X\in\mathbb{R}^{n}\), and \(Y\in\mathbb{R}^{n}\) is a centered response vector for \(n\) individuals, \(X\in\mathbb{R}^{n}\), and \(Y\in\mathbb{R}^{n}\) is a centered response vector for
\(\mathbb{R}^{n\times p}\) is a centered/scaled matrix of \(p\)-dimensional features, \(V\in\mathcal{V}(J,p)\) are components, \(\beta\in\mathbb{R}^{J}\) are coefficients, and \(\varepsilon\in\mathbb{R}^{n}\) are errors. Adopting the distribution \(\varepsilon\sim N(0,\sigma^{2}I)\) induces a Gaussian likelihood \(\pi(Y\mid V,\beta,\sigma^{2})\). We apply our sequential framework to principal component regression, using (3.2) for the first \(J\) losses and the negative log-likelihood \(-\log\{\pi(Y\mid V,\beta,\sigma^{2})\}\) for the \(J+1\)st loss. The scale of the likelihood is well specified relative to priors, so we fix \(\eta_{J+1}=1\). When the loss is a negative log-likelihood, (1.1) is exactly Bayes' rule. The sequential posterior is
\[\pi_{n}(V,\beta,\sigma^{2}\mid X,Y)=\kappa_{n}^{(n)}(V\mid X)\pi_{\text{Bayes}} (\beta,\sigma^{2}\mid X,Y,V) \tag{4.1}\]
where \(\pi_{\text{Bayes}}\) is the likelihood-based posterior for \(\beta,\sigma^{2}\mid X,Y,V\) conditional on \(V\) and we have parameterized \(\kappa_{n}^{(n)}\) from (3.3) in terms of \(v_{j}=N_{<j}w_{j}\). Choosing a normal inverse-gamma prior \(\beta\mid\sigma^{2}\sim N(0,\sigma^{2}I)\), \(1/\sigma^{2}\sim Ga(1,1)\) results in a conjugate posterior for \(\pi_{\text{Bayes}}\) and allows exact sampling of (4.1).
We apply (4.1) to the communities and crime dataset. Figure 2 shows posterior credible intervals for coefficients. The first eight components are significant, and the results are largely intuitive: for example, violent crime decreases as community income and family stability increases. As before, uncertainty grows with the score index, resulting in wider credible intervals for later coefficients. Additional analysis may be found in the appendix.
## 5. Discussion
Sequential Gibbs posteriors introduce many potential applications and research directions. One area of interest is combining loss-based Gibbs posteriors with traditional likelihood-based posteriors, as illustrated in Section 4.2. This arises when some parameters are characterized by a likelihood and others by a non-likelihood-based loss. For example, we may use a machine learning algorithm, such as a neural network, for dimensionality reduction for complex high-dimensional features, but then use a likelihood for a low-dimensional response. In addition to improving robustness, this may have major computational advantages over attempting likelihood-based neural network inferences.
Sequential Gibbs posteriors apply to a wide range of loss functions and problems not discussed in this work. It is interesting to extend our principal component analysis results to variants such as sparse, functional, and disjoint principal component analysis. Beyond principal component analysis, sequential Gibbs posteriors can be applied to specific problems in the general settings detailed in Examples 2.1 and 2.2 as well as to neural networks as just described. Nonlinear dimension reduction methods such as diffusion maps may also benefit from sequential Gibbs posteriors since they, like principal component analysis, rely on eigenvectors of matrices built from data and are often used to process data prior to further analyses such as regression. In particular, sequential Gibbs posteriors can provide uncertainty quantification in these settings.
Another line of future work is calibration of the hyperparameters \(\eta=(\eta_{1},\ldots,\eta_{J})\). In particular, it is desirable to have theoretical results guaranteeing appropriate coverage. For non-Euclidean parameters this may require development of bootstrap theory for confidence
balls on general manifolds. It is also unknown how calibration of \(\eta\) relates to selection of penalty parameters, for example in the context of sparse principal component analysis and when a regularization penalty is applied to \(f_{\theta}\) in the neural network loss mentioned above.
## Acknowledgements
This work was partially funded by grants from the United States Office of Naval Research (N000142112510) and National Institutes of Health (R01ES028804, R01ES035625).
|
2305.07787 | On Ultrafast X-ray Methods for Magnetism | With the introduction of x-ray free electron laser sources around the world,
new scientific approaches for visualizing matter at fundamental length and
time-scales have become possible. As it relates to magnetism and
"magnetic-type" systems, advanced methods are being developed for studying
ultrafast magnetic responses on the time-scales at which they occur. We
describe three capabilities which have the potential to seed new directions in
this area and present original results from each: pump-probe x-ray scattering
with low energy excitation, x-ray photon fluctuation spectroscopy, and
ultrafast diffuse x-ray scattering. By combining these experimental techniques
with advanced modeling together with machine learning, we describe how the
combination of these domains allows for a new understanding in the field of
magnetism. Finally, we give an outlook for future areas of investigation and
the newly developed instruments which will take us there. | Rajan Plumley, Sathya Chitturi, Cheng Peng, Tadesse Assefa, Nicholas Burdet, Lingjia Shen, Alex Reid, Georgi Dakovski, Matthew Seaberg, Frank O'Dowd, Sergio Montoya, Hongwei Chen, Alana Okullo, Sougata Mardanya, Stephen Kevan, Peter Fischer, Eric Fullerton, Sunil Sinha, William Colocho, Alberto Lutman, Franz-Joseph Decker, Sujoy Roy, Jun Fujioka, Yoshinori Tokura, Michael P. Minitti, Jeremy Johnson, Matthias Hoffmann, Michaela Amoo, Adrian Feiguin, Chuck Yoon, Jana Thayer, Yousseff Nashed, Chunjing Jia, Arun Bansil, Sugata Chowdhury, Aaron Lindenberg, Mike Dunne, Elizabeth Blackburn, Joshua Turner | 2023-05-12T22:32:52Z | http://arxiv.org/abs/2305.07787v1 | # On Ultrafast X-ray Methods for Magnetism
###### Abstract
With the introduction of x-ray free electron laser sources around the world, new scientific approaches for visualizing matter at fundamental length and time-scales have become possible. As it relates to magnetism and'magnetic-type' systems, advanced methods are being developed for studying ultrafast magnetic responses on the time-scales at which they occur. We describe three capabilities which have the potential to seed new directions in this area and present original results from each: pump-probe x-ray scattering with low energy excitation, x-ray photon fluctuation spectroscopy, and ultrafast diffuse x-ray scattering. By combining these experimental techniques with advanced modeling together with machine learning, we describe how the combination of these domains allows for a new understanding in the field of magnetism. Finally, we give an outlook for future areas of investigation and the newly developed instruments which will take us there.
+
Footnote †: preprint: On Ultrafast X-ray Methods for Magnetism
## I Introduction
* **Ultrafast experimental methods*
* / X-ray pump-probe scattering
* **X-ray Photon Fluctuation Spectroscopy*
* **C. Ultrafast Diffuse X-ray Scattering
* **Magnetic cross-section and resonant XPFS*
* **Density Functional Theory*
* **C. Numerical methods*
* **Machine Learning*
* **Training Methodology*
* **Single photon detection*
* **Scientific Prospects*
* **Special Accelerator Modes*
* **Fresh-slice x-ray pairs*
* **Two-bunch x-ray pairs**
* State-of-the-art Instrumentation
* 1. The chemRIXS instrument for materials
* 2. Future capabilities
* VI Conclusion
* VII Acknowledgements
## I Introduction
In recent years, remarkable new phases of matter have been both predicted and measured, such as quantum spin liquids, skyrmions, strongly spin-orbit coupled materials, quantum spin Hall insulators, and helical topological superconductors. These phases often arise due to delicate combination of multiple interactions such as quantum confinement or magnetic frustration, and can display magnetic features with both short-range and long-range order ranging from sub-nanometer to micron length scales [1]. Quantum materials exhibiting this behavior hold promise for highly efficient electronic- and spin-transport as well as tunability for technological applications [2; 3]. One such field is that of spintronics, where the magnetic degree of freedom can be used as a knob for new functionalities and enable new abilities to control materials [4; 5].
A common theme in "quantum engineering" of materials and devices is the ability to temporarily drive one phase into another, usually by the introduction of a symmetry-breaking mechanism or radiative excitation [6; 7; 8; 9; 10; 11; 12], or to create new transient states of matter [13; 14; 15]. This necessitates a robust theoretical understanding of how systems respond to stimulation at ultrafast time-scales. One such example in the field of magnetism is the 2D Van der Waals materials [16; 17]. Here, theoretical models can be tested directly, such as the Berezinskii-Kosterlitz-Thouless transition which predicts topological order. In the 2D limit, magnetic fluctuations can become dominant, and can also act to mediate the formation of other types of hidden quantum phases [18]. Probing such fluctuations requires the development of new experimental tools with sensitivity on the requisite time- and length- scales.
Another rapidly growing field of study is the understanding and control of discrete topological objects, such as magnetic skyrmions. These have been shown to respond to small magnetic fields with incredible implications for technological applications such as computer memory [2; 19]. While the motion of individual spin-moments can be described on the nanosecond time-scale by the Landau-Lifshitz-Gilbert equation, descriptions of spin-dynamics alone are insufficient for getting the full picture, since the emergence of skyrmion spin-textures arise from complicated competitions between a host of magnetic interactions, and can involve quasi-particles which are many lattice units in size. These structures exhibit dynamics that range across many time-scales from microseconds to the ultrafast, and can span length-scales much larger than the unit cells of their parent lattice [8; 20; 21; 22].
Many powerful methods exist and have continued to be developed for understanding dynamics related to novel magnetic phenomena. Nanoscale textures such as those seen in skyrmion crystals for instance are often studied using Lorentz Transmission Electron Microscopy (LTEM), a powerful tool for analyzing magnetic structure with down to 2 nm resolution [23]. However, LTEM is not sensitive to the rapid changes in the magnetic structure discussed above due to long integration times lasting seconds to minutes. Despite challenges associated with the low readout-rates of LTEM detectors, efforts to push LTEM to the ultra-fast regime using stroboscopic methods are currently being pursued [24; 25]. Small-Angle Neutron Scattering (SANS) and Neutron Spin-Echo spectroscopy (NSE) are additional important methods for accessing the dynamic structure factor down to nanosecond time-scales with atomic precision, but can sometimes be challenging compared to x-ray sources due to the low absorption of neutrons in some materials, as compared to electromagnetic radiation [26; 27]. This aspect can hinder the capability of neutron methods for studying thin samples and systems undergoing rapid time-development, where neutron signal flux has to be averaged over long times. In order to make continued progress, novel methods are needed to access magnetic structure and excitations at relevant length-scales as above, but that additionally can access dynamic phenomena at ultrafast timescales.
Complementary tools now are available which can target some of the modern topics in magnetism for the study of dynamics at the atomic scale using X-ray Free-Electron Lasers (X-FEL). This effort has a long history [28], from the first demonstrated rapid quenching of magnetic order [29] to the early use case of using the Stanford Linear Accelerator to create short magnetic field pulses to read and write in magnetic recording media [30], and now these new machines are creating enhanced capabilities in this area (For some good reviews, see [31; 32; 33]). These type of sources have had impact in many scientific research areas [34], notably in wide-ranging areas such as atomic and molecular science [35; 36; 37], astrophysics [38; 39], condensed matter and materials physics [13; 40; 41; 42], and structural biology [43; 44]. This versatility is mainly owing to the ability to provide femtosecond x-ray pulse durations, large pulse energies, and repetition rates on the order of MHz [45; 46; 47; 34]. The photon energy is also typically tunable, allowing for element-specific resonant x-ray scattering for instance, which can isolate and enhance the magnetic structure signal from the sample [48; 49; 50; 51; 52; 53; 54; 55]. These next generation light sources hold vast potential for driving the field of magnetism to new heights.
It is in this context that we provide a forward-looking perspective in the field of magnetism focused on methods based on X-FEL radiation, especially as it relates to the ultrafast regime. We specifically focus on three methods which could be transformative in this area. With a close coupling between theory, combined with recent machine-assisted analysis techniques, we outline recent progress which has occurred by optimizing this overlap. After a description of the experimental methods, we report results on a cross-section of magnetic systems. We support this with theoretical tools which have potential for advanced time domain studies, as well as machine-learning developments focused on the single photon measurement. Finally, we provide an outlook for spin-sensitive studies in the ultrafast regime. We conclude with a discussion on the
scientific prospects which will be available with the latest accelerator modes and new instruments being constructed at the LCLS-II.
## II Ultrafast experimental methods
### THz / X-ray pump-probe scattering
One area of focus combines femtosecond x-ray probes with THz excitation of materials, especially in the area of magnetism [56]. Using resonant x-ray scattering, the magnetic structure can be directly probed, such as the \(L-\)edge, responsible for \(p\)-to-\(d\) transitions of the magnetic ion. A powerful use of X-FELs is to combine this sensitivity of the magnetic structure with THz excitation. With short-pulsed THz radiation, low energy modes can be directly excited for spin relaxation, spin enhancement, or coherent spin control [41]. For instance, ultrashort THz pump pulses can directly couple to electromagnons [41]. By using soft x-ray scattering at the Mn \(L-\)edge to probe the spin state in TbMnO\({}_{3}\), strong field THz was used to both excite and study the response of an electromagnon excitation [57; 58]. With current developments underway at the LCLS-II (see Sec.V.3.1 and Sec.V.3.2), THz pumping will soon be possible while probing with high-repetition rate soft x-rays for direct sensitivity to different types of electronic ordering.
Another example is in non-resonant THz pumping, where THz excitation has been shown to have spin sensitivity. Preliminary measurements were carried out [59] on a manganite single crystal of Nd\({}_{1-x}\)Sr\({}_{1+x}\)MnO\({}_{4}\) (\(x\) = 2/3) consisting of non-resonant THz pumping the system, and x-ray or optical probe. This sample was grown by the floating-zone method [60] and polished along the (110) direction. Complementary measurements of optical conductivity [61], 800 nm pump-probe measurements with soft x-ray resonant probe [55], and time-resolved optical reflectivity measurements [55] have all been carried out. A high-power Ti:sapphire-based laser (1.55 eV) with a 50-fs pulse duration and 120-Hz repetition rate which was split and cross-polarized for the pump pulses, which gives a temporal resolution of about 75 fs. THz generation was produced through non-linear rectification using a organic salt crystal (DAST) to generate field strengths in excess of 300 kV/cm (See Fig.1a). By cooling the sample with a liquid He cryostat in UHV [58; 62], the crystal could be studied below the Neel temperature of 90 K [55].
The main THz results demonstrate how sensitive the magnetic ordering is to THz excitation, and are shown in Fig.1b. This illustrates the reflectivity response to high-field, short-pulse THz radiation centered at around 1.5 THz, the frequency spectrum of which is shown in Fig.1b. The curves show the THz pulse trace (red), the negligible response to IR light above the ordering temperature at 300K (black), and the response of the spin state which occurs below the Neel temperature (blue). When the crystal is magnetically ordered, the THz response shows a much more dramatic response compared to above room temperature. This is reminiscent of other effects that seem to be enhanced with THz radiation, such as the surprising sensitivity to the superconducting condensate in the presence of charge rather than magnetic order, in the high-temperature superconductor YBCO [63]. These results point to the value of exploiting the use of the strong THz response to magnetic systems with different types of ordering while using the high repetition rate capability at X-FELs to map out the ultrafast magnetic response in fresh detail.
### X-ray Photon Fluctuation Spectroscopy
Another area of anticipation is in using x-ray pulses with different separation times between pulses, to perform 'probe' measurements to study magnetic fluctuations, sometimes referred to as X-ray Photon Fluctuation Spectroscopy' (XPFS) [64]. This is similar to X-ray intensity fluctuation spectroscopy (XIFS) [65] or X-ray photon correlation spectroscopy (XPCS) [66], but instead of correlating scattered x-ray speckle patterns between shots, the shots are added together and the contrast is extracted from the pulse pair, when the pulses within each pair are finely spaced. These methods take advantage of the high degree of coherence of the x-rays and advanced light sources to produce'speckle' patterns, where scattered photons create a complex pattern based on the exact configuration of the system, the configuration which is typically not detected but averaged over when the degree of coherence is not high. By monitoring this speckle pattern, fluctuations of the structure can be measured and related back to theory to directly access the interactions between constituents within the system.
XPFS is an ultrafast version of XIFS or XPCS and is in the spirit of speckle visibility spectroscopy, where the contrast is analyzed rather than the intensity-intensity autocorrelation function, but the contrast is obtained from a pair of summed pulses [67]. This benefit provides technical motivation because in this case, the area detector collecting images does not have to be read out at the rate of the pulse separation, but rather the repetition rate of the pulse pairs. This capability provides an incredible potential for access to much shorter times. Importantly, the information can still be captured with the summation of the individual pulses, when keeping track of the number of pulses added and with the caveat that the signal-to-noise is lower for multi-shot images.
By using XPFS, ground state fluctuations can be studied of a magnetic texture in its natural state, without inducing the non-equilibrium nature which is typically associated with the ultrafast. This also requires ultra-short pulses to take snap shots of the magnetic state for different times, but the focus is to make a comparison of the system for different times. For instance, this has been carried out in the multilayered system FeGd [68; 69; 70], which has been shown to form a skyrmion lattice [71; 72; 73]. Here by varying the pulse separation, a damped 'phonon-like' mode was measured of the skyrmion lattice. This is reminiscent of the Goldstone mode, and was observed by directly probing the resonant scattering from the magnetic quasi-particles.
One point to note is that, in using this method, equal pulses are needed to extract the fluctuations in the system being
studied. We demonstrate this by showing new data from the damped oscillatory fluctuations from the skyrmion lattice mentioned above, in the FeGd system in Fig. 2. In a Self-Amplified Spontaneous Emission (SASE) process such as that produced at the LCLS, each pulse amplitude generated in the accelerator can vary dramatically. This can be captured in the nanosecond regime by using a fast digitizer as a diagnostic [68; 69; 74; 75]. In Fig. 2, we analyze pulses that have the pulse ratio between the first and second pulses \(\Lambda_{12}=a_{1}/a_{2}-1\leq 10\%\), or vice versa \(\Lambda_{21}=a_{2}/a_{1}-1\leq 10\%\), shown in the blue curve, as reported by Seaberg et al. [70]. This was determined to be the closest in amplitude one could analyze, while leaving reasonable statistics for this data set. In addition, we show plots with \(\Lambda\leq 12.5\%\) (green), and \(\Lambda\leq 20\%\) (pink). These plots demonstrate both the washing out of the oscillation amplitude in the correlation function as well as the larger background at long times. Both of these effects are expected as a result of adding pulse pairs with more'single-pulse' characteristics. For instance, in the extreme case, where one pulse is large while the other is negligible, the contrast will approach \(C(\mathbf{q},\tau)\sim 1\), the value of a single shot - regardless of the state of the system. This behavior underpins the importance of this type of diagnostic in XPFS and the need to produce equal pulses in this mode. Future work could correct for this amplitude ratio and retain the largely asymmetric data to improve statistics further.
Moreover, recent work has focused on changing the ratio of the two probes to increase the first well beyond the second pulse by adjusting the pulse energies generated from the machine (Sec. V.2.1). The idea of this option is to perform x-ray pump / x-ray probe by selectively x-ray pumping the system to study the response with soft x-rays. This is quite different in nature to XPFS and instead has the goal of exciting the system at very high energies.
Finally, we point out the natural extension of this, where one combines XPFS with an optical/THz pump pulse to probe non-equilibrium fluctuations in the excited state. Here the idea would instead be to not keep the system in equilibrium, but understand how the dynamical heterogeneity takes place during the excitation process. This is more sophisticated then a typical pump-probe study because much more information about the length-scale is available than in the scattered intensity. In other words, the monitoring of the contrast can be used
Figure 1: Amplitude spectrum of the THz pump pulse obtained from the Fourier transform of its time trace. (b) Temporal evolution of the 800 nm optical reflectivity (left axis) at 50 K (blue) and 300 K (black) after p-polarized THz photoexcitation of the manganite single crystal, Nd\({}_{1-x}\)Sr\({}_{1+x}\)MnO\({}_{4}\) (\(x=2\)/3). The lower temperature is below the Neel temperature in this crystal and is likely demonstrating enhanced sensitivity to the magnetic order. The time trace of a single THz pulse is shown in red (right axis).
Figure 2: The time dependent structure factor \(S(\mathbf{q},t)\) of the skyrmion lattice of FeGd recorded at the skyrmion lattice scattering peak. This is a multi-layered system which forms a lattice and spontaneous dynamics can be measured with XPFS on nanosecond times. The correlation function as a function of amplitude ratio \(\Lambda=20\%\) (pink), 12.5% (green), and 10% (blue). The curves are plotted along with an pure exponential decay of the same time constant (dotted lines) are offset by 0.05 for clarity.
for going beyond the coherent response to probe disorder and heterogeneity in non-equilibrium systems as well.
### Ultrafast Diffuse X-ray Scattering
A third method that will be critical to exploit in the field of ultrafast magnetism is in the study of diffuse magnetic scattering, where weakly or less well defined long-range scattering can be measured. Features that often define the most interesting properties are in regions where materials exhibit well-defined structure and excitations, and can be resolved in reciprocal space and can even be controlled externally (see Sec. II.1). However, sometimes considerable information is contained away from the well-defined structural responses of the system, in the shorter range interactions, which disturb the density-density correlations and lead to diffuse scattering near the peaks corresponding to long-range order. For example, as the thermal motion of atoms about their sites generates thermal diffuse scattering that can be used to obtain information about phonons [76], there is analogous information which will shed light on the magnetic structure as well. In this case, scattering from different components can overlap in reciprocal space, but can be unraveled by observing different signatures in the time domain.
For example, in the spin-glass systems, structural or charge scattering can obscure magnetic scattering directly related to the spin structure. In thin films of CuMn, this has been observed by comparing non-resonant to resonant scattering at the Mn-edge as a function of momentum transfer. This was shown to be successful in the measurement of the 4-spin correlation function which displays dynamics on very slow timescales, at the level of hundreds to thousands of seconds, to determine the Edwards-Anderson order parameter [77]. Diffuse scattering was furthermore measured in a forward scattering geometry at the LCLS, shown in Fig. 3. This shows a resonant coherent speckle pattern of the diffuse scattering around \(q=0\) using the XPFS prototype instrument at the LCLS [75]. It was shown that the large amount of diffuse scattering in the spin glass state could be measured out to large \(q\), up to almost \(100\,\mathrm{mA}^{-1}\), and could furthermore be captured on the order of one pulse width, about 100fs. Here, instead of using resonant enhancement to separate the charge and magnetic scattering for extraction of pure spin dynamics, the dynamical signatures of the scattering could be used to measure the dynamic spin component, on top of the static charge.
Another path for magnetic diffuse x-ray scattering is working at a suitable resonant edge, which under the right conditions is advisable to boost the observable magnetic signal. To push to the true ultrafast regime requires some of the techniques discussed earlier in this section (see Sec. II.2 for instance), especially with new electron accelerator technology [78] (see Sec. V.2.1). A celebrated example of this in magnetism is the observation of the so-called "pinch points" that are characteristic of the spin ice state [79]. Diffuse scattering studies using neutrons for instance have been carried out, where the spin-\(\frac{1}{2}\) neutrons interact directly with the magnetic fields generated by magnetic ions. Such measurements typically take a long time due to the limited neutron flux available, and can usually be considered as long-time averages of the short-range order. As we might expect from the fluctuation-dissipation theorem, this short-range scattering may display dynamics on a range of time scales, depending on the specific origin of the diffuse scattering. This is well studied in soft condensed matter, for example regarding the glass transition in polymers [80], but less so in crystalline materials.
On longer time scales, diffuse scattering from neutron scattering on frustrated magnets has been observed to vary on time scales of 10s of minutes, with changes failing to stabilize over several hours in the Ising spin chain Ca\({}_{3}\)Co\({}_{2}\)O\({}_{6}\)[81]. In a similar type of system, \(\gamma\)-CoV\({}_{2}\)O\({}_{6}\), similar changes occur but on a much shorter time scale, with relaxations observed by muon spin rotation decaying over several \(\mu\)s [82]. Using neutrons, the timescale can be pushed shorter by using the neutron spin-echo technique [83], where timescales ps to 100s of \(\mu\)s can be probed. This has also been used extensively in the study of polymers, but again is very flux hungry, with a limited number of instruments around the world.
Just as with polymer glasses, magnetic glasses, quantum spin glasses, and frustrated magnets are classes of materials where diffuse scattering is vital in being able to unravel the nature of the interactions, which are frustrated either geometrically [84] or by introducing site or bond disorder to ran
Figure 3: Resonant speckle pattern collected at an X-FEL which can be used to study nanosecond scale spontaneous fluctuations of the prototypical spin-glass system CuMn. The image shows the spin-glass speckle over a 500\(\times\)500 pixel image using the XPFS prototype instrument [75]. The image was collected at the \(L-\)edge resonance for Mn in a forward scattering geometry and demonstrates the degree to which magnetism can be extracted with a large amount of diffuse scattering and on short timescales, of order one pulse length of about 100fs. The center of the peak located \(q=0\) is positioned to the right of the sensor region.
domize the exchange interactions [85]. Neutron spin echo has been used to study a variety of spin glasses, often showing that significant dynamical processes exist on sub-picosecond timescales [86], but no ultrafast version of any of these neutron scattering methods exists. By targeting the many areas which have been addressed above in magnetic neutron diffuse scattering with ultrafast x-rays, a wide ranging and fertile ground is available for exploration.
## III Theory
To take full advantage of the research available with the expanding X-FEL capabilities, certain theoretical tools are vital to tie the methods as outlined above together. Especially due to the notorious scarcity of X-FEL beamtime and experimental complexity of the methods involved. Strong predictive models are needed before and during the experiment to ensure the allocated time is used optimally and efficiently. Furthermore, because of the rich variety of interactions between x-rays and materials, a theoretical perspective is crucial when analyzing data in order to distinguish signal from background. Since the focal point here is on resonant x-ray scattering, we start with an overview of the theoretical background for the magnetic cross-section, with an added discussion of how this relates directly to XPFS. This is followed by a discussion of Density Functional Theory (DFT) and the numerical methods we focus on for magnetism, Exact Diagnolization (ED) and the Density Matrix Renormalization Group (DMRG).
### Magnetic cross-section and resonant XPFS
The main mechanism we will focus on in this article is resonant magnetic scattering. In an experiment this is achieved by tuning the incoming x-ray photon energy to an absorption edge of the metallic ion carrying the magnetic spin-moments originating from unpaired valence electrons. To model this, we typically focus on dipole transitions, though the quadropolar channel can also be studied. The full cross-section for resonant scattering for the electric dipole transitions was first worked out by M. Blume [87] and is given by:
\[f=f_{c}-if_{m1}(\epsilon_{f}^{s}\times\epsilon_{i})\cdot\mathbf{s}+f_{m2}( \epsilon_{f}^{s}\cdot\mathbf{s})(\epsilon_{i}\cdot\mathbf{s}) \tag{1}\]
where \(\epsilon\) represents the incoming and final polarization states, \(s\) is the spin of the atom, and the \(f_{i}\)'s are charge, first, and second order magnetic, frequency-dependent scattering amplitudes.
Typically, in a scattering experiment, we project the cross-section into components that are either in the scattering plane, or orthogonal to the scattering plane. For our purposes discussed in this paper, we tend to focus on the second term, which for small angle scattering gives only off-diagonal matrix elements for the scattering process [88]:
\[\epsilon_{f}^{s}\times\epsilon_{i}=\left(\begin{array}{cc}\epsilon_{\sigma} ^{s}\times\epsilon_{\sigma}&\epsilon_{\sigma}^{s}\times\epsilon_{\pi}\\ \epsilon_{\pi}^{s}\times\epsilon_{\sigma}&\epsilon_{\pi}^{s}\times\epsilon_{ \pi}\end{array}\right)=\left(\begin{array}{cc}0&\mathbf{k_{i}}\\ -\mathbf{k_{f}}&0\end{array}\right)\]
In this case, the cross-section goes as \(\sim k\cdot s\) and is optimized for spins pointing out of the sample plane, or parallel to the incoming beam. The resonant intensity enhancement can be several hundredfold that of the non-resonant magnetic contribution, but is still often small compared to the intensity originating from charge-scattering. In practice, it is best to choose Bragg reflections which are forbidden by the space-group of the chemical lattice but allowed by the magnetic sublattice, so that the first term of 1 can be ignored and only the magnetic terms remain.
In order to observe the dynamics associated with the magnetic scatterers as described in II.2 the resultant scattering intensity from 1 is measured as a time-series at a region of interest in reciprocal space \(q\). For a typical XPCS experiment, the intensity-intensity autocorrelation function can be calculated:
\[g_{2}(q,\tau)=\frac{\left\langle I(q,t)I(q,t+\tau)\right\rangle}{\left\langle I (q,t)\right\rangle^{2}} \tag{2}\]
where \(\tau\) is the time difference between intensities at different times, and the brackets designate an average over \(t\). Importantly, this can be cast in terms of the intermediate scattering function by the Siegert relation [89] as:
\[g_{2}(q,\tau)=1+A[S(q,\tau)/S(q)]^{2} \tag{3}\]
where the intermediate scattering function is equal to the field-field correlation function, \(g_{1}(q,\tau)\). This holds as long as the scattering being observed is a Gaussian process, with the phase having an equal probability on the range of \(\phi\in[0,2\pi]^{90}\). One note here is that when representing magnetic x-ray scattering, this is not the spin-spin correlation function, which is typically calculated from theory, but the squared amplitude of the spin-spin correlation function, or a 4-spin correlation function. However, one is also able to carry out so-called 'heterodyne' measurements, and so provide access directly to the spin-spin correlation function.
For the XPFS measurements mentioned in Sec. II.2, one additional element is the relationship of the contrast, which can be directly calculated from the summed pulses, and the correlation function above. An important development in this area was in the demonstration of the equivalence of these two quantities to within a multiplicative factor [91]. This indicates that a contrast measurement is able to retrieve the equivalent information about the intermediate scattering function, as in XPCS.
Lastly, typical experiments rely on both the large pulse energies and the pulse structure of the beam in time, but the scattered photons are usually collected in the sparse limit. In this case, photon counting is necessary to evaluate the contrast. Because the beam is fully coherent, the contrast can be determined by fitting the distribution of photon counts per speckle to the negative binomial distribution, which relates the contrast \(C=C(q,\tau)\) to the probability of \(k-\)events for a given average intensity, \(\overline{k}\):
\[P(k)=A_{0}(k,M)\left(\frac{\overline{k}}{\overline{k}+M}\right)^{k}\left( \frac{M}{\overline{k}+M}\right)^{M}, \tag{4}\]
where \(A_{0}(k,M)\) is a normalization constant which depends on the contrast and the speckle photon density, given by:
\[A_{0}(k,M)=\frac{\Gamma(k+M)}{\Gamma(M)\Gamma(k+1)} \tag{5}\]
In Eq. 4 and Eq. 5, the dependence is expressed as the number of degrees of freedom of the speckle pattern \(M\), or \(M=1/\sqrt{C(q,\tau)}\). Since this equation can not be solved analytically, the parameters of the distribution are usually estimated by invoking estimators that are valid in the low \(\overline{k}\) limit [92] or by using maximum likelihood estimation [93]. Under certain conditions, an analytical solution has been shown to exist when multiple \(k-\)events are observed [64].
### Density Functional Theory
Next, we outline our theoretical approach, which starts with first-principles density functional theory (DFT) based modeling. The recipe proceeds along the following steps: (1) Advanced density functionals [94; 95], such as the recently constructed SCAN functional [96], are used to gain a handle on the ground-state electronic, magnetic, and topological structure. Spin-orbit coupling (SOC) effects can be accounted for in these computations [97]. (2) First-principles spin-resolved band structures and wavefunctions are then used to evaluate magnetic anisotropy energies, magnetic moments, exchange parameters, anisotropic exchange coefficients, and the Dzyaloshinskii-Moriya interaction (DMI) [98; 99; 100]. (3) Informed by first-principles results, material-specific, effective tight-binding model Hamiltonians can be constructed for incorporating effects of electron-electron interactions in our modeling. (4) Using atomistic spin dynamics (ASD) simulations with our model Hamiltonians, we can investigate the evolution of skyrmion states in various materials [101; 102]. Finally, (5) the preceding steps can be repeated after revealing effects of strains and magnetic fields on effects such as skyrmion formation.
The modeling of skyrmion structures within the DFT framework is quite challenging due to the large size of the magnetic unit cells involved. Therefore a combination of DFT and ASD simulations are needed to study DMI-induced skyrmions in the presence of an external magnetic field. This has previously been used in CrI\({}_{3}\)[103]. It involves finding the solution to the Landau-Lifshitz-Gilbert (LLG) equation [102]:
\[\frac{dm}{dt}=-|\gamma|m\times\mathcal{H}_{eff}+\alpha(m\times\frac{dm}{dt}) \tag{6}\]
where \(\gamma\) is the gyromagnetic ratio, \(m\) is the magnetic moment vector for the magnetic atom, \(\alpha\) is the Gilbert damping coefficient and \(\mathcal{H}_{eff}\) is the effective magnetic field,
\[\mathcal{H}_{eff}(x)=-\nabla\mathcal{H}(x), \tag{7}\]
where \(\mathcal{H}\) is the Hamiltonian of the system, which can be written as:
\[\mathcal{H}=\mathcal{H}_{ex}+\mathcal{H}_{ani}+\mathcal{H}_{z}+\mathcal{H}_{ DMI} \tag{8}\]
Here \(\mathcal{H}_{ex}\), \(\mathcal{H}_{ani}\), \(\mathcal{H}_{z}\), and \(\mathcal{H}_{DMI}\) are the exchange, anisotropy, Zeeman, and DMI terms, respectively. This can be expressed more specifically as:
\[\mathcal{H}=-\sum_{<ij>}J_{ij}n_{i}\cdot n_{j}-\sum_{ij}K_{j}(\hat{K}_{j}\cdot n _{i})^{2}-\sum_{i}\mu_{i}B\cdot n_{i}-\sum_{<ij>}D_{ij}\cdot(n_{i}\times n_{ j}) \tag{9}\]
where \(J_{ij}\) is the Heisenberg symmetric exchange, \(K_{j}\) is the single-ion magnetic anisotropy, \(B\) is the external magnetic field, and \(D_{ij}\) is the DMI vector. Note that \(\hat{K}_{j}\) denotes the direction of the anisotropy and \(m_{i}=\mu_{i}\cdot n_{i}\) is the magnetic moment.
The first step of this approach is to obtain the electronic structure of a given magnetic material within the DFT framework. We will then use this electronic structure to build a highly accurate real-space low-energy model using Wannier functions as the basis implemented in the Wannier90 code [104; 105]. Once we achieve the low-energy model, we can employ Green's functions and magnetic force theorem to systematically calculate the exchange parameters of the Heisenberg Hamiltonian following the Korotin approach [106]. Following this approach, we can get the isotropic exchange parameters, DMI, and the anisotropic exchange between two sites i,and j from the following equations.
\[J_{ij}=Im(A_{ij}^{00}-A_{ij}^{xx}-A_{ij}^{yy}-A_{ij}^{zz}), \tag{10}\]
\[J^{ani}=Im(A^{\mu\nu}+A^{\nu\alpha}) \tag{11}\]
\[\vec{D}_{ij}^{\mu}=Re(A_{ij}^{0u}-A_{ij}^{\mu 0}), \tag{12}\]
where \(A_{ij}^{\mu\nu}=-\frac{1}{\pi}\int_{-\infty}^{E_{F}}Tr\{\textbf{p}_{i}^{*} \textbf{G}_{ij}^{\mu}\textbf{p}_{j}^{*}\textbf{G}_{ji}^{\nu}\}d\varepsilon\), \(u,v=\{0,x,y,z\}\), \(\textbf{G}_{ij}\) is the Greens function, and \(\textbf{p}_{i}=\textbf{H}_{ii}(R=0)\) is the intra-atomic part of the Hamiltonian. These magnetic exchange parameters are then used in the ASD simulation to predict the presence of skyrmions. To our satisfaction, this method has been extensively tested for various magnetic materials and predicted their magnetic properties. For example, we hested this method to obtain exchange parameters for NiPS\({}_{3}\) with a zig-zag AFM structure. The first, second, and third
nearest neighbor exchange parameters are obtained as 1.2917 meV, 0.1044 meV, and -4.089 meV, respectively, which are similar to the previous data-, 1.093 meV, 0.613 meV, and -3.882 meV, respectively.
### Numerical methods
The heart of this process after using first-principles DFT computations as the starting point input, is in the strongly correlated numerical methods. Studying the ground state of an effective model Hamiltonian derived from DFT computations can uncover the low-energy physics in quantum magnetism. The spin interactions in the Hamiltonian are typically strongly correlated, which requires associated methods, such as exact diagonalization (ED) [107], variational Monte Carlo [108; 109; 110], and density matrix renormalization group (DMRG) [111], to simulate an accurate ground state of super-exchange electrons.
Since quantum Monte Carlo suffers from the severe "sign problem" [112; 113] at low temperatures and the possible system size can be limited in ED [114], we point out that DMRG can have an impact on the ultrafast study of quantum magnets. DMRG is an unbiased numerical method that is widely used for calculating quantum systems. The second generation DMRG [115; 116; 117; 118; 119; 120] is based on the matrix product state (MPS) and matrix product operator (MPO), which for a given Hamiltonian, such as that defined in Eq. (9), can precisely encode a MPO. This allows the algorithm to target the true ground state by minimizing the variational ground state energy.
Searching for this ground state of a given Hamiltonian can be costly due to the exponentially increasing Hilbert space with system size. DMRG is a one-dimensional (1D) algorithm, as initially proposed, but has been used to simulate two-dimensional (2D) systems. This is possible by numbering each site of an interacting spin in real space into a sequence so that some of the nearest neighbor spins in real 2D become long-range pairs in the sequence. The approximation is inevitable in DMRG because it prioritizes short-range entanglement. However, the long-range entanglement, if not dominant, can also be truncated to minimize the computational cost. Therefore, DMRG always utilizes a cylinder geometry (as shown in Fig.4) with periodic boundary conditions in one direction to approximate a 2D system so that the nearest neighbor sites will not be separated by a large distance when transforming to the numbered sequence. The realistic 2D system is approached by expanding the cylinder width.
Considering conserved quantum numbers can further reduce the computational cost when approaching larger cylinders with higher accuracy. The constraints rule out unwanted states, such as violation of particle number conservation, so that one can search the variational ground state in a subspace faster. The ground state properties of quantum materials can suffer from finite-size effects, low computational accuracy, or both, so it is highly nontrivial to push the limit of large-scale DMRG calculations to achieve higher accuracy with large system sizes. Further speedup is possible through parallel computing.
Moreover, the dynamical density-matrix renormalization group (DDMRG) [121] is a variational method for calculating the zero-temperature dynamical properties in 1D and quasi 1D quantum many-body systems. Typically, it is given by a dynamical correlation function, defined as
\[G_{X}(\omega+i\eta)=-\frac{1}{\pi}\langle\psi_{0}|\hat{X}^{\dagger}\frac{1}{ E_{0}+\omega+i\eta-H}\hat{X}|\psi_{0}\rangle \tag{13}\]
Where, \(|\psi_{0}\rangle\) is the ground state and \(\hat{X}\) is a quantum operator. However, consider \(W_{X,\eta}(\omega,\psi)\) substantively with the formula
\[W_{X,\eta}(\omega,\psi)=\langle\psi|(E_{0}+\omega-H)^{2}+\eta^{2}|\psi\rangle+ \eta\langle X|\psi\rangle+\eta\langle\psi|X\rangle \tag{14}\]
Where \(|X\rangle=\hat{X}|\psi_{0}\rangle\). Then minimizing \(W_{X,\eta}(\omega,\psi)\) yields the imaginary part of the dynamical correlation function for \(\eta\to 0\), _i.e._, \(I_{X}(\omega+i\eta)=\text{Im}\ G_{X}(\omega+i\eta)=-W_{X,\eta}(\omega,\psi_{ min})/\pi\eta\). The variational problem for solving \(|\psi_{min}\rangle\) is achievable with the standard DMRG.
The zero-temperature time-dependent correlation function, defined as \(G_{X}(t\geq 0)=\langle\psi_{0}|\hat{X}(t)\hat{X}(0)|\psi_{0}\rangle\), is solvable through the Laplace transform of a spectral function \(G_{X}(\omega+i\eta)\), or using ED together with time-dependent DMRG [122]. Comparing the calculated results through the numerical techniques discussed here with real experimental data, can help confirm or further modify the microscopic model provided by DFT. Significantly, this type of modeling of magnetic systems has fresh potential for original contributions in further understanding the results using the experimental X-FEL methods in the ultrafast regime.
## IV Machine learning
In this section, we present a machine learning (ML) model that uses predictive classification to effectively replace the
Figure 4: The quasi-one-dimensional system is used for the DMRG calculations. (a) An example of a 4-leg ladder, with extension in the vertical direction. (b) For the computation to mimic the 2D condition, a cylinder geometry allows for a periodic boundary condition in one dimension.
more traditional droplet algorithms that are widely used in XPFS-type measurements. Based on fully convolutional neural networks (CNNs), we show the algorithm is able to photonize raw input XPFS patterns and extract important information under significant levels of electron cloud smearing. We demonstrate that the CNN is able to obtain discrete photon maps from our procedure, which is used to extract the contrast in this method. We previously reported the success of this pipeline using a regression based ML-approach in which the discrete number of photon counts was estimated as a continuous value [93]. Here, we present an alternative approach in which photon assignments are classified into different discrete bins via optimization with the categorical cross-entropy loss function.
### Training Methodology
To train the model, we simulated a training dataset of 100,000 XPFS patterns with contrasts between 0.7 and 0.9 as well as their corresponding photonized labels. This data was obtained from an accurate simulator developed for the Linac Coherent Light Source (LCLS) on previous XPFS data, reported in previous work [123]. Each input pattern has dimensionality 90x90x1 and corresponds to the raw ADU intensity profile of the x-ray measurment. The relevant simulation parameters are the photon energy (340 ADU), the baseline gaussian noise (\(N(0,15)\) ADU) and the probabilistic charge cloud generation (cloud radii \(\sigma_{g}\) values: [0.1,0.25,0.35,0.45,0.55,0.6]). The labels for the machine learning task correspond to reduced images with dimensionality 30x30x1 which contain the true photonized maps. Here, each pixel in the reduced image is matched to the speckle size of the raw data, and stores the number of photons detected per speckle.
The photonizing task can be formulated as a semantic segmentation computer vision problem [124]. Specifically, for each pixel in a given XPFS frame we aim to predict a corresponding integer count for the number of photons contained in that pixel. This can be conceptualized as a classification problem where the classes are the photon counts: (0, 1, 2, 3,..., 8, 8+). To represent each class, we use a one-hot encoding of the class [125]. For example, class 3 (representing a 2 photon event) is mapped to [0 0 1 0 0 0 0 0 0 0]. Therefore, the dimensionality of the output labels changes from 30x30x1 to 30x30x9 in the one-hot encoding representation. We use a U-net neural network model [93; 126] with the last layer predicting softmax probabilities corresponding to each photon count level and for each pixel \(\hat{p}_{i}\) in the resultant photon map, i.e. \(\hat{p}_{i,j}\) represents the probability that pixel \(i\) is from class \(j\). To train the model, we use a cross-entropy loss function \(L(p_{i},\hat{p}_{i})\) to measure the difference between the predicted probabilities and true one-hot encoded label (\(p_{i}\)), averaged over all pixels in all frames (N).
\[L(p_{i},\hat{p}_{i})=-\sum_{i=1}^{\text{N}}p_{i}\cdot\log\hat{p}_{i} \tag{15}\]
We obtain an estimate of the contrast by using a per-image estimate for \(\tilde{k}\) and using the maximum likelihood procedure [127]. Error bars for the contrast follow the asymptotic 95% CI interval: \(\hat{\beta}\pm\frac{1.96}{\sqrt{\pi l(\hat{p})}}\). The trained neural network model is freely available at [https://github.com/src47/CNN_XPFS](https://github.com/src47/CNN_XPFS).
The ML model discussed here was shown to perform comparably with even the most advanced droplet algorithms, such as the GGG [123], on typical simulated XPFS data with a number of evaluation metrics. In addition, since we train our models on simulated data with a known ground truth, we furthermore are able to evaluate the accuracy of the trained CNN, relative to the known absolute contrast value.
### Metrics
The metrics used to evaluate the photonizing task are an important part of our evaluation and are described here. For example, the overall accuracy is not a good metric because of data sparsity. In other words, a model which predicts 0 for each pixel will show a uninformatively high accuracy, but this is clearly undesirable. Therefore, we report a set of metric precision scores \(\mathcal{P}\), recall \(\mathcal{R}\), and the per-class \(\mathcal{F}1\) which can be obtained from the number of true positives (\(T_{P}\)), false positives (\(F_{P}\)) and false negatives (\(F_{N}\)), defined as:
\[\begin{split}&\mathcal{P}=\frac{T_{P}}{T_{P}+F_{P}}\\ &\mathcal{R}=\frac{T_{P}}{T_{P}+F_{N}}\\ &\mathcal{F}1\ =\frac{2*\mathcal{P}*\mathcal{R}}{\mathcal{P}+ \mathcal{R}}\end{split} \tag{16}\]
As an example, the per-class precision for the 2-photon class is the number of true 2-photon events divided by the sum of the true number of 2-photon events and the number of 2-photon events that are incorrectly predicted. In Table 1, we show the results for three different metrics to evaluate the contrast based on an average contrast of 0.7. The droplet algorithm is compared to the CNN using all three of these metrics, and the results are shown for different photon number events. For this work, we used an average simulated contrast value of between 0.7 and 0.9, as this is what was measured in a previous experiment [68] and was used as a basis for the modeling [123]. In Table 2, the same data are displayed for an average simulated contrast value of 0.9.
In all test cases here, the results of the ML algorithm based on classification due to the discrete number of photon events was shown to work just as well as the droplet algorithm. In addition to being much faster, a critical benefit is that there is no user input needed for this algorithm, as is the case for the droplet algorithm, as the model has fully learned how to interpret sparse speckle patterns.
### Single photon detection
A representative example of a true label photon map, the corresponding CNN, and the GGG droplet algorithm predictions are shown in Figure 5. Here, it is clear that the CNN and the droplet algorithm agree for most pixels and give visually similar profiles. Although these two methods give similar profiles, small differences in photon count predictions can lead to very different contrast estimates. This is especially true in cases where the counts of high photon events are incorrectly estimated. In Figure 6, we compare contrast estimates derived from both the optimized GGG droplet algorithm [123] and our CNN model (Figure 6). For the ranges which match the experimental data (contrast values from 0.7 - 0.9), we find that the CNN and the droplet models agree well (Figure 6).
In order to determine which algorithm gives superior performance in the single photon detection regime, we evaluate the full performance using the metrics as outlined in Equation 16 for photon maps obtained from both the droplet algorithm and the CNN for the same contrast levels of 0.7 and 0.9. The results are summarized in Table 1-2. In conclusion, the CNN performs comparably, or outperforms, the droplet algorithm in 97% of the cases across the various metrics.
As briefly mentioned in Section IV.2, it is likely that the CNN model outperforms the GGG algorithm because it it able to learn a variety of filters from labeled datasets constructed using physical knowledge of the scattered photon radiation at the detector. Furthermore, the increased representational power of neural networks allows for the possibility of learning different filters which can handle experimental subtleties, such as variable charge cloud sizes or a wide range of average photon energies [93]. In contrast, the droplet algorithm is unsupervised and requires substantial tuning 'by-hand' to produce optimal results. The results outlined here based on a new ML algorithm to handle XPFS data will be able to accelerate progress in this field and also points to the potential for bridging other efforts in this area, such as the continuous photon modeling approach utilized previously [93]. Moreover, this also serves to bring more advanced X-ray methods such as this to a wider range of scientific areas, easing the obstacles for general practitioners to execute complex experiments with user-friendly computational tools.
## V Outlook
### Scientific Prospects
With the ultrafast experimental methods outlined here, the prospects for new routes towards understanding magnetism are numerous. THz excitation enables a variety of states to be excited and probed directly, especially with resonant x-ray scattering. By new developments with the pulse structure, x-ray scattering pulses can be compared to study spontaneous fluctuations in the natural ground state of magnetic systems. Finally, diffuse scattering on ultra-fast timescales can also offer new features for the field of magnetism by extending the studies to short range ordered structures, especially in systems that host a large degree of magnetic frustration, or those with short correlation lengths, such as in the spin-glass field. To maximize the potential of these techniques, we discuss in the following special modes of ultra-fast x-ray experiments. We conclude the future outlook with a description about novel instrumentation, and current instruments presently being constructed.
### Special Accelerator Modes
Generating different x-ray pulse separation times for performing probe-probe such as XPFS, or x-ray pump / x-ray probe studies as mentioned earlier, is important for executing the ultrafast methods outlined in this paper. This can be accomplished by either optics or special pulse modes developed
\begin{table}
\begin{tabular}{l l l l l} Photon Count\({}^{\text{a}}\) & Precision (Droplet, CNN) & Recall (Droplet, CNN) & F1-score (Droplet, CNN) & Data Volume \\ \hline
0 & 1.00, 1.00 & 1.00, 1.00 & 1.00, 1.00 & 3957196 \\
1 & 0.96, 0.97 & 0.96, 0.97 & 0.96, 0.97 & 472173 \\
2 & 0.91, 0.92 & 0.90, 0.93 & 0.90, 0.92 & 60729 \\
3 & 0.84, 0.86 & 0.83, 0.87 & 0.84, 0.87 & 8536 \\
4 & 0.75, 0.73 & 0.75, 0.77 & 0.75, 0.75 & 1140 \\ \end{tabular} \({}^{\text{b}}\) Performance on 5, 6, 7, 8, 8+ photon events is not shown as there is insufficient testing data for these classes.
\end{table}
Table 2: Per-class Precision, Recall and F1-score for the CNN and Droplet methods (contrast = 0.9).
\begin{table}
\begin{tabular}{l l l l l} Photon Count\({}^{\text{b}}\) & Precision (Droplet, CNN) & Recall (Droplet, CNN) & F1-score (Droplet, CNN) & Data Volume \\ \hline
0 & 1.00, 1.00 & 1.00, 1.00 & 1.00, 1.00 & 3950406 \\
1 & 0.97, 0.97 & 0.96, 0.97 & 0.96, 0.97 & 488364 \\
2 & 0.90, 0.91 & 0.90, 0.93 & 0.90, 0.92 & 54530 \\
3 & 0.81, 0.83 & 0.83, 0.87 & 0.82, 0.85 & 5992 \\
4 & 0.73, 0.74 & 0.74, 0.77 & 0.74, 0.74 & 637 \\ \end{tabular}
\end{table}
Table 1: Per-class Precision, Recall and F1-score for the CNN and Droplet methods (contrast = 0.7).
at the accelerator. Though a large effort has been put into the use of x-ray optics for this task and research progress is tremendous [128], here we outline the two most valuable modes in the soft x-ray regime which will be important for the scientific outlook in ultrafast magnetism.
#### iii.2.1 Fresh-slice x-ray pairs
The Fresh-slice scheme [129] can produce pairs of high-power femtosecond x-ray pulses, with simple control of their wavelength and delay. The scheme is based on controlling which temporal slice of an electron bunch lases in a section of the undulator line. This is typically achieved by tilting the electron bunch and subsequently controlling the bunch trajectory. Lasing slices can be a couple of femtosecond short, and wavelengths are controlled by the undulator strength in each section, enabling color separation ranging from being at the same wavelength to larger than a factor 2. The maximum delay depends on the strength of an intra-undulator line magnetic chicane. Practically, the scheme has been demonstrated for delays up to 1 picosecond. Temporal coincidence, or exchanging the arrival time of the two pulses, is possible if the slice on the tail is set to lase in the upstream undulator section.
Figure 5: Example of a representative Ground Truth photon map (a), CNN photon map (b) and Droplet photon map (c). On close inspection, it is clear the CNN gets more of the minor features of the photon map predicted correctly.
Figure 6: Estimated contrast (the GGG droplet algorithm and CNN) versus the true contrast for both types of photon maps. The value of contrast was chosen to mimic previous experimental contrast value from previous XPFS experiments.
Figure 7: Two bucket correlation. The pulse energy of each pulse in a two-bucket approach. The red is plotted negative for convenience. The two peaks are aligned at the same energy, showing they have the same color and can arbitrarily be delayed by integer bucket values of 350ps.
#### iv.2.2 Two-bunch x-ray pairs
For longer delays, ranging from hundreds of picoseconds to hundreds of nanoseconds, two separate electron bunches are extracted at the cathode, accelerated and compressed in the linear accelerator and lase in the undulator line [78]. The unitary time separation depends on the accelerator RF frequency; for the LCLS S-Band linac, it is close to 350 picoseconds. Performance for each bunch can be similar to that of a regular, single bunch SASE pulse. Typically, the two-bucket scheme is set up to have both pulses lasing at the same wavelength, but a small color separation is achievable by having the two electron bunches at slightly different energies. For instance, Fig. 7 shows the pulse energy of each pulse in a dispersive location of the accelerator, with a positive correlation. This can be used to adjust the color of each pulse to have the same wavelength as well as equal pulse energies.
### State-of-the-art Instrumentation
Currently, instruments are being developed to take full advantage of the capabilities at these types of sources in the area of magnetism. For instance, at the SLAC National Accelerator Laboratory, the new accelerator for the LCLS-II will soon provide up to nearly \(\sim\)1 MHz repetition rate with an array of possible pumping schemes, including THz, with new instruments currently being designed, constructed, and commissioned. Some efforts have been made to retrofit current x-ray instruments, as well as the development of full-scale instruments designed for the purpose of carrying out XPFS measurements in the soft x-ray regime for a variety of different geometries. We outline two such cases here. The first is
Figure 8: The chemRIXS Instrument for Solids. An image showing the chemRIXS endstation with the enhancements designed for XPFS measurements of solids. The main sample chamber (left) houses the liquid jet system or solid material sample. The electromagnet is inserted perpendicular to the beam horizontally, while the detector and mask assembly (right) are placed in the forward scattering direction for transmission geometry measurements. All dark blue color-coding designate components that were designed for the thin film, ultrafast magnetism experiments.
Figure 9: ChemRIXS Instrument Close-up A close-up image inside the sample chamber. This chamber was modified to house an electromagnet and cryostat sample holder to perform XPFS studies on solids.
the modification of the so-called chemRIXS station recently constructed to carry out XPFS studies on materials, while the second is a future instrument currently under construction.
#### iv.2.1 The chemRIXS instrument for materials
As a first test case, we have developed a capability to take advantage of the recent beamline developed as part of the LCLS-II suite of instruments focused on ultrafast chemical science, specifically in x-ray absorption spectroscopy and resonant inelastic x-ray scattering, the so-called "chemRIXS" beamline. This is specifically focused on using liquid jet or liquid sheet jet systems for the study of ultrafast physics and chemistry experiments [130]. The spectrum of the molecules illuminated in the solvent can be optically excited and the chemical structure evolution can be followed on the time-scale of these chemical changes [131].
While not designed for solid material samples, some instrumentation enhancements were made such that first XPFS experiments could be performed on the newly designed beamline. This was carried out on thin film magnets by designing and fabricating a solid sample holder to replace the liquid jet, a mounted CCD detector, together with a detector mask, and a manipulator to control the placement of an electromagnet. The mask was used to limit the area of illumination on the detector chip to enable higher speed readout [68]. This feature can be adjusted, i.e. to collect a larger fraction of the speckle pattern and a slower rate, depending on the experimental system. The detector was placed in a forward scattering geometry, compatible with the chemRIXS set-up. This detector scheme implemented for this set of experiments has been described in detail elsewhere as a prototype XPFS instrument [75], but was here used in conjunction with the liquid jet endstation. An overview of this set-up in this configuration is shown in Fig. 8. For a close-up view inside the system, a rendition of the chemRIXS setup from inside the chamber, emphasizing the cryostat and the electromagnetic for magnetic field dependent studies, is shown in Fig. 9.
#### iv.2.2 Future capabilities
Looking forward to future capabilities, LCLS-II beamlines are already in use with new instruments on the horizon. For THz pump/ magnetic scattering probe studies or ultrafast magnetic diffuse x-ray scattering for instance, the latest capability will occur with the installation of the qRIXS instrument.
The qRIXS instrument will host a sample chamber and a rotatable spectrometer consisting of grating and high-speed 2D detector covering the range of scattering angles from 40-150 degrees in the horizontal scattering geometry. The spectrometer is designed to achieve an energy resolution of \(\sim 20\) meV at 1 keV. The sample chamber is designed to support elastic soft x-ray diffraction. The chamber is equipped with an in-vacuum diffractometer with a 6-axis degree of freedom of
Figure 10: XPFS Instrument. The image shows a rendition of the XPFS chamber housing the fast, pixelated detector, on its own movable stand. The scattering path is evacuated at UHV conditions after precision alignment. The whole apparatus mounts to the qRIXS sample chamber, and is part of the NEH 2.2 instrument at the LCLS-II, shown here installed on the beamline. The instrument is still currently under design.
motion. Sample cooling down to about 25 K will be possible with the cryogenic sample installation. The ability to introduce over-sized optics for long-wavelength THz beams, will be accommodated in the near future [56, 58, 41].
In addition the current plan, we are developing another capability to be able to carry out XPFS measurements in the soft x-ray regime as well (see Fig.10). The focal point of this instrument will be the in-vacuum, long distance, and movable detector motion. This will incorporate the 'ePixM' series designed at Stanford University and SLAC National Laboratory. This will be a large, pixelated detector that can count single photons down to the carbon edge (\(\sim\) 285 eV) and will operate at the full repetition rate of the new, superconducting accelerator, at 929 kHz. The instrument, which is still under design currently, will be capable of moving the detector about a limited scattering angle range while in UHV conditions, with the use of precision laser trackers. It will be housed in a small vacuum chamber and stand with air bearings for chamber motion. An advanced laser tracker system will place the detector coordinates in the correct geometry at a sample-to-detector distance of about 3 m. Once aligned, a scattering path length drift tube will be attached from the sample chamber, using a rotary seal, to the detector chamber. It will be evacuated to UHV conditions to accommodate soft x-ray scattering in the area of quantum materials. Once complete, this will offer the potential for first XPFS measurements with soft x-rays at a arbitrary scattering angle, important for a myriad of quantum materials and other types of studies on solids.
## VI Conclusion
In conclusion, with the latest developments at XFEL sources, studies in ultrafast magnetism are ripe for a renewed focus across many areas of research. We have targeted three new areas that are continued to be developed and will help to capitalize on fresh capabilities to spur magnetic studies in both equilibrium and non-equilibrium investigations. Combining these advanced methods with the latest progress in condensed matter theory and machine learning, the synergy and progress capable in the future for ultrafast magnetism is bright.
## VII Acknowledgements
This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0022216. Portions of this work were also supported by the U. S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515. The use of the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, is also supported by the DOE, Office of Science under the same contract. S. A. M. acknowledges support by the U. S. Office of Naval Research, In-House Lab Independent Research program. S.K., P.F., and S.R. acknowledge support by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (NEMM program MSMAG). The research at UCSD was supported by the research programs of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences (Award No. DE-SC0003678). J. J. Turner acknowledges support from the U.S. DOE, Office of Science, Basic Energy Sciences through the Early Career Research Program.
|
2307.00849 | Protection of Correlation-Induced Phase Instabilities by Exceptional
Susceptibilities | At thermal equilibrium, we find that generalized susceptibilities encoding
the static physical response properties of Hermitian many-electron systems
possess inherent non-Hermitian (NH) matrix symmetries. This leads to the
generic occurrence of exceptional points (EPs), i.e., NH spectral degeneracies,
in the generalized susceptibilities of prototypical Fermi-Hubbard models, as a
function of a single parameter such as chemical potential. We demonstrate that
these EPs are necessary to promote correlation-induced thermodynamic
instabilities, such as phase-separation occurring in the proximity of a Mott
transition, to a topologically stable phenomenon. | Matthias Reitner, Lorenzo Crippa, Dominik Robert Fus, Jan Carl Budich, Alessandro Toschi, Giorgio Sangiovanni | 2023-07-03T08:45:09Z | http://arxiv.org/abs/2307.00849v3 | # Protection of Correlation-Induced Phase Instabilities by Exceptional Susceptibilities
###### Abstract
At thermal equilibrium, we find that generalized susceptibilities encoding the static physical response properties of Hermitian many-electron systems possess inherent non-Hermitian (NH) matrix symmetries, emerging solely due to Fermi-Dirac statistics. This leads to the generic occurrence of exceptional points (EPs), i.e., NH spectral degeneracies, in the generalized susceptibilities of prototypical Fermi-Hubbard models, as a function of a single parameter such as chemical potential. We demonstrate that these EPs promote correlation-induced thermodynamic instabilities, such as phase-separation occurring in the proximity of a Mott transition, to a topologically stable phenomenon.
pacs: 71.10.-m, 71.10.-m, 71.
introduced by imaginary damping terms in the dissipative time-evolution. By contrast, since the generalized susceptibilities at the heart of our present analysis do not relate to effective complex energy spectra, their NH topology has a direct impact on natural observables, independent of idealistic assumptions on temperature and without the need for complex multi-orbital models, respectively.
NH symmetries of generalized susceptibilities - The central objects of our analysis are four-point functions describing the propagation of a particle-hole pair (see Fig. 2b) in the setting of a time-independent Hamiltonian \(H\) at thermal equilibrium. These are expressed as matrices of two fermionic Matsubara frequencies \(\nu\) and \(\nu^{\prime}\), where \(\nu^{(\prime)}=(2n^{(\prime)}+1)\pi/\beta\), \(n^{(\prime)}\in\mathbb{Z}\), and \(\beta=1/T\) is the inverse temperature. In the literature, such quantities are referred to as _generalized_ susceptibilities, since they yield static physical response functions when summed over both fermionic frequencies [50]. Specifically, we define them as
\[\chi^{\nu\nu^{\prime}}_{ph,\alpha_{1}\dots\alpha_{4}} =\overbrace{\left\langle\mathcal{T}c^{\dagger}_{\nu\alpha_{1}}c_ {\nu\alpha_{2}}c^{\dagger}_{\nu\alpha_{3}}c_{\nu^{\prime}\alpha_{4}}\right\rangle}^ {G^{(2)\nu\nu^{\prime}}_{\alpha_{1}\dots\alpha_{4}}} \tag{1}\] \[\quad-\left\langle\mathcal{T}c^{\dagger}_{\nu\alpha_{1}}c_{\nu \alpha_{2}}\right\rangle\!\!\left\langle\mathcal{T}c^{\dagger}_{\nu^{\prime} \alpha_{3}}c_{\nu^{\prime}\alpha_{4}}\right\rangle\]
(illustrated in Fig. 2a), where \(\mathcal{T}\) denotes the imaginary time ordering operator, \(\left\langle\dots\right\rangle=1/Z\operatorname{Tr}(\mathrm{e}^{-\beta H}\dots)\) the thermal expectation value, \(c^{(\dagger)}_{\nu\alpha_{i}}=\frac{1}{\sqrt{\beta}}\int_{0}^{\beta}\mathrm{d }\tau\mathrm{e}^{(-)\mathrm{i}\nu\tau}\mathrm{e}^{H\tau}c^{(\dagger)}_{\alpha _{i}}\mathrm{e}^{-H\tau}\) the Fourier transform of the (creation) annihilation operators [51], and \(G^{(2)\nu\nu^{\prime}}_{\alpha_{1}\dots\alpha_{4}}\) the two-particle Green's function. \(\alpha_{i}\) refers to, in principle, any of the degrees of freedom of the model (momenta, spin, orbital, etc.). Some properties of \(\chi^{\nu\nu^{\prime}}_{ph,\alpha_{1}\dots\alpha_{4}}\) have been already analyzed in Refs. [16; 50; 52; 53]. In this work, we are investigating the topological properties of the corresponding eigenvalue spectrum.
Taking the complex conjugate of Eq. (1) and considering \((c^{\dagger}_{\nu})^{*}=-c_{-\nu}\), and \((c_{\nu})^{*}=-c^{\dagger}_{-\nu}\)_inside_ of \(\left\langle\dots\right\rangle\), one obtains \((\chi^{\nu\nu^{\prime}}_{ph,\alpha_{1}\dots\alpha_{4}})^{*}=\chi^{-\nu^{ \prime}-\nu}_{ph,\alpha_{4}\dots\alpha_{1}}\)[54]. With simple further manipulations on the indices leaving \(\left\langle\mathcal{T}\dots\right\rangle\) invariant, this leads to \((\chi^{\nu\nu^{\prime}}_{ph,\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}})^{*}= \chi^{-\nu-\nu^{\prime}}_{ph,\alpha_{2}\alpha_{1}\alpha_{4}\alpha_{3}}\). The latter mathematical object has the form of a matrix with coefficients \(\chi^{\beta\beta^{\prime}}_{ph}\), which crucially, satisfies the relation
\[\chi^{\beta\beta^{\prime}}_{ph}=\sum_{\beta_{1}\beta_{2}}\Pi^{\beta\beta_{1}} (\chi^{\beta_{1}\beta_{2}}_{ph})^{*}\Pi^{\beta_{2}\beta^{\prime}}, \tag{2}\]
where \(\Pi^{\beta\beta^{\prime}}\) is the permutation matrix \(\beta:=(\nu,\alpha_{1},\alpha_{2})\to\beta^{\prime}:=(-\nu,\alpha_{2},\alpha_ {1})\). This property has important consequences for the eigenvalues \(\lambda\) of \(\chi_{ph}\), which belongs to the class of \(\kappa\)-real matrices \(\mathbf{K}_{r}=\mathbf{\Pi}\mathbf{K}_{r}^{*}\mathbf{\Pi}\)[55], where \(\mathbf{\Pi}\) refers to any permutation matrix. These have been shown [55] to have a characteristic polynomial with real coefficients and, hence, either real or complex conjugate eigenvalues due to the fundamental theorem of algebra. A relevant subclass of \(\kappa\)-real matrices are _centroHermitian_ matrices [56; 57], which are invariant under a transformation that combines complex conjugation with centro-symmetry, as illustrated in Fig. 2c.
In the following, we consider the _local_ generalized charge susceptibility \(\chi^{\nu\nu^{\prime}}_{c}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}}\chi^{\nu\nu^ {\prime}}_{ph,\sigma\sigma\sigma^{\prime}\sigma^{\prime}}\) of a one-orbital model that satisfies the following relation:
\[(\chi^{\nu\nu^{\prime}}_{c})^{*}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}}\chi^{- \nu-\nu^{\prime}}_{ph,\sigma\sigma^{\prime}\sigma^{\prime}}=\chi^{-\nu-\nu^ {\prime}}_{c} \tag{3}\]
and is therefore a _centroHermitian_ matrix. In addition, if the Hamiltonian possesses specific symmetries, these can impose even stricter matrix properties. For instance, for particle-hole symmetry (PHS) \(\chi^{\nu\nu^{\prime}}_{c}\) becomes real and has only real eigenvalues [17; 53].
Minimal model for exceptional susceptibilities - To illustrate our general findings, we consider a \(2\times 2\) matrix \(\chi^{2\times 2}_{ph}\) obeying the _centroHermitian_ condition:
\[\chi^{2\times 2}_{ph}=\begin{pmatrix}a+\mathrm{i}b&c-\mathrm{i}d\\ c+\mathrm{i}d&a-\mathrm{i}b\end{pmatrix}=a\cdot\mathbbm{1}+\vec{v}\cdot\vec{\sigma} \tag{4}\]
where \(a,b,c,d\in\mathbb{R}\) and \(\vec{v}=\vec{v}_{R}+i\vec{v}_{I}=(c,d,0)+i(0,0,b)\) is a complex vector. The \(a\) parameter can be safely disregarded, as it only amounts to a rigid eigenvalue shift. EPs are globally stable for a two- or higher-dimensional parameter space, because for the matrix to become non-diagonalizable, two conditions \((v_{R}^{2}-v_{I}^{2}=0\), \(\vec{v}_{R}\cdot\vec{v}_{I}=0)\) have to be simultaneously satisfied. It is immediate to see that the centroHermitian property implies that the second is always fulfilled. It is then sufficient, for the exceptional points to manifest, that \(c^{2}+d^{2}-b^{2}=0\), which implies that even in a one-dimensional space, EPs - if any are present - will be globally robust against any
perturbation representable by a matrix of the form given in Eq. (4). Crucially, no other perturbation can arise, because the centroHermitian condition does not originate from any further symmetry, but it is an intrinsic consequence of the Fermi-Dirac statistics. On the contrary, if we additionally impose PHS, \(\chi_{ph}^{2\times 2}\) becomes purely real and symmetric (\(b,d=0\)), which implies that the only solution of the two conditions for the EPs will be for \(\vec{v}_{R}=\vec{v}_{I}=0\). This is known in literature as a diabolic point, which is effectively concurrent with a Hermitian degeneracy [15] and, indeed, generally requires fine-tuning.
Analytical study of the atomic limit - As the simplest physical platform to exemplify the spectral properties of \(\chi_{c}^{\nu\nu^{\prime}}\), we now study the exactly solvable atomic limit of the Hubbard model (AL)
\[H=-\mu(n_{\uparrow}+n_{\downarrow})+Un_{\uparrow}n_{\downarrow}, \tag{5}\]
where \(\mu\) is the chemical potential, \(n_{\sigma}=c_{\sigma}^{\dagger}c_{\sigma}\) the occupation of an electron with spin \(\sigma\), and \(U\) the on-site Coulomb repulsion given in arbitrary units of energy. This model fulfills PHS if \(\mu=U/2\) and is in general SU(2)-symmetric, which in turn means that \(\chi_{c}^{\nu\nu^{\prime}}\) decouples from the magnetic response [16; 18]. In the case of zero interaction \(U=0\), the local generalized charge susceptibility reads
\[\chi_{c}^{\nu\nu^{\prime}}\stackrel{{ U=0}}{{=}}-G(\mathrm{i}\nu)G( \mathrm{i}\nu^{\prime})\delta^{\nu\nu^{\prime}}=-\frac{\delta^{\nu\nu^{\prime }}}{(\mathrm{i}\nu+\mu)^{2}}, \tag{6}\]
where \(G(\mathrm{i}\nu)=\left\langle\mathcal{T}c_{\nu\sigma}^{\dagger}c_{\nu\sigma}\right\rangle\) is the one-particle Green's function. \(\chi_{c}^{\nu\nu^{\prime}}\) is diagonal, hence the eigenvalues can be immediately read from Eq. (6). These become doubly degenerate (\(\lambda_{\nu}=\lambda_{-\nu}=1/\nu^{2}\)) at PHS, i.e. \(\mu=0\), while they form complex conjugate pairs (\(\lambda_{\nu}=\lambda_{-\nu}^{*}\)) at finite \(\mu\). In the left column of Fig. 3 the real (a) and imaginary part (b) of \(\lambda_{i}\) are shown for different \(\mu\) at finite temperature \(T=1/100\).
At finite interaction \(U>0\), \(\chi_{c}^{\nu\nu^{\prime}}\) becomes a more complicated expression [58; 17; 59], given in the supplemental material [60]. The crucial point is the appearance of progressively larger off-diagonal components. The resulting eigenvalues \(\lambda_{i}\) are shown in the right column of Fig. 3. Significantly, at PHS, \(\lambda_{i}\) are still purely real but no longer degenerate. Importantly, this remains true in a finite region of \(\mu\) around \(U/2\). Far away from PHS, however, the effect of the interaction weakens, and all eigenvalues become complex conjugate pairs. To switch between these two regimes, two eigenvalues have to coalesce: this creates a pair of distinct EPs in \(\mu\)-space, which delimit and protect the real-eigenvalue "lens"-shaped structure (Fig. 3c). In the 2\(\times\)2-picture of Eq. (4), we can identify the interaction \(U\) as responsible for the presence of the off-diagonal finite elements \(c\) and \(d\) in the matrix, and the finite \(\mu\) for the diagonal element \(b\), which are the two ingredients necessary to satisfy the EP conditions. Hence, for the AL any finite value of \(U\) will result in exceptional points away from PHS and a finite-size real eigenvalue lens shape.
Implications on correlation-induced instabilities - We now turn to a more generic scenario, namely the single-orbital Hubbard Hamiltonian on a lattice:
\[\begin{split} H=&-t\sum_{\langle ij\rangle,\sigma}(c _{i\sigma}^{\dagger}c_{j\sigma}+c_{j\sigma}^{\dagger}c_{i\sigma})-\mu\sum_{i, \sigma}n_{i\sigma}\\ &+U\sum_{i}n_{i\uparrow}n_{i\downarrow}.\end{split} \tag{7}\]
with constant hopping \(t\) between neighboring sites \(i\) and \(j\). This model is again SU(2)-symmetric and for \(\mu=U/2\) it fulfills PHS. Except for one or infinite spatial dimensions, the model has not been solved analytically. In order to get a non-perturbative, albeit approximated many-body solution, we use dynamical mean-field theory (DMFT) (which becomes exact only in the limit of infinite dimensions) [61; 62] with a continuous-time quantum Monte Carlo solver from _w2dynamics_[63]. As shown in Ref. [30], the eigenvalues \(\lambda_{i}\) and the corresponding eigenvectors \(v_{i}^{\nu}\) of the local generalized susceptibility \(\chi_{c}^{\nu\nu^{\prime}}\) (\(\sum_{\nu}\chi_{c}^{\nu\nu^{\prime}}v_{i}^{\nu^{\prime}}=\lambda_{i}v_{i}^{\nu}\)) play an important role for the response functions of the whole lattice: they lead to an enhancement and in some cases to a divergence of the _uniform_ (i.e. for zero transfer momentum \(\mathbf{q}=0\)) susceptibility. In particular, for the Bethe lattice with infinite connectivity (where DMFT is exact) the static uniform charge response, obtained by summing the
Figure 3: Real part (top row) and imaginary part (bottom row) of the eigenvalues \(\lambda_{i}\) of \(\chi_{c}^{\nu\nu^{\prime}}\) for the atomic limit (AL) at temperature \(T=1/100\), \(U=0\) (a, b) and at \(U=1\) (c, d) as function of chemical potential away from particle-hole symmetry (PHS) at \(\mu=U/2\). The ten eigenvalues which are lowest in \(\mathrm{Re}\,\lambda_{i}\) at \(U=1\) are displayed.
generalized susceptibility over all Matsubara frequencies \(\chi_{\mathbf{q}=0}=\sum_{\nu\nu^{\prime}}\chi_{\mathbf{q}=0}^{\nu\nu^{\prime}}\), can be re-expressed in terms of \(\lambda_{i}\) and corresponding weights \(w_{i}=(\sum_{\nu}(v_{i}^{-1})^{\nu})(\sum_{\nu^{\prime}}v_{i}^{\nu^{\prime}})\). This leads to the following expression
\[\chi_{\mathbf{q}=0}=\frac{1}{\beta}\sum_{i}\left(\frac{1}{\lambda_{i}}+t^{2} \right)^{-1}w_{i}, \tag{8}\]
which diverges - thus inducing a phase instability in the charge sector - when one eigenvalue fulfills the condition \(\lambda_{i}=-1/t^{2}\). Importantly, this is possible only when \(\lambda_{i}\) is _real_[64].
Although Eq. (8) is only exact in the case of the Bethe lattice, numerical calculations have shown [30] that it holds also for a square lattice if \(t\) is replaced by a temperature- and \(\mu\)-dependent \(t_{\text{eff}}(\mu,T)\). Here, the central role of EPs becomes apparent: their presence guarantees that the imaginary part of \(\lambda_{i}\) remains zero in the whole extended region of the lens shape. In other words, the possibility of inducing a divergence in \(\chi_{\mathbf{q}=0}\) is not accidental and does not rely on a fine-tuning of \(U\), \(T\) and \(\mu\): the phase instability is in fact topologically protected. For the square lattice, this is illustrated in Fig. 4, where we plot the real (a) and imaginary part (b) of the eigenvalues \(\lambda_{i}\) of the local charge susceptibility \(\chi_{c}^{\nu\nu^{\prime}}\) close to the critical point of the phase separation (cf. sketch in Fig. 1). Here, the lowest eigenvalue \(\lambda_{I}\) satisfies (up to numerical accuracy) the condition \(\lambda_{I}=-1/t_{\text{eff}}^{2}\) in the region of the lens shape. Hence, the phase instability condition is also fulfilled for any further reduction of the temperature \(T\) (or for any moderate reduction of the interaction \(U\)). In particular, at lower \(T\), we enter a regime, where a first order phase separation occurs. This regime is, thus, characterized by two locally stable DMFT solutions (i.e., two coexisting values of \(\lambda_{I}\)), corresponding to a less correlated metallic and a "bad metal" phase (connected by an instable solution, where \(\lambda_{I}<-1/t_{\text{eff}}^{2}\)[65]). Here, the topological robust arguments related to the condition \(\lambda_{I}=-1/t_{\text{eff}}^{2}\) remain nonetheless applicable, albeit to the two corresponding metastable solutions [66].
Finally, let us notice that a negative eigenvalue is a necessary condition for the instability criterion to be fulfilled [30]. Remarkably, the role of the negative eigenvalues in the generalized charge susceptibility has been recently related to the local moment formation [67; 68; 69; 70] and, on a more formal level, to divergences of the irreducible vertex function and the multivaluedness of the Luttinger-Ward functional [34; 30; 31; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91]. Therefore, these negative eigenvalues can be regarded as a feature of strong electronic correlations, which cannot be commonly described by "perturbative" theories, e.g., the random phase approximation. However, the considerations behind Eq. (8) are not solely restricted to negative eigenvalues. They can be also applied to the opposite case, where a positive eigenvalue reaching a maximum (e.g., \(\lambda_{i}=1/t_{\text{eff}}^{2}\)) triggers a phase instability, such as the antiferromagnetic transitions of the Hubbard model [92]. Thus, in strongly correlated systems, the presence of EPs is found to generally promote phase instabilities in the \(ph\)-channel [93] to a stable phenomenon.
Conclusion - We have found the opening of an EP phase for the associated eigenvalues of the static local susceptibility in the \(U/\mu\) phase diagram of models for correlated electron systems. The remarkable consequence is that the interaction-induced charge instabilities such as the phase separation occurring close to the Mott metal-to-insulator transition in the Hubbard model do not need any fine-tuning but can occur in an entire finite range of parameters. This unexpected global robustness is a
Figure 4: Real part (a) and imaginary part (b) of the eigenvalues \(\lambda_{i}\) of \(\chi_{c}^{\nu\nu^{\prime}}\) for the square lattice Hubbard model (with half bandwidth \(D=4t=1\)) solved within DMFT as function of chemical potential away from PHS at \(\mu=U/2\). Interaction strength \(U=2.4\) and temperature \(T=1/53\) coincide with Ref. [30] to show the situation close to the thermodynamic instability. Calculated data are displayed as dots, the positive \((\mu-U/2)\)-axis is mapped from the negative one, exploiting the symmetry of the model considered. The ten eigenvalues which are lowest in \(\text{Re}\,\lambda_{i}\) at \(\mu-U/2=0\) are displayed.
consequence of the peculiar _centroHermitian_ form of the susceptibility matrix, which is not dictated by some _ad-hoc_ antiunitary symmetry but by the intrinsic nature of Fermi-Dirac statistics.
The susceptibility EPs represent a clear-cut and compelling manifestation of non-Hermitian topology, surpassing the conventional realizations based on spectral functions. This phenomenon is indeed ubiquitous even in the simplest correlated fermion models and does not require any assumption on the interaction nor any specific choice of non-Hermitian Hamiltonian terms. Our results call for future investigations beyond the local correlation effects on the charge sector considered here: e.g., of the spin or particle-particle channel and including non-local correlations in the description. Further, one could also search for higher order exceptional degeneracies in the susceptibility spectrum and explore the respective consequences on the phase instabilities. This may open new doors to experimentally detectable hallmarks of non-Hermitian topology.
###### Acknowledgements.
Acknowledgments - We thank P. Chalupa-Gantner, H. Esl, P. Oberleitner for insightful discussions, and S. Di Cataldo, P. Kappl, P. Worm for helpful comments. M.R. acknowledges support as a recipient of a DOC fellowship of the Austrian Academy of Sciences and financial support from the Austrian Science Fund (FWF), within the project 1 5487. A.T. acknowledges the Austrian Science Fund (FWF) for the project 1 5868 (part of the FOR 5249 [QUAST] of the German Science Foundation, DFG). L.C., J.C.B. and G.S. acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy-EXC2147 "ct.qmat" (project-id 390858490) as well as through Project-ID 258499086 - SFB 1170 "ToCoTronics" and Project-ID 247310070 - SFB 1143.
|
2302.03546 | Suspensions of viscoelastic capsules: effect of membrane viscosity on
transient dynamics | Membrane viscosity is known to play a central role in the transient dynamics
of isolated viscoelastic capsules by decreasing their deformation, inducing
shape oscillations and reducing the loading time, that is, the time required to
reach the steady-state deformation. However, for dense suspensions of capsules,
our understanding of the influence of the membrane viscosity is minimal. In
this work, we perform a systematic numerical investigation based on coupled
immersed boundary -- lattice Boltzmann (IB-LB) simulations of viscoelastic
spherical capsule suspensions in the non-inertial regime. We show the effect of
the membrane viscosity on the transient dynamics as a function of volume
fraction and capillary number. Our results indicate that the influence of
membrane viscosity on both deformation and loading time strongly depends on the
volume fraction in a non-trivial manner: dense suspensions with large surface
viscosity are more resistant to deformation but attain loading times that are
characteristic of capsules with no surface viscosity, thus opening the
possibility to obtain richer combinations of mechanical features. | Fabio Guglietta, Francesca Pelusi, Marcello Sega, Othmane Aouane, Jens Harting | 2023-02-07T15:54:37Z | http://arxiv.org/abs/2302.03546v2 | # Suspensions of viscoelastic capsules: effect of membrane viscosity on transient dynamics
###### Abstract
Membrane viscosity is known to play a central role in the transient dynamics of isolated viscoelastic capsules by decreasing their deformation, inducing shape oscillations and reducing the loading time, that is, the time required to reach the steady-state deformation. However, for dense suspensions of capsules, our understanding of the influence of the membrane viscosity is minimal. In this work, we perform a systematic numerical investigation based on coupled immersed boundary - lattice Boltzmann (IB-LB) simulations of viscoelastic spherical capsule suspensions in the non-inertial regime. We show the effect of the membrane viscosity on the transient dynamics as a function of volume fraction and capillary number. Our results indicate that the influence of membrane viscosity on both deformation and loading time strongly depends on the volume fraction in a non-trivial manner: dense suspensions with large surface viscosity are more resistant to deformation but attain loading times that are characteristic of capsules with no surface viscosity, thus opening the possibility to obtain richer combinations of mechanical features.
## 1 Introduction
A capsule is formed by a liquid drop core enclosed by a thin membrane, which can be engineered with tailored mechanical properties such as strain-softening, strain-hardening and viscoelastic properties (Barthes-Biesel, 2016). Capsules have emerged as a promising material for encapsulation, transportation, and sustained release of substances in various applications such as cosmetics, personal care products, self-healing paints, fire-retardant coatings, and pharmaceutical drugs (Luo and Bai, 2019; Bah, Bilal, and Wang, 2020; Kim et al., 2009; Sun et al., 2021). They are also used as a simplified model to study complex biological cells such as red blood cells numerically (Zhang, Johnson, and Popel, 2007; Kruger, 2012; Shen et al., 2018; Gekle, 2016; Bacher et al., 2018). The viscous component of the membrane is often disregarded when simulating the flow behaviour of red blood cells. However, microfluidic experiments have shown that, in such systems, the membrane surface viscosity is an important feature, and the interplay between the viscous and elastic contributions of the membrane is not trivial (Tomaiuolo et al., 2011; Tomaiuolo et al., 2016; Braunmuller et al., 2012; Prado et al., 2015; Tran-Son-Tay, Sutera, and Rao, 1984). The mechanical and rheological properties of suspensions of purely elastic capsules have been thoroughly studied analytically (Barthes-Biesel and Rallison, 1981; Barthes-Biesel, 1980; Barthes-Biesel, 1991; Barthes-Biesel, 1993; Barthes-Biesel, Diaz, and Dhenin, 2002), experimentally (Chang and Olbricht, 1993; Walter, Rehage, and Leonhard, 2001) and numerically (Pozrikidis, 1995; Ramanujan and Pozrikidis, 1998; Aouane, Scagliarini, and Harting, 2021; Pranay, Henriquez-Rivera, and Graham, 2012; Karyappa, Deshmukh, and Thaokar, 2014; Clausen and Aidun, 2010; Clausen, Reasor, and Aidun, 2011; Rorai et al., 2015; Dodson and Dimitrakopoulos, 2009; Kruger, Kaoui, and Harting, 2014; Kruger, Varnik, and Raabe, 2011; Esposito et al., 2022; Diaz, Pelekasis, and Barthes-Biesel, 2000; Tran et al., 2020; Cordasco and Bagchi, 2013; Wouters et al., 2020; Bielinski et al., 2021; Alizad Banaei et al., 2017; Kessler, Finken, and Seifert, 2008; Bagchi and Kalluri, 2011). However, only a few studies were dedicated to understanding the effect of the capsules' membrane viscosity (Barthes-Biesel and Sgaier, 1985; Yazdani and Bagchi, 2013; Li and Zhang, 2019; Guglietta et al., 2020; Guglietta et al.,
2021b; Guglietta et al. 2021a; Rezghi, Li, and Zhang 2022; Li and Zhang 2021; Diaz, Barthes-Biesel, and Pelekasis 2001; Zhang et al. 2020; Rezghi and Zhang 2022).
In their theoretical contribution, Barthes-Biesel and Sgaier 1985 performed perturbative calculations in the small-deformation limit showing that the membrane viscosity reduces the overall deformation. Concerning the loading time, that is, the time required to reach the steady-state deformation, Diaz, Barthes-Biesel, and Pelekasis 2001 were among the first investigating the effect of membrane viscosity on the transient dynamics using numerical simulations: using a boundary integral method they showed that, in an elongational flow, the presence of the membrane viscosity induces an increase in the loading time that is proportional to the membrane viscosity. Yazdani and Bagchi 2013 studied the effect of the membrane viscosity on the deformation and the tank-treading frequency of a single viscoelastic capsule numerically, also observing wrinkles appearing on the surface due to the membrane viscosity. Recently, Li and Zhang 2019; Li and Zhang 2020 coupled a finite difference method with the IB-LB method to simulate the effect of the viscosity at the interface. This implementation has been then employed to investigate mainly the dynamics of RBCs, highlighting the key role played by the membrane viscosity on the deformation and the associated characteristic times (Guglietta et al. 2020; Guglietta et al. 2021b; Li and Zhang 2021) as well as on the tumbling and tank-treading dynamics (Guglietta et al. 2021a; Rezghi and Zhang 2022).
The works mentioned above investigate the effect of membrane viscosity on single capsules. However, the understanding of its effect on the suspension of capsules is still missing. To the best of our knowledge, a parametric study on the effect of membrane viscosity on such systems does not exist yet. Our contribution aims at filling this gap by focusing on generic spherical viscoelastic capsules. We present the results of a numerical investigation of the effect of membrane viscosity on suspensions of (initially spherical) viscoelastic capsules by using our coupled IB-LB implementation.
To study the impact of membrane viscosity, quantified via the Boussinesq number Bq (see Eq. (23)), on the deformation \(D\) and loading time \(t_{\rm{a}}\), we conducted simulations using different values of Bq, capillary number Ca, and volume fraction \(\phi\). We aim to investigate how different values of the membrane viscosity and volume fraction affect the deformation and loading time of viscoelastic capsules.
The remainder of this paper is organised as follows: in Sec. 2 we present a few details on the IB-LB method (Sec. 2.1) and the viscoelastic membrane model (Sec. 2.2). Sec. 3 is dedicated to the numerical results: we first show and discuss the deformation and the loading time for a single capsule and then for suspensions with different volume fraction (Sec. 3.2). We finally summarise the main findings and provide some conclusions and future perspectives in Sec. 4.
## 2 Numerical model
We simulate the dynamics of the capsules and the surrounding fluid using the coupled IB-LB method. In a nutshell, the IB method uses a triangulated mesh of Lagrangian points as support to compute forces that are then used to impose the correct space and time-dependent boundary conditions on the fluid, which is simulated using the LB method. The IB-LB method provides a two-way coupling: the boundary surface deforms due to the fluid flow, and the fluid local momentum balance is changed due to the viscoelastic forces exerted by the boundary surface. Boundary surface forces comprise membrane elasticity, membrane viscosity, a volume-conserving regularization term, and a repulsive force to prevent capsules from penetrating each other. Details are reported below.
### The immersed boundary - lattice Boltzmann method
The LB method solves numerically a discretized version of the Boltzmann transport equation for the particle populations \(\mathrm{n}_{i}\), representing the probability density function of fluid molecules moving with a discrete velocity \(\mathbf{c}_{i}\) at position \(\mathbf{x}\) on the lattice and at time \(t\)(Benzi, Succi, and Vergassola 1992). The solution to the Navier-Stokes equations emerges from the transport equation via the calculation of the moments of the particle distribution and the appropriate Chapman-Enskog analysis (Chapman and Cowling 1990).
The evolution of the functions \(\mathrm{n}_{i}\) provided by the LB equation is
\[\mathrm{n}_{i}(\mathbf{x}+\mathbf{c}_{i}\Delta t,t+\Delta t)-\mathrm{n}_{i}(\mathbf{x},t)= \Omega_{i}+S_{i}\, \tag{1}\]
where \(\Delta t\) is the discrete time step, \(\Omega_{i}\) represents the collision operator and \(S_{i}\) is a source term proportional to the acting external forces \(\mathbf{F}\) (such as membrane forces, see Sec. 2.2) that is implemented following Guo, Zheng, and Shi 2002:
\[S_{i}(\mathbf{x},t)=\left(1-\frac{\Delta t}{2\tau}\right)\frac{w_{i}}{c_{s}^{2}} \left[\left(\frac{\mathbf{c}_{i}\cdot\mathbf{u}}{c_{s}^{2}}+1\right)\mathbf{c}_{i}-\mathbf{u} \right]\cdot\mathbf{F}\, \tag{2}\]
Here, \(\tau\) is the relaxation time, i.e., the time the functions \(\mathrm{n}_{i}\) take to reach the equilibrium distribution \(\mathrm{n}_{i}^{(\mathrm{au})}\) which is given by (Qian, d'Humieres, and Lallemand 1992)
\[\mathrm{n}_{i}^{(\mathrm{au})}(\mathbf{x},t)=w_{i}\rho\left(1+\frac{\mathbf{u}\cdot\mathbf{c }_{i}}{c_{s}^{2}}+\frac{(\mathbf{u}\cdot\mathbf{c}_{i})^{2}}{2c_{s}^{4}}-\frac{\mathbf{u} \cdot\mathbf{u}}{c_{s}^{2}}\right)\, \tag{3}\]
with \(c_{s}=\Delta x/\Delta t\sqrt{3}\) being the speed of sound, \(\Delta x\) the lattice spacing and \(w_{i}\) suitable weights. In the D3Q19 scheme used in this work, \(w_{0}=1/3\), \(w_{1-6}=1/18\), \(w_{7-18}=1/36\). We implement the Bhatnagar-Gross-Krook collision operator (Qian, d'Humieres, and Lallemand 1992)
\[\Omega_{i}=-\frac{\Delta t}{\tau}\left(\mathrm{n}_{i}(\mathbf{x},t)-\mathrm{n}_{i }^{(\mathrm{eu})}(\mathbf{x},t)\right). \tag{4}\]
The Chapman-Enskog analysis provides the bridge between the LB and the Navier-Stokes equations by linking the relaxation time \(\tau\) to the fluid transport coefficients, for example the dynamic viscosity
\[\mu=\rho c_{s}^{2}\left(\tau-\frac{\Delta t}{2}\right). \tag{5}\]
The functions \(\mathrm{n}_{i}\) are then used to compute the hydrodynamic density (\(\rho\)) and velocity (\(\mathbf{u}\)) fields of the fluid as
\[\rho(\mathbf{x},t)=\sum_{i}\mathrm{n}_{i}(\mathbf{x},t)\,\qquad\qquad\rho\mathbf{u}(\mathbf{x },t)=\sum_{i}\mathbf{c}_{i}\mathrm{n}_{i}(\mathbf{x},t)+\frac{\mathbf{F}\Delta t}{2}. \tag{6}\]
Figure 1: Sketch of the simulations performed in this work. Left side: 3D cubic domain with \(L^{3}\) lattice nodes (Eulerian lattice) containing a dense suspension of viscoelastic spherical capsules with initial radius \(R\). The domain is bound along the z-axis by two planar walls moving with constant speed \(U_{w}\) in opposite directions. In this setup, we impose a simple shear flow with constant shear rate \(\dot{\gamma}\). Top-right box: detail of a single capsule deformed under a simple shear flow. The capsules are represented using 3D triangular meshes with 2420 elements. The Taylor deformation \(D\) is given by \(D=(r_{1}-r_{3})/(r_{1}+r_{3})\), where \(r_{1}\) and \(r_{3}\) are the main semi-axes (green segments). The time evolution of the deformation \(D(t)\) is used to evaluate the loading time \(t_{\mathrm{L}}\) (see Eq. (26)). Bottom-right box: on each triangular element, the viscoelastic forces are computed and distributed to the vertices. These forces are coupled to the fluid via the immersed boundary (IB) method and the fluid dynamics is simulated using the lattice Boltzmann (LB) method (see Sec. 2).
The coupling between the fluid and the viscoelastic membrane is accounted through the IB method. The membrane is represented by a set of Lagrangian nodes linked to build a 3D triangular mesh (see Fig. 1). The idea is to interpolate the fluid (Eulerian) velocity (\(\mathbf{u}\)) to compute the nodal (Lagrangian) velocity (\(\dot{\mathbf{r}}\)) and to spread the nodal force (\(\mathbf{\varphi}\)) to find the force acting on the fluid (\(\mathbf{F}\)). Such interpolations are given by the following equations (Kruger et al., 2017; Peskin, 2002):
\[\mathbf{F}(\mathbf{x},t)=\sum_{i}\mathbf{\varphi}_{i}(t)\Delta(\mathbf{r}_{i}-\mathbf{x})\, \dot{\mathbf{r}}_{i}(t)=\sum_{\mathbf{x}}\mathbf{u}(\mathbf{x},t)\Delta(\mathbf{r}_{i}-\mathbf{x}) \Delta x^{3}\, \tag{7}\]
where \(\Delta\) is a discretised approximation of a Dirac delta function which can be factorised as the product of three interpolation stencils \(\Delta(\mathbf{x})=\phi(x)\phi(y)\phi(z)\). In this work, we use the two-point interpolation stencil
\[\phi_{2}(x)=\begin{cases}1-|x|&\text{for }0\leq|x|\leq 1\,\\ 0&\text{elsewhere}\.\end{cases} \tag{8}\]
### Membrane model
#### 2.2.1 Elastic model
We use the Skalak model to account for the membrane elasticity (Skalak et al., 1973). Here, the elastic free energy is given by
\[W_{\text{S}}=\sum_{j}A_{j}\left[\frac{k_{\text{S}}}{12}\left(I_{1,j}^{2}+2I_{ 1,j}-2I_{2,j}\right)+\frac{k_{\alpha}}{12}I_{2,j}^{2}\right]\, \tag{9}\]
where \(A_{j}\) is the area of the \(j-\)th triangular element of the mesh, \(k_{\text{S}}\) and \(k_{\alpha}\) are the elastic shear and dilatational moduli (we restrict ourselves to \(k_{\alpha}=k_{\text{S}}\).), respectively, \(I_{1,j}=\lambda_{1,j}^{2}+\lambda_{2,j}^{2}-2\) and \(I_{2,j}=\lambda_{1,j}^{2}\lambda_{2,j}^{2}-1\) are the strain invariants for the \(j\)-th triangular element, with \(\lambda_{1,j}\) and \(\lambda_{2,j}\) being the principal stretch ratios of the triangle (Skalak et al., 1973; Kruger, Kaoui, and Harting, 2014). The free energy \(W_{\text{S}}^{(j)}\) computed on the \(j-\)th element is used to compute the force on its three vertices: we can write the force acting on the \(i-\)th node with coordinates \(\mathbf{x}_{i}\) as
\[\mathbf{\varphi}_{i}=-\frac{\partial W_{\text{S}}^{(j)}}{\mathbf{x}_{i}}. \tag{10}\]
#### 2.2.2 Viscous model
The membrane viscosity can be implemented through the incorporation of the viscous stress tensor given by
\[\mathbf{\tau}_{\nu}=\mu_{\text{s}}\left(2\mathbf{e}-\text{tr}(\mathbf{e})\mathbf{P}\right)+\mu _{\text{d}}\text{tr}(\mathbf{e})\mathbf{P}=2\mu_{\text{m}}\mathbf{e}\, \tag{11}\]
where \(\mu_{\text{s}}\) and \(\mu_{\text{d}}\) are, respectively, the shear and dilatational membrane viscosity (in order to reduce the number of parameters, we consider \(\mu_{\text{s}}=\mu_{\text{d}}=\mu_{\text{m}}\), and we will only refer to the membrane viscosity \(\mu_{\text{m}}\)(Barthes-Biesel and Sgaier, 1985)), \(\mathbf{P}\) is the projector tensor to the 2D surface, and
\[\mathbf{e}=\frac{1}{2}\left\{\mathbf{P}\cdot\left[\left(\mathbf{\nabla}^{\text{S}}\mathbf{u} ^{\text{S}}\right)+\left(\mathbf{\nabla}^{\text{S}}\mathbf{u}^{\text{S}}\right)^{ \dagger}\right]\cdot\mathbf{P}\right\} \tag{12}\]
is the surface rate of strain. In Eq. (12), the superscript \(\mathbf{S}\) identifies the surface projection of the gradient operator (\(\mathbf{\nabla}^{\text{S}}\)) and local membrane velocity (\(\mathbf{u}^{\text{S}}\)) (Li and Zhang, 2019). By following Li and Zhang, 2019, we employ the standard linear solid model to compute \(\mathbf{\tau}_{\nu}\). We evaluate the stress tensor \(\mathbf{\tau}_{\nu}^{(j)}\) on each triangular element \(j\) (i.e., we rotate the triangular element on the \(xy\)-plane), and we then compute the force on its vertices \(i\) as
\[\mathbf{\varphi}_{i}(x,y)=A_{j}\mathbf{\mathcal{P}}^{(j)}\cdot\mathbf{\nabla}N_{i}\, \tag{13}\]
where \(N_{i}(x,y)=a_{i}x+b_{i}y+c_{i}\) are the linear shape functions, the tensor \(\mathbf{\mathcal{P}}^{(j)}=\left[\mathbf{\tau}_{\nu}\cdot(\mathbf{\mathcal{F}}^{-1})^{T} \right]^{(j)}\), with \((\mathbf{\mathcal{F}}^{-1})^{T}\) being the transpose of the inverse of the deformation gradient tensor \(\mathbf{\mathcal{F}}\)(Kruger, 2012; Li and Zhang, 2019; Guglietta et al., 2020).
#### 2.2.3 Volume conservation
In addition to the previous two contributions to the nodal force, we also impose the volume conservation by adding another term to the elastic free energy given in Eq. (9):
\[W_{\text{V}}=k_{V}\frac{(V-V_{0})^{2}}{2V_{0}}. \tag{14}\]
\(k_{V}\) is an artificial modulus tuning the strength of the volume conservation, \(V\) is the total volume of the capsule (the subscript \(0\) refers to the volume at rest, i.e., \(V_{0}=4\pi R^{3}/3\)) (Kruger, 2012; Aouane, Scagliarini, and Harting, 2021). The nodal force is then computed in the same way as for the elastic model (Eq. (10)).
#### 2.2.4 Capsule-capsule repulsion
Finally, to avoid capsules penetrating each others, we introduce a force
\[\boldsymbol{\varphi}_{ij}=\begin{cases}\bar{\epsilon}\left[\left(\frac{\Delta \pi}{d_{ij}}\right)^{2}-\left(\frac{\Delta\pi}{\delta_{0}}\right)^{2}\right] \hat{\boldsymbol{d}}_{ij}&\text{if }d_{ij}<\delta_{0}\,\\ 0&\text{if }d_{ij}\geq\delta_{0}\,\end{cases} \tag{15}\]
acting on nodes \(i\) and \(j\) belonging to two different capsules, where \(d_{ij}\) is the distance between nodes \(i\) and \(j\), \(\hat{\boldsymbol{d}}_{ij}=\frac{d_{ij}}{d_{ij}}\) is the unit vector connecting them, \(\delta_{0}\) is the interaction range and \(\bar{\epsilon}\approx 100/3k_{\text{S}}\). The choice of the parameter \(\bar{\epsilon}\) is as such that the macroscopic behaviour of the suspension is not affected by this additional nodal force contribution (Aouane, Scagliarii, and Harting 2021 provide further details).
### Membrane geometry
The information on the geometry of the capsules is retrieved from the inertia tensor, which is defined by (Kruger 2012; Ramanujan and Pozrikidis 1998)
\[\mathcal{I}_{\alpha\beta}=\frac{\rho_{p}}{5}\sum_{i}A_{i}(\boldsymbol{r}_{i}^ {2}\delta_{\alpha\beta}-r_{i\alpha}r_{i\beta})r_{i\gamma}n_{i\gamma}. \tag{16}\]
Here, \(\rho_{p}\) is the density of the particle (in our case, \(\rho_{p}=1\)), \(\boldsymbol{r}_{i}\) is a vector pointing form the centre of mass of the capsule to the centroid of face \(i\). \(A_{i}\) and \(\boldsymbol{n}_{i}\) are the area and the unit normal of the face \(i\), respectively. We now consider the inertia ellipsoid, i.e., the equivalent ellipsoid with the same inertia tensor \(\boldsymbol{\mathcal{I}}\). The three eigenvalues (\(\mathcal{I}_{1}\), \(\mathcal{I}_{2}\) and \(\mathcal{I}_{3}\)) can be used to compute the lengths of the three semi-axes of the ellipsoid with density \(\rho_{p}\) and volume \(V\)(Kruger 2012; Ramanujan and Pozrikidis 1998):
\[r_{1} =\sqrt{\frac{5(\mathcal{I}_{2}+\mathcal{I}_{3}-\mathcal{I}_{1}) }{2\rho_{p}V}}\, \tag{17}\] \[r_{2} =\sqrt{\frac{5(\mathcal{I}_{1}+\mathcal{I}_{3}-\mathcal{I}_{2}) }{2\rho_{p}V}}\,\] (18) \[r_{3} =\sqrt{\frac{5(\mathcal{I}_{1}+\mathcal{I}_{2}-\mathcal{I}_{3}) }{2\rho_{p}V}}\, \tag{19}\]
with \(r_{1}\geq r_{2}\geq r_{3}\). By comparing with Fig. 1, \(r_{1}\) and \(r_{3}\) are the longest and shortest radii in the shear plane (respectively), while \(r_{2}\) is the radius directed along the vorticity direction (y-axis).
Once we know the length of the two main semi-axes \(r_{1}\) and \(r_{3}\), we can evaluate the deformation index
\[D(t)=\frac{r_{1}(t)-r_{3}(t)}{r_{1}(t)+r_{3}(t)}\, \tag{20}\]
which is equal to zero when the spherical capsule is not deformed (i.e., \(r_{1}=r_{3}\)).
## 3 Results: deformation and loading time
This section introduces the numerical setup and the main dimensionless numbers (Sec. 3.1). We then show the numerical results concerning the deformation \(D\) and the loading time \(t_{\text{L}}\) of both a single spherical capsule (Sec. 3.2) and a suspension of particles (Sec. 3.3).
### Simulation setup and physical parameters
The numerical setup consists of a cubic Eulerian domain with \(L^{3}\) lattice nodes, where \(L=128\,\Delta x\). The domain is bound along the z-axis by two planar walls at which we impose a constant velocity \(U_{w}\) to generate a simple shear flow with constant shear rate \(\dot{\gamma}\) (see Fig. 1). The viscoelastic capsules have an initial radius \(R=8\,\Delta x\), and the corresponding mesh is made of 2420 triangular elements. Each capsule is initialised as a rigid sphere in order to start the simulation with zero stress and deformation of the surface. The capsules are positioned randomly at the beginning of the simulation, with the constraint that the distance between their surfaces cannot be less than one lattice spacing.
Several dimensionless numbers may play a role in describing the dynamics of the system. First of all, the Reynolds number
\[\text{Re}=\frac{\dot{\gamma}R^{2}\rho}{\mu} \tag{21}\]
gives the balance between inertial and viscous forces. We chose Re small enough (\(\text{Re}\sim 10^{-2}\)) to neglect inertial effects. The capillary number
\[\text{Ca}=\frac{\dot{\gamma}R\mu}{k_{s}} \tag{22}\]
measures instead the importance of the viscosity of the fluid with respect to the elasticity of the membrane: we chose the range of Ca in order to work as close as possible to the small-deformation regime, avoiding strongly non-linear effects (\(\text{Ca}\in[0.05,0.5]\)). The dimensionless number accounting for the membrane viscosity \(\mu_{\text{m}}\) is the Boussinesq number
\[\text{Bq}=\frac{\mu_{\text{m}}}{\mu R}\, \tag{23}\]
which describes the importance of the membrane viscosity with respect to the fluid viscosity (in this work, we consider the range \(\text{Bq}\in[0,50]\)). Note that \(\mu_{\text{m}}\) describes the viscosity of a 2D membrane: for this reason, it is measured in [m Pa s], while the fluid viscosity is given in [Pa s]. Finally, for dense suspensions, it is important to define the volume fraction
\[\phi=\frac{\sum_{i}V_{i}}{L^{3}}\, \tag{24}\]
which ranges in \(\phi\in[0.001,0.4]\) (i.e., from 1 to 400 capsules). In Eq. (24), \(\sum_{i}V_{i}\) coincides with the total volume occupied by the viscoelastic spheres. The computational time is normalised with the capillary time as
\[t^{*}=\frac{R\mu}{k_{s}}. \tag{25}\]
The main quantities mentioned above are also summarised in Tab. 1.
Intending to study and quantify the transient deformation of viscoelastic capsules, we use the solution of a damped oscillator to describe the deformation behaviour as a function of time:
\[D_{\text{fit}}(t)=\bar{D}\left[1-\exp\left(-\frac{t}{t_{\text{L}}}\right)\cos \left(\omega t\right)\right] \tag{26}\]
\(\bar{D}\) represents the steady-state value of the deformation, \(t_{\text{L}}\) is the loading time (i.e., the time the capsule takes to deform) and \(\omega\) coincides with the frequency of the deformation oscillations. To show how Eq. (26) fits data from numerical
Figure 2: Data corresponding to the single capsule case (\(\phi=0.001\)). The deformation \(D\) (Eq. (20)) as a function of the dimensionless time \(t/t^{*}\) (Eq. (25)) for a capillary number \(\text{Ca}=0.2\) and for different values of the Boussinesq number (Eq. (23)) (\(\text{Bq}=0\) (\(\boldsymbol{\text{\O}}\)), \(\text{Bq}=10\) (\(\boldsymbol{\text{\O}}\)), \(\text{Bq}=25\) (\(\boldsymbol{\text{\O}}\)), \(\text{Bq}=50\) (\(\boldsymbol{\text{\O}}\))). Green lines represent the numerical fit with Eq. (26).
simulations, in Fig. 2 we report the measured deformation \(D\) as a function of the dimensionless time \(t/t^{*}\) for the single capsule case. Different colours correspond to different values of Bq, while all data refer to the case with \(\text{Ca}=0.2\). Fig. 2 shows an excellent agreement between \(D_{\text{in}}(t)\) (solid lines) and the numerical simulations (circles), confirming that Eq. (26) is a suitable estimate for the dynamical observables \(t_{\text{L}}\) and \(\omega\).
### Single capsule
In this section, we report the numerical results for the deformation and loading time of a single capsule (i.e., \(\phi=0.001\)) under shear flow, which will serve as a reference for the next section, where suspensions of capsules are considered. Fig. 3 shows the steady-state configuration of a capsule under shear flow with \(\text{Ca}=0.5\), for two values of the Boussinesq number (\(\text{Bq}=0\), top panels; \(\text{Bq}=50\), bottom panels). The capsule is initialised in the middle of the channel; white arrows represent the velocity of the walls \(\mathbf{U}_{\text{w}}\). Left and right parts of Fig. 3 show side views in the \(yz\)- and \(xz\)-plane, respectively. Fig. 3 shows that some wrinkles appear on the surface when Bq increases up to \(\text{Ca}=0.5\). These results agree with what was observed by Yazdani and Bagchi 2013. In the case of a purely elastic capsule (i.e., when \(\text{Bq}=0\)), no wrinkles appear if the capillary number is large enough (\(\text{Ca}\gtrapprox 0.1\)), but some of them do appear when the capillary number is small (\(\text{Ca}=0.05\)). We emphasise that these wrinkles are not a numerical artefact, as they have also been observed in experiments (Walter, Rehage, and Leonhard 2001; Unverfehrt, Koleva, and Rehage 2015) and analytically studied (Finken and Seifert 2006).
\begin{table}
\begin{tabular}{c c} \(L\) & (length of the domain) & \(128\,\Delta x\) \\ \hline \(R\) & (radius of the spherical capsule) & \(8\,\Delta x\) \\ \hline Re & (Reynolds number) & \(\sim 0.01\) \\ \hline Ca & (Capillary number) & 0.05 - 0.5 \\ \hline Bq & (Boussinesq number) & 0 - 50 \\ \hline \(\phi\) & (Volume fraction) & 0.001 - 0.4 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation parameters.
Figure 3: Steady-state configurations for a single capsule (\(\phi=0.001\)) under shear flow with \(\text{Ca}=0.5\). Top panel: single capsule configuration with \(\text{Bq}=0\). Bottom panel: single capsule configuration with \(\text{Bq}=50\). Left part: side view in the \(yz\)-plane. Right part: side view in the \(xz\)-plane.
Figure 4: Data corresponding to the single capsule case (\(\phi=0.001\)) for different values of the Boussinesq number Bq (\(\text{Bq}=0\) (\(\circ\)), Bq \(=10\) (\(\circ\)), Bq \(=25\) (\(\blacklozenge\)), Bq \(=50\) (\(\blacklozenge\))). Panel a: steady-state deformation \(\bar{D}\) as a function of the capillary number Ca, where black crosses represent data from Aouane, Scagliarini, and Harting 2021. Panel b: loading time \(t_{\text{L}}\) as a function of the capillary number Ca. Panel c: frequency \(\omega\) as a function of the capillary number Ca.
Figure 5: Data corresponding to the single capsule case (\(\phi=0.001\)) for different values of the Boussinesq number Bq (\(\text{Bq}=0\) (\(\circ\)), Bq \(=10\) (\(\circ\)), Bq \(=25\) (\(\blacklozenge\)), Bq \(=50\) (\(\blacklozenge\))). The lengths of the three main radii of the capsule \(r_{1}\) (panel a), \(r_{2}\) (panel b) and \(r_{3}\) (panel c) normalised with the radius of the spherical capsule at rest \(R\) as functions of the capillary number Ca.
In Fig. 4(a), we show the steady-state value of the deformation \(\bar{D}\) as a function of the capillary number \(\mathrm{Ca}\) for different values of the Boussinesq number \(\mathrm{Bq}\) (the darker the colour, the higher the value of \(\mathrm{Bq}\)). We also report results from Aouane, Scagliarini, and Harting 2021 (black crosses), as a benchmark of our implementation, which corresponds to a case without membrane viscosity.
Fig.4(a) shows that the effect of increasing \(\mathrm{Bq}\) is to decrease the deformation, a trend that has been previously observed in other works (Yazdani and Bagchi 2013; Li and Zhang 2019; Guglietta et al. 2020; Guglietta et al. 2021b). This can be explained by an energetic argument: for a fixed value of the elastic modulus \(k_{\mathrm{S}}\) and a given intensity of the shear rate \(\dot{\gamma}\) (i.e., for the same value of the capillary number \(\mathrm{Ca}\)), the energy injected into the system is the same. However, the simple shear flow can be split into two contributions, accounting for the rotation and the elongation of the capsule, respectively:
\[\mathbf{\nabla}\mathbf{u}=\begin{pmatrix}0&\dot{\gamma}\\ 0&0\end{pmatrix}=\begin{pmatrix}0&\dot{\frac{\dot{\gamma}}{2}}\\ \frac{\dot{\gamma}}{2}&0\end{pmatrix}+\begin{pmatrix}0&\frac{\dot{\gamma}}{2} \\ -\frac{\dot{\gamma}}{2}&0\end{pmatrix} \tag{27}\]
This means that the energy injected by the applied shear flow not only contributes to the deformation of the capsules but also to their rotation. Therefore, increasing the value of the membrane viscosity leads to an increase in the dissipative effects on the surface due to viscous friction, which in turn reduces the energy available for deformation. If one deforms the capsule without using a flow but via external forces acting directly on the membrane (like the typical stretching experiment performed on RBCs by using optical tweezers (Suresh et al. 2005)), the dependence of the steady-state value of the deformation on the membrane viscosity clearly disappears (Guglietta et al. 2020; Guglietta et al. 2021b). Additionally, in an elongational flow, where the rotation of the membrane is suppressed, the steady-state value of the deformation does not depend on the value of \(\mathrm{Bq}\) (Guglietta et al. 2021b).
Fig. 4(b) shows that the loading time \(t_{\mathrm{L}}\) depends on both \(\mathrm{Ca}\) and \(\mathrm{Bq}\). In particular, on the one hand, it decreases when \(\mathrm{Ca}\) increases, and seems to converge to a constant value. On the other hand, the increase of \(t_{\mathrm{L}}\) when the membrane viscosity increases is expected because of the viscous dissipation at the interface. The loading time \(t_{\mathrm{L}}\) depends on \(\mathrm{Bq}\) even when we apply an elongational flow or perform a stretching experiment (Guglietta et al. 2021b). This behaviour is opposite to that of the steady-state deformation value, which does not show such a dependence when only the membrane deformation is present.
Fig. 4(c) depicts the frequency of the deformation oscillations \(\omega\). It does not show a strong dependence on the membrane viscosity but only on the capillary number \(\mathrm{Ca}\). This means that this characteristic time simply scales with the characteristic time of the flow (i.e., \(\dot{\gamma}^{-1}\)). The results for \(t_{\mathrm{L}}\) and \(\omega\) are in qualitative agreement with results for a single RBC in simple shear flow (Guglietta et al. 2021b).
Since the deformation as defined in Eq. (20) only contains information about the main axes in the shear plane, it does not provide a complete description of how the capsule is deforming in three-dimensional space. Therefore, we examined the three main radii \(r_{1},r_{2}\), and \(r_{3}\) separately (Fig. 5, panels (a), (b), and (c), respectively.). The radii are normalized by the initial radius \(R\), which is the capsule's radius at rest. In all three cases, the variation in the length of the radii \(r_{i}\) (\(i=1,2,3\)) relative to their values at rest decrease as the membrane viscosity increases (as expected based on the measurements of the deformation). However, the most significant variation is seen in \(r_{1}\) and \(r_{3}\) (i.e., in the shear plane), while \(r_{2}\) changes only slightly when \(\mathrm{Bq}=0\) and is almost unchanged for \(\mathrm{Bq}=50\).
### Suspensions
We consider the same numerical setup as before, but now we increase the number of capsules \(N\) up to 400, corresponding to an increase of the volume fraction \(\phi\) up to 0.4. We introduce the capsule-averaged quantities represented by
\[\langle A\rangle=\frac{1}{N}\sum_{i}A_{i}\;, \tag{28}\]
where the sum runs over the number of particles \(N\) and \(A_{i}\) is a general observable measured for the \(i-\)th capsule (such as the steady-state value of the deformation \(\bar{D}\), the loading time \(t_{\mathrm{L}}\), the radius \(r_{i}\), etc.). The data reported in this section are provided with error bars, which are calculated from the standard deviation normalized with \(\sqrt{N}\).
Fig. 6 shows some steady-state configurations for three different values of \(\phi\) (columns) and two values of \(\mathrm{Bq}\) (rows). Data refer to \(\mathrm{Ca}=0.1\). It is interesting to observe that wrinkles do not appear on the surface when \(\mathrm{Bq}=0\), whereas they are visible for \(\mathrm{Bq}=50\). However, in the latter case, the volume fraction seems to play a role: indeed, while the cases with \(\phi=0.01\) and \(\phi=0.1\) show just a few particles with small wrinkles (panels d and e, respectively), the most dense case (panel f) shows more pronounced wrinkles on more particles.
In Fig. 7, the capsule-averaged steady-state deformation is reported as a function of the capillary number \(\mathrm{Ca}\) for different values of the Boussinesq number \(\mathrm{Bq}\) and volume fraction \(\phi\). Data for a single capsule (i.e., \(\phi=0.001\)) are also reported
for comparison (panel a). As already observed for elastic capsules in the absence of membrane viscosity, our data shows that the capsule-averaged steady-state deformation \(\langle\bar{D}\rangle\) slightly increases with increasing volume fraction \(\phi\)(Aouane, Scagliarini, and Harting, 2021). It is interesting to perform a comparison between panels (a) and (d) that are the two extreme cases we simulated (i.e., \(\phi=0.001\) and \(\phi=0.4\), respectively): we observe that, when there is no membrane viscosity (i.e., \(\text{Bq}=0\)), \(\langle\bar{D}\rangle(\phi=0.4)\) is about \(5-10\)% higher than \(\langle\bar{D}\rangle(\phi=0.001)\), while when \(\text{Bq}=50\), there is an increase of about 250%. This suggests that the effect of membrane viscosity in reducing the deformation becomes weaker for higher values of \(\phi\). This general trend can be observed in Fig. 7 for all the reported values of \(\phi\): we note that \(\langle\bar{D}\rangle\) increases when \(\phi\) increases (from panel a-d) at \(\text{Bq}=50\), but this difference in \(\langle\bar{D}\rangle\) shrinks when Bq is smaller.
Concerning the capsule-averaged loading time \(\langle t_{\mathrm{L}}\rangle\), reported in Fig. 8, we observe again that the volume fraction \(\phi\) mitigates the effect of the presence of the membrane viscosity, especially at increasing values of the capillary number. In fact, the apparent increase of \(\langle t_{\mathrm{L}}\rangle\) with Bq for \(\phi\leq 0.01\) (panels a,b) is not present for \(\phi=0.4\). Furthermore, it is worth noting that \(\langle t_{\mathrm{L}}\rangle\) shows a slight dependence on the volume fraction \(\phi\) for \(\text{Bq}=0\), and the \(\phi=0.001\) and \(\phi=0.4\) data superpose almost perfectly. The dependence of \(\langle t_{\mathrm{L}}\rangle\) on \(\phi\) and Bq is even more evident for small values of the capillary number Ca (close to the linear response), i.e., when focusing on the intrinsic properties of the membrane: for the volume fraction \(\phi\geq 0.1\), \(\langle t_{\mathrm{L}}\rangle\) still shows a dependence on Bq, but if the capillary number Ca increases, the data tend to collapse on the same curve. This means that, for suspensions with a concentration \(\phi\geq 0.1\) and for finite values of capillary number Ca (i.e., \(\text{Ca}>0.3\)), the effect of membrane viscosity almost disappears. The origin of the reduction of the effect of membrane viscosity with volume fraction increase can be traced to the viscous tensor defined in Eq. (11): while the elastic contribution depends only on the geometry (i.e., the deformation) of the capsule, the viscous tensor depends only on the surface velocity gradient \(\mathbf{\nabla}^{S}\mathbf{u}^{S}\). Therefore, when the volume fraction \(\phi\) increases, the strain tensor \(\mathbf{e}\) (see Eq. (12)) decreases, and the effect of the membrane viscosity becomes smaller. Note that the data for \(\text{Bq}=25\) and \(\text{Bq}=50\) overlap within error bars.
Concerning the capsule-averaged frequency of the oscillations \(\omega\), we observe that there is a weak dependence on Bq for volume fractions up to \(\phi=0.1\) (see Fig. 9, panels a-c); however, for \(\phi>0.1\) (panel d), the oscillations of the deformation disappear, and therefore \(\omega\) goes to zero at large Ca. This may be due to the strong capsule-capsule interaction that does not allow the deformation of the capsules to oscillate freely.
Figure 6: Snapshots of capsule suspensions. Configurations are for a capillary number \(\text{Ca}=0.1\) and Boussinesq numbers \(\text{Bq}=0\) (top panels, (a-c)) and \(\text{Bq}=50\) (bottom panels, (d-f)).
Figure 8: Capsule-averaged loading time \(\langle t_{\rm L}\rangle\) (see Eq. (26)) as a function of the capillary number Ca for different values of the volume fraction \(\phi\) (panel a: \(\phi=0.001\); panel b: \(\phi=0.01\); panel c: \(\phi=0.1\); panel d: \(\phi=0.4\)) and Bq (Bq \(=0\), [\(\mathtt{o},\mathtt{\Delta},\mathtt{\underline{\alpha}},\mathtt{\underline{o}}\)]; Bq \(=10\), [\(\mathtt{o},\mathtt{\Delta},\mathtt{\underline{a}},\mathtt{\underline{o}}\)]; Bq \(=25\), [\(\mathtt{o},\mathtt{\Delta},\mathtt{\underline{a}},\mathtt{\underline{o}}\)]; Bq \(=50\), [\(\mathtt{o},\mathtt{\Delta},\mathtt{\underline{a}},\mathtt{\underline{o}}\)]).
Figure 10: The capsule-averaged lengths of the three main radii of the capsule \(\langle r_{1}\rangle\) (panels a-d), \(\langle r_{2}\rangle\) (panels e-h) and \(\langle r_{3}\rangle\) (panels i-l), normalised with the radius of the spherical capsule at rest \(R\) as functions of the capillary number Ca for different values of volume fraction \(\phi\) (panels a,e,i: \(\phi=0.001\); panels b,f,j: \(\phi=0.01\); panels c,g,k: \(\phi=0.1\); panels d,h,l: \(\phi=0.4\)) and the Boussinesq number (Bq \(=0\), [\(\mathtt{O},\Delta,\mathtt{n},\mathtt{o}\)]; Bq \(=10\), [\(\mathtt{O},\Delta,\mathtt{n},\mathtt{o}\)]; Bq \(=25\), [\(\mathtt{O},\Delta,\mathtt{n},\mathtt{o}\)]; Bq \(=50\), [\(\mathtt{O},\Delta,\mathtt{n},\mathtt{o}\)]).
As presented in the previous section for the single capsule, in Fig. 10 we show the capsule-averaged values of the normalised radii \(\langle r_{1}\rangle/R\), \(\langle r_{2}\rangle/R\) and \(\langle r_{3}\rangle/R\) (panels a-d, e-h and i-l, respectively). We observe that, at a given value of the volume fraction, the membrane viscosity clearly reduces the deformation of the three radii. The effect of the volume fraction becomes important for \(\phi>0.1\), that is, when capsules start to interact with each other. Even when the volume fraction increases, most of the deformation occurs in the shear plane (i.e., \(r_{2}\) is less affected than \(r_{1}\) and \(r_{3}\)). The effect of the volume fraction becomes prominent for \(\phi>0.1\): indeed, for all the values of Bq we have simulated, when \(\phi=0.4\) the radii \(r_{1}\) and \(r_{3}\) (panels d and l) are different if compared with the cases \(\phi\leq 0.1\). This might be due to the strong capsule-capsule interaction when \(\phi=0.4\), confirming again that the effect of membrane viscosity reduces for high values of the volume fraction. Concerning the deformation in the vorticity direction (i.e., \(r_{2}\)), it is \(\sim 10\%\) for Bq \(=0\) and \(\lesssim 5\%\) for Bq \(>0\). While \(r_{2}\) shows a clear hierarchy in the Boussinesq number for \(\phi<0.4\), a more complex behaviour appears when \(\phi=0.4\). However, we are facing very small deformations (less than \(5\%\)), which means that the length of \(r_{2}\) changes by about \(0.4\Delta x\). We conclude that the deformation in the vorticity direction is in general small, especially when we increase the volume fraction. To provide a more quantitative and precise investigation for the behaviour of \(r_{2}\), one should perform simulations with larger capsules (and therefore with a more resolved mesh); however, such a detailed study on the deformation in the vorticity direction goes beyond the scope of this work.
## 4 Conclusions
In this study, we performed a parametric investigation of the impact of membrane viscosity on the transient dynamics of suspensions of viscoelastic spherical capsules for different values of the volume fraction \(\phi\). To achieve this, we performed numerical simulations using the IB-LB method. Our results indicate that the effect of membrane viscosity, as measured by the dimensionless Boussinesq number Bq, strongly impacts the dynamics of a single capsule. However, this effect is diminished as the volume fraction \(\phi\) increases. The comparison between the single-capsule case (\(\phi=0.001\)) and the most-dense case simulated (\(\phi=0.4\)) revealed that while the capsule-averaged deformation \(\langle\bar{D}\rangle\) is greatly affected by the presence of membrane viscosity, the capsule-averaged loading time \(t_{\text{L}}\) does not show a strong dependence on Bq when \(\phi=0.4\). We can therefore conclude that, for the flow conditions simulated in this work (i.e., \(\text{Re}\sim 0.01\) and \(\text{Ca}\in[0.05,0.5]\), as outlined in Tab. 1), including membrane viscosity in the membrane model does not significantly affect the characteristic time when the volume fraction is high enough, but it still has a substantial impact on the deformation.
Looking forward, it will be valuable to investigate the dynamics of both dilute and dense suspensions flowing through small channels. The interaction between membrane viscosity and confinement is yet to be studied in this context. Additionally, it would be of interest to study the effect of membrane viscosity on different geometries and membrane models, with a focus on red blood cells as an example.
## 5 Acknowledgements
This work has received financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 431791331 - SFB 1452 "Catalysis at liquid interfaces" and research unit FOR2688 "Instabilities, Bifurcations and Migration in Pulsatile Flows" (Project-ID 417989464). The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS (Julich Supercomputing Centre 2019) at Julich Supercomputing Centre (JSC).
|
2303.03109 | Impossibility of spontaneous vector flavor symmetry breaking on the
lattice | I show that spontaneous breaking of vector flavor symmetry on the lattice is
impossible in gauge theories with a positive functional-integral measure, for
discretized Dirac operators linear in the quark masses, if the corresponding
propagator and its commutator with the flavor symmetry generators can be
bounded in norm independently of the gauge configuration and uniformly in the
volume. Under these assumptions, any order parameter vanishes in the symmetric
limit of fermions of equal masses. I show that these assumptions are satisfied
by staggered, minimally doubled and Ginsparg-Wilson fermions for positive
fermion mass, for any value of the lattice spacing, and so in the continuum
limit if this exists. They are instead not satisfied by Wilson fermions, for
which spontaneous vector flavor symmetry breaking is known to take place in the
Aoki phase. The existence of regularizations unaffected by residual fermion
doubling for which the symmetry cannot break spontaneously on the lattice
establishes rigorously (at the physicist's level) the impossibility of its
spontaneous breaking in the continuum for any number of flavors. | Matteo Giordano | 2023-03-06T13:20:49Z | http://arxiv.org/abs/2303.03109v2 | # On the impossibility of spontaneous vector flavor symmetry breaking on the lattice
###### Abstract
I show that spontaneous breaking of vector flavor symmetry on the lattice is impossible in gauge theories with a positive functional-integral measure, for discretized Dirac operators linear in the quark masses, if the corresponding propagator and its commutator with the flavor symmetry generators can be bounded in norm independently of the gauge configuration and uniformly in the volume. Under these assumptions, any order parameter vanishes in the symmetric limit of fermions of equal masses. I show that these assumptions are satisfied by staggered, minimally doubled and Ginsparg-Wilson fermions for positive fermion mass, for any value of the lattice spacing, and so in the continuum limit if this exists. They are instead not satisfied by Wilson fermions, for which spontaneous vector flavor symmetry breaking is known to take place in the Aoki phase. The existence of regularizations unaffected by residual fermion doubling for which the symmetry cannot break spontaneously on the lattice establishes rigorously (at the physicist's level) the impossibility of its spontaneous breaking in the continuum for any number of flavors.
## I Introduction
The importance of symmetries and of the way in which they are realized in quantum field theories can hardly be overemphasized. In the context of strong interactions and its microscopic theory, i.e., QCD, an important role is played by the approximate vector flavor symmetry involving the lightest two or three types ("flavors") of quarks, which holds exactly in the limit of quarks of equal masses; and by its enhancement to chiral flavor symmetry in the limit of massless quarks. Vector flavor symmetry and the pattern of its explicit breaking largely determine the structure of the hadronic spectrum; chiral flavor symmetry and its spontaneous breaking down to vector flavor symmetry explain the lightness of pions and their dynamics, as well as the absence of parity partners of hadrons. The full symmetry group at the classical level includes also the U(1)\({}_{B}\) symmetry responsible for baryon number conservation, and the axial U(1)\({}_{A}\) symmetry, that does not survive the quantization process and becomes anomalous in the quantum theory.
An interesting question is whether baryon number and vector flavor symmetry can break down spontaneously in general vector gauge theories, where the fermions' left-handed and right-handed chiralities are coupled in the same way to the gauge fields. This could in principle happen for exactly degenerate massive fermions, leading to the appearance of massless Goldstone bosons; and in the chiral limit of massless fermions it could lead to a different symmetry breaking pattern than the usual one, and so to a different set of Goldstone bosons. This question has been essentially answered in the negative by Vafa and Witten in a famous paper [1]. There they actually prove a stronger result, namely the impossibility of finding massless particles in the spectrum of a gauge theory with positive functional-integral measure that couple to operators with nonvanishing baryon number or transforming nontrivially under vector flavor transformations. This is done by deriving a bound on the fermion propagator that guarantees its exponential decay with the distance as long as the fermion mass is nonzero. Since massless bosons coupling to the operators mentioned above would appear in the spectrum as a consequence of Goldstone's theorem [2; 3; 4] if those symmetries were spontaneously broken, the impossibility of spontaneous breaking follows.
The elegant and powerful argument of Vafa and Witten is developed using the "mathematical fiction" of the functional integral formalism for interacting quantum field theories in continuum (Euclidean) spacetime. The crucial issue of the regularization of the functional integral, generally required to make it a mathematically well defined object, is discussed only briefly. In particular, the possibility of formulating the argument using a lattice regularization is mentioned, but not discussed in detail. The general validity of this statement is called into question by the existence of examples of spontaneous breaking of vector flavor symmetry on the lattice, namely in the Aoki phase [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25] of lattice gauge theories with Wilson fermions [26]. While this is not in contradiction with the argument of Vafa and Witten in the continuum [22], it also makes clear that this argument does not trivially extend to the lattice in a general setting. It would then be desirable to identify conditions that guarantee the impossibility of baryon number and vector flavor symmetry breaking on the lattice, at least for small lattice spacing, which could help in putting Vafa and Witten's "theorem" on more solid ground.
The strategy of widest generality is to directly prove a lattice version of Vafa and Witten's bound on the propagator, which would allow one to recover all the conclusions of Ref. [1] in a rigorous way (under the tacit assumption of the existence of the continuum limit). This was done for staggered fermions [27; 28; 29] in Ref. [30], so excluding completely the possibility of breaking baryon number symmetry and the vector flavor symmetry of several staggered fields on the lattice using this discretiza |
2308.16110 | Improving Few-shot Image Generation by Structural Discrimination and
Textural Modulation | Few-shot image generation, which aims to produce plausible and diverse images
for one category given a few images from this category, has drawn extensive
attention. Existing approaches either globally interpolate different images or
fuse local representations with pre-defined coefficients. However, such an
intuitive combination of images/features only exploits the most relevant
information for generation, leading to poor diversity and coarse-grained
semantic fusion. To remedy this, this paper proposes a novel textural
modulation (TexMod) mechanism to inject external semantic signals into internal
local representations. Parameterized by the feedback from the discriminator,
our TexMod enables more fined-grained semantic injection while maintaining the
synthesis fidelity. Moreover, a global structural discriminator (StructD) is
developed to explicitly guide the model to generate images with reasonable
layout and outline. Furthermore, the frequency awareness of the model is
reinforced by encouraging the model to distinguish frequency signals. Together
with these techniques, we build a novel and effective model for few-shot image
generation. The effectiveness of our model is identified by extensive
experiments on three popular datasets and various settings. Besides achieving
state-of-the-art synthesis performance on these datasets, our proposed
techniques could be seamlessly integrated into existing models for a further
performance boost. | Mengping Yang, Zhe Wang, Wenyi Feng, Qian Zhang, Ting Xiao | 2023-08-30T16:10:21Z | http://arxiv.org/abs/2308.16110v1 | # Improving Few-shot Image Generation by Structural Discrimination and Textural Modulation
###### Abstract
Few-shot image generation, which aims to produce plausible and diverse images for one category given a few images from this category, has drawn extensive attention. Existing approaches either globally interpolate different images or fuse local representations with predefined coefficients. However, such an intuitive combination of images/features only exploits the most relevant information for generation, leading to poor diversity and coarse-grained semantic fusion. To remedy this, this paper proposes a novel textural modulation (TexMod) mechanism to inject external semantic signals into
internal local representations. Parameterized by the feedback from the discriminator, our TexMod enables more fine-grained semantic injection while maintaining the synthesis fidelity. Moreover, a global structural discriminator (StructD) is developed to explicitly guide the model to generate images with reasonable layout and outline. Furthermore, the frequency awareness of the model is reinforced by encouraging the model to distinguish frequency signals. Together with these techniques, we build a novel and effective model for few-shot image generation. The effectiveness of our model is identified by extensive experiments on three popular datasets and various settings. Besides achieving state-of-the-art synthesis performance on these datasets, our proposed techniques could be seamlessly integrated into existing models for a further performance boost. Our code and models are available at here.
## CCS Concepts
* **Computing methodologies \(\rightarrow\) Computer vision representations; Image representations; Neural networks.**
## Keywords
* Few-shot Learning; Image Generation; Textural Modulation; Structural Discrimination
## 1. Introduction
Thrilling features of Generative Adversarial Networks (Goodfellow et al., 2016) such as impressive sample quality and flexible content controllability have significantly advanced visual applications including image (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2017) and video generation (Sutskever et al., 2017; Goodfellow et al., 2017; Goodfellow et al., 2018), image editing (Sutskever et al., 2017; Goodfellow et al., 2017), image-to-image translation (Sutskever et al., 2017; Goodfellow et al., 2017; Goodfellow et al., 2017), _etc_. However, their breakthroughs mainly attribute to ample training data and sufficient computation resources. For instance, current state-of-the-art StyleGAN models (Sutskever et al., 2017; Goodfellow et al., 2017; Goodfellow et al., 2017) are trained on Flickr-Faces (FFHQ) which involves 70K images for desirable performance. Such requirement on massive data poses limitations of GANs on adapting to new categories (Goodfellow et al., 2016; Goodfellow et al., 2017) and practical domains with only limited training data (Goodfellow et al., 2016; Goodfellow et al., 2017; Goodfellow et al., 2017). Consequently, it is critical to consider how to produce novel images given only a few images per category. Such a task, dubbed few-shot image generation (Goodfellow et al., 2016; Goodfellow et al., 2017; Goodfellow et al., 2017), has attracted extensive attentions recently.
The goal of few-shot image generation is to quickly adapt knowledge learned from seen classes to unseen classes (see Fig. 1). Specifically, the model is firstly trained in an episodic manner (Sutskever et al., 2017) on seen categories with sufficient training samples and per-sample class labels. Then, the learned model is required to transfer the generation ability to a new unseen category, _i.e._, producing diverse images for a new class given a few images (_e.g._, 3) from the same class, and there are no overlaps between the seen categories and the unseen categories. Thus the model is expected to learn how to generate novel images instead of merely capturing the distribution of seen classes.
Existing few-shot generation models seek to ameliorate the synthesis quality via 1) transforming intra-class representations to new classes (Goodfellow et al., 2016), 2) optimizing new criterion to achieve better knowledge transferabilities (Goodfellow et al., 2016; Goodfellow et al., 2017; Goodfellow et al., 2017), and 3) fusing global images or local features (Sutskever et al., 2017; Goodfellow et al., 2017; Goodfellow et al., 2017). For instance, LoFGAN (Sutskever et al., 2017) produces plausible and diverse images by fusing the local features of different images based on a pre-defined similarity map. The current state-of-the-art WaveGAN (Wang et al., 2017) encourages the model to synthesize high-frequency signals with frequency residual connections, enabling better awareness of spectrum information. Although these models have made remarkable progress, they still struggle to produce images with desirable diversity and fidelity simultaneously due to two critical limitations. On one hand, they only fuse semantically relevant features _i.e._, features with relatively high similarity, lacking more fine-grained semantic combinations and thus losing diversity. On the other hand, the arrangement of generated content might be arbitrary after fusing the local features since no explicit structural guidance is provided, degrading the synthesis fidelity.
we present a novel few-shot generation model, dubbed SDTMGAN, that addresses the aforementioned limitations through the incorporation of two key components: structural discrimination (StructD) and textural modulation (TexMod). Specifically, TexMod is performed via modulating the textural style of generated images at the semantic level. By injecting external semantic layouts from different samples into the internal textural style of generated images, TexMod could better combine local semantic representations and thus capture more semantic variations. Considering that fusing semantic features might cause arbitrary structures, we further develop StructD to ensure global coherence. Concretely, we first perform Laplace Operator (Sutskever et al., 2017) on the input images to obtain laplacian representations which encode rich global structural information such as contour edges and object boundaries. A lightweight discriminator, _i.e._, StructD, which distinguishes the laplacian representations of real and generated images, is then proposed to explicitly provide structural guidelines to the generator, facilitating the fidelity of global appearance. Meanwhile, inspired by the findings that neural networks prefer to fit low-frequency signals while tending to ignore high-frequency information (Sutskever et al., 2017; Goodfellow et al., 2017; Goodfellow et al., 2017), we further adopt a frequency discriminator to encourage the discriminator to capture high-frequency signals.
Together with the above techniques, our model can 1) capture the global structural and high-frequency signals, facilitating the fidelity of generated images; and 2) produce diverse images via modulating semantic features in a more fine-grained manner. We evaluate the effectiveness of our method on several popular few-shot datasets and the results demonstrate that our method achieves appealing synthesis performance in terms of image quality and richness (see Fig. 1 and Sec. 4). Additionally, our proposed techniques are complementary to existing models, _i.e._, integrating our methods into existing models gains a further performance boost.
**Contributions.** Our contributions are summarized as follows: 1) We propose a novel few-shot image generation model (_i.e._, SDTMGAN) which incorporates structural discrimination and textural modulation to respectively improve the global coherence of generated images and accomplish more fine-grained semantic fusion. 2) The proposed techniques could be readily integrated into existing few-shot generation models to further boost the performance with
negligible computation cost, further suggesting the efficacy and compatibility of our methods. 3) Under popular benchmarks and various experimental settings, our method consistently outperforms prior arts by a substantial margin. Besides, the images produced by our model are utilized for augmenting the training set for downstream classification problems, leading to improved classification accuracy. Overall, our method brings advantageous potential for improving few-shot image generation and downstream applications.
## 2. Related Works
**Generative adversarial networks** (GANs) (Goodfellow et al., 2016; Goodfellow et al., 2016) are typically composed of a discriminator and a generator, where the former learns to distinguish real images from generated ones and the latter tries to deceive the discriminator via reproducing the data distribution. Benefiting from the compelling ability to capture data distributions, GANs have been ubiquitously applied in various visual domains, such as image-to-image translation (Goodfellow et al., 2016; Goodfellow et al., 2016), image/video generation (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016), image manipulation and inpainting (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016), _etc_. However, their performance drops drastically when trained on few-shot datasets due to the discriminator overfitting and memorization issues (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016). Some recent works mitigate the overfitting problem by applying extensive data augmentation (Goodfellow et al., 2016; Goodfellow et al., 2016) to enlarge the training sets or developing additional branches and constraints (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016) to dig more available information. Unlike their concentration on improving unconditional image generation, our goal is to produce novel images for one specific class when provided with a few images from the same class. Trained in an episodic manner as few-shot learning (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016), our model is expected to capture the knowledge of generating new images.
**Few-shot image generation.** Many attempts have been endowed to ameliorate synthesis quality for few-shot scenarios. Existing alternatives could be roughly divided into three categories based on their different techniques (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016), namely optimization-based, and transformation-based approaches. Optimization-based methods (Goodfellow et al., 2016; Goodfellow et al., 2016) combine GANs with meta-learning (Goodfellow et al., 2016) to generate new images via finetuning the parameters of the inner generating loop and outer meta training loop, but their sample quality is often limited. Differently, transformation-based models like DAGAN (Ashman et al., 2016) transform intra-class and randomly sampled latent representations into new images, enabling relatively high diversity yet bringing unsatisfactory aliasing artifacts. By contrast, the fusion-based (Goodfellow et al., 2016; Goodfellow et al., 2016) methods achieve better synthesis quality. For instance, F2GAN (Goodfellow et al., 2016) proposes a fusing-and-filling scheme to interpolate input conditional images and fill fine details into the fused image. Considering that fusing the image globally leads to a semantic misalignment, LoFGAN (Goodfellow et al., 2016) further improves the performance by combining local representations following a pre-computed semantic similarity. Moreover, WaveGAN (Wang et al., 2017) explicitly encourages the model to pour more attention on high-frequency signals, which previous models usually ignore.
However, there are still two main limitations that remain under-explored among prior studies. On one hand, fusing local features based on a similarity map only combines the most relevant semantics, leading to unfavorable synthesis diversity. Besides, no learnable parameters are involved in the fusion process, lacking explicit optimization. On the other hand, global coherence might be affected by local fusion and produce arbitrary images without global structural guidelines. In this paper, we fuse local semantics via learnable textural modulation and explicitly provide structural information to the model.
**Frequency bias in GANs.** Deep neural networks are identified to have a preference for capturing frequency signals from low to high (Goodfellow et al., 2016; Goodfellow et al., 2016), which also holds for GANs. Accordingly, many works have been developed to improve GANs' frequency awareness. For instance, Jiang _et al._ propose focal frequency loss to iteratively attach higher importance to hard frequency signals (Goodfellow et al., 2016). Gao _et al._ alleviate GAN's frequency bias by residual frequency connections (Goodfellow et al., 2016) and Yang _et al._ employ high-frequency discriminator (Wang et al., 2017) to achieve this. Similarly, we assign a frequency discriminator to help the model better encode frequency signals.
**Modulation techniques** are effective ways to combine external information with internal representations and have been successfully applied to many practical domains such as style transfer (Goodfellow et al., 2016), semantic image synthesis and editing (Goodfellow et al., 2016; Goodfellow et al., 2016; Goodfellow et al., 2016). Specifically, the input features are first normalized to zero mean and unit deviation. Then, the normalized representations are modulated by injecting external signals from other features. In this way, the modulated features contain original content while capturing external semantic layouts. Following this philosophy, we apply this to few-shot image generation and develop a two-branch textural modulation to fuse local features in a more fine-grained manner. By incorporating internal textural content with external semantic representations through learnable modulating parameters, our model promotes a more diverse generation. Details will be given in the next section.
## 3. Methodology
In this section, we present the technical detail the proposed methods, namely structural discriminator (StructD) and textural modulation (TexMod). The formulation of few-shot image generation and our overall framework are presented in Sec. 3.1, followed by the description of our StructD and TexMod respectively in Sec. 3.3 and Sec. 3.2. Finally, Sec. 3.4 presents the optimization objectives.
### Preliminary and Overview
**Preliminary.** Fig. 1 shows the setting of few-shot image generation. Concretely, the model is first trained on seen classes \(C_{S}\) in an episodic manner. Episodic training is achieved by feeding \(N\)-way-\(K\)-shot images as input for each iteration, where \(N\) denotes the number of classes and \(K\) is the number of images for each class. Such a paradigm makes the model capture transferable ability for image generation. Then, the model is expected to produce novel images given several images from unseen classes \(C_{u}\) (\(C_{u}\cap C_{S}=\emptyset\)).
**Overall framework.** Fig. 2 illustrates the overall framework of our proposed model. The generator consists of one encoder (\(E\)) and one decoder (\(M\)), the former projects input images to latent features and the latter decodes the modulated representations to produce new images. Textural modulation (TexMod) enables more detailed fusion by injecting the outer semantic layout into inner textures with learnable parameters. Besides, by leveraging the Laplacian representations as a global guidance, the model can eliminate productions with discordant structures.
### Textural Modulation
Textural modulation (TexMod) injects external semantic information into internal features. Fig. 2 shows the pipeline of TexMod given three input images from each category. Firstly, \(K\) features \(\mathbf{F}=\mathcal{F}_{\mathbf{k}}|_{k=1}^{K},\mathcal{F}_{\mathbf{k}}\in \mathcal{R}^{n\times h\times c}(K=3\text{ here})\), where \(w,h,c\) denote the feature dimensions, are obtained from the encoder \(E\). Then, one feature \(\mathcal{T}_{mod}\) for modulation is randomly chosen and the other referenced features \(\mathcal{F}_{ref}\) are used for injection. Finally, the modulated feature is obtained following a two-stage injection mechanism.
**First-stage injection.** In order to obtain reasonable modulation weights for semantic injection, we perform 2d convolutions on the chosen feature \(\mathcal{T}_{mod}\) and the sum of reference features \(\mathcal{F}_{ref}\) respectively, obtaining two sets of modulation parameters (\(\alpha_{1}\), \(\beta_{1}\)) and (\(\alpha_{2}\), \(\beta_{2}\)). The 2d convolution here encodes semantic information of local features and generates learnable parameters, enabling more controllable and fine-grained fusion. The first stage semantic of injection is then accomplished by
\[\alpha_{\text{o}}=(1+\beta_{1})\bigodot\alpha_{2}+\alpha_{1}, \tag{1}\]
where \(\bigodot\) denotes the element-wise multiplication and \(\alpha_{o}\) is the obtained parameter for the second-stage modulation. All parameters share the same dimension with the chosen feature \(\mathcal{F}_{mod}\).
**Second-stage injection.** Stage one injects the semantic representations of referenced features into that of the chosen feature \(\mathcal{F}_{mod}\). However, the overall texture might be overridden by semantic fusion. Accordingly, we first obtain the normalized feature (\(\mathcal{F}_{mod}\)) by normalizing the chosen feature. Then, the modulated parameter \(\alpha_{o}\) and \(\beta_{2}\) are leveraged for a second-stage injection on \(\mathcal{F}_{mod}\):
\[\hat{\mathcal{F}}_{mod}=(1+\beta_{2})\bigodot\mathcal{F}_{mod}+\alpha_{o}, \tag{2}\]
where \(\hat{\mathcal{F}}_{mod}\) is the output feature which maintains the texture of \(\mathcal{F}_{mod}\) meanwhile encodes rich semantic details of referenced features \(\mathcal{F}_{ref}\). Additionally, the feature for modulation is randomly chosen at each training episodic, involving more semantic variance for injection. Finally, the modulated feature \(\hat{\mathcal{F}}_{mod}\) is forwarded into the decoder \(M\) to synthesize new images. Through the proposed two-stage modulation, more fine-grained semantic injection is achieved since all semantic information of referenced features is integrated into semantic fusion, improving the diversity. Moreover, the modulation weights are optimized following the feedback of the discriminator, ensuring fidelity is not compromised.
### Structural and Frequency Discriminator
Typically, existing approaches perform adversarial loss and classification loss to penalize the discriminator. However, the overall structure and outline of generated images might be arbitrary without explicit global guidance. We ameliorate this by enforcing the discriminator to capture global structural information. Specifically, the Laplacian operation is first leveraged to extract the global structural signals (_e.g.,_ contour edges and object boundaries).Laplacian operation is accomplished via a convolutional layer with the Laplacian kernel:
\[\text{Kernel}_{Laplacian}=\left[\begin{array}{ccc}0&-1&0\\ -1&4&-1\\ 0&-1&0\end{array}\right]. \tag{3}\]
The Laplacian kernel is utilized to project input images to Laplacian representations, then a structural discriminator (StructD) is employed to encode global signals. The losses of StructD are defined as:
\[\begin{split}\mathcal{L}_{str}^{D}&=\max(0,1-D_{str}(\mathbf{x }))+\max(0,1+D_{str}(\hat{\mathbf{x}})),\\ \mathcal{L}_{str}^{G}&=-D_{str}(\hat{\mathbf{x}}),\end{split} \tag{4}\]
where \(D_{str}\) represents StructD. \(\mathbf{x}\) and \(\hat{\mathbf{x}}\) is input real and generated images, respectively. Akin to conventional discriminators, StretD is comprised of convolutional and activation layers. Notably, only encoding structural signals, our StretD is lightweight and introduces negligible(see Tab. 3) additional computation burdens.
**Frequency discriminator.** In order to mitigate the model's frequency bias, we employ wavelet transformation on the extracted features and obtain high-frequency signals of input images. Then we encourage the model to distinguish high-frequency signals of
Figure 2. The overall pipeline of our model. Textural modulation (TexMod) enables more fine-grained semantic fusion via injecting the outer semantic information into the inner representations. Structural discriminator (StructD) explicitly encourages the model to capture the global structural signals, ensuring more reliable and reasonable synthesis.
real images from that of generated samples, forming a frequency discriminator which improves the frequency awareness of our model. The frequency losses are given by
\[\mathcal{L}^{D}_{fre} =\max(0,1-D_{free}(\mathcal{H}(F(\mathbf{x}))))+\max(0,1+D_{fre}( \mathcal{H}(F(\hat{\mathbf{x}})))),\] \[\mathcal{L}^{G}_{fre} =-D_{free}(\mathcal{H}(F(\hat{\mathbf{x}}))), \tag{5}\]
where \(F\) is the feature extractor, and \(\mathcal{H}\) represents the Haar wavelet transformation (Feng et al., 2017) that decomposes features into different frequency components. The obtained high-frequency signals are then forwarded into the frequency discriminator \(D_{free}\), which contains an adaptive-average-pool and a Conv2D layer for calculation.
### Optimization
Two subnetworks are involved for optimization in our model, namely generator (\(G\)) and discriminator (\(D\)), and \(G\) and \(D\) are optimized alternatively in an adversarial manner. Formally, let \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},...\}\) demotes the input real images and \(\mathbf{c}(\mathbf{x}_{i})\) is the corresponding labels for \(\mathbf{x}_{i}\) (only for \(\mathcal{C}_{s}\)). Image produced by G is denoted as \(\hat{\mathbf{x}}=G(\mathbf{X})\), which \(D\) seeks to distinguish from real images by computing \(D(\mathbf{X})\).
**Adversarial loss.** The hinge version of adversarial loss is employed for training. \(D\) tries to assign higher scores for real images while lower ones for generated samples, and \(G\) seeks to produce plausible images to fool \(D\):
\[\mathcal{L}^{D}_{adv} =\max(0,1-D(X))+\max(0,1+D(\hat{x})),\] \[\mathcal{L}^{G}_{adv} =-D(\hat{x}). \tag{6}\]
**Classification loss** ensures the model to capture the class distribution of training sets (_i.e._, seen classes \(C_{s}\)). Such that, the model could produce images for one category given the labeled class. Formally, classification loss is calculated by
\[\mathcal{L}^{D}_{cls} =-\log P(c(\mathbf{x})\mid\mathbf{x}),\] \[\mathcal{L}^{G}_{cls} =-\log P(c(\hat{\mathbf{x}})\mid\hat{\mathbf{x}}), \tag{7}\]
where \(P(\cdot)\) denotes the sample's probability of belonging to class \(c\).
Consequently, the generator \(G\) and the discriminator \(D\) are respectively trained by combining the above losses linearly.
\[\mathcal{L}_{D} =\mathcal{L}^{D}_{adv}+\mathcal{L}^{D}_{cls}+\lambda_{fre}\mathcal{ L}^{D}_{fre}+\lambda_{str}\mathcal{L}^{D}_{str},\] \[\mathcal{L}_{G} =\mathcal{L}^{G}_{adv}+\mathcal{L}^{G}_{cls}+\lambda_{fre} \mathcal{L}^{G}_{fre}+\lambda_{str}\mathcal{L}^{G}_{str}. \tag{8}\]
Note that in our implementation, \(\lambda_{fre}=\lambda_{str}=1\), and the detailed comparisons are presented in Sec. 4.5.
## 4. Experiments
### Experimental Setup
**Datasets.** We evaluate the effectiveness of the proposed method on three popular datasets, namely Flowers (Wang et al., 2017), Animal Faces (Wang et al., 2017), and VGGFace (Chen et al., 2017). These datasets are divided into seen (\(C_{s}\)) and unseen (\(C_{u}\)) classes respectively for training and testing as in (Chen et al., 2017; Wang et al., 2017; Wang et al., 2017). Tab. 1 provides the detailed splits of these datasets.
**Evaluation metrics and baselines.** Frechet Inception Distance (FID) (Feng et al., 2017) and Learned Perceptual Image Patch Similarity (LPIPS) (Feng et al., 2017) serve as the quantitative metric for comparison. FID reflects the synthesis quality via computing the similarity between the generated distribution and the real distribution, and lower FID indicates better performance. LPIPS delivers sample diversity by capturing the variation of generated images, and higher LPIPS means better diversity. Moreover, we leverage LoFGAN (Chen et al., 2017) and WaveGAN (Wang et al., 2017) as baselines and implement our proposed techniques upon their official code for evaluation. Noticeably, all evaluations strictly follow the prior arts (Chen et al., 2017; Wang et al., 2017; Wang et al., 2017) for a fair comparison.
**Implementation Details.** TexMod is implemented with four convolutional layers as shown in Fig. 2 to obtain modulated parameters. The input and output of each convolutional layer have the same dimension, facilitating the injection of semantic features. As for the StrctD, two convolutional layers and one adaptive-average-pooling layer are employed to encourage the model to capture the global layout and outline of images. The model is trained for \(100K\) iterations and the last checkpoint is used for evaluation. For each iteration, \(K\) (_e.g._, 1, 3) conditional images from one category randomly sampled from seen classes \(C_{s}\) are used for training. Adam optimizer (Kingmae et al., 2014) is used and the batchsize is 8. The learning rates for \(G\) and \(D\) are set to \(0.0001\) for the first half iterations, and decay to 0 linearly for the next \(50K\) iterations. All experiments are conducted on one NVIDIA 3090 with 24G memory and implemented with the PyTorch framework.
### Quantitative Results
**Three-shot image generation.** The upper part of Tab. 2 presents the comparison on 3-shot image generation tasks. Obviously, our proposed techniques bring consistent performance boosts under all tested datasets and baselines. For instance, our proposed techniques improve the FID and LPIPS scores of LoFGAN (_resp._, WaveGAN) on VGGFace from 20.31 (_resp._, 4.96) to 12.28 (_resp._, 3.96) and from 0.2869 (_resp._, 0.3255) to 0.3203 (_resp._, 0.3346). Despite being evaluated on different baselines, _i.e._, WaveGAN and LoFGAN, the proposed approach continuously improves the synthesis quality. For instance, by integrating our proposed techniques with WaveGAN, new state-of-the-art FID scores on all tested datasets are established, _i.e._, 39.51, 26.65, and 3.96 respectively on Flowers, Animal Faces, and VGGFace. Regarding the LPIPS score, our proposed techniques also consistently gain improvements with respect to different datasets and baselines. Such observations indicate the positive potential of our method for few-shot image generation.
**One-shot image generation.** When it comes to one-shot image generation, the fusion strategy might not work since only one input image is employed for generating novel images. We continue to use the implementations of LoFGAN and WaveGAN without fusion blocks for one-shot image generation tasks. The bottom part of Tab. 2 shows the quantitative results. Still, the synthesis performance under one-shot settings is substantially improved by our proposed techniques. Concretely, on WaveGAN, our method
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{Seen} & \multicolumn{2}{c}{Unseen} \\ & \#cls & \#img & \#cls & \#img \\ \hline Flowers & 85 & 3400 & 17 & 680 \\ Animal Faces & 119 & 11900 & 30 & 3000 \\ VGGFace & 1802 & 180200 & 552 & 55200 \\ \hline \hline \end{tabular}
\end{table}
Table 1. The splits of seen/unseen images (“img”) and classes (“cls”) on three datasets.
improves the FID from 55.28 to 52.89 (\(\downarrow\) 4.3%), 53.95 to 50.05 (\(\uparrow\) 7.2%), and 12.28 to 9.27 (\(\downarrow\) 24.51%) on Flower, Animal Faces and VGGFace respectively. Additionally, LPIPS scores also gain effective improvements under all settings, further demonstrating the effectiveness of our method.
The effectiveness of our proposed method is identified via combining them with different baselines (_i.e._, LoFGAN and WaveGAN) for different tasks (_i.e._, three-shot and one-shot generation). Our method consistently gains substantial boosts on synthesis fidelity and diversity under all settings. Namely, the proposed techniques indeed improve the synthesis quality and are complementary to existing approaches, which further manifests the compatibility.
**Computational cost**. Tab. 3 provides the computational burdens of our method with respect to the parameter amount, FLOPS, and training time. Clearly, our method introduces negligible costs (\(\uparrow\) %2.47) compared with LoFGAN and WaveGAN, while significantly improving the synthesis quality under various settings.
### Qualitative results
Here we qualitatively investigate the synthesis quality of our model. To be specific, after trained on seen classes \(\mathcal{C}_{\text{s}}\) in an episode way (_i.e._, providing \(K\) images from each class for training), the model is expected to produce novel images for a category given a few images from this category. Both three-shot and one-shot generation tasks are involved for a more reliable evaluation.
Fig. 1 and Fig. 3 provide the generated images of our method for one-shot and three-shot generation tasks respectively. It can be seen that our model could generate diverse and photorealistic images, even when only one input image is available. Besides, compared with images generated by LoFGAN, the overall outline, and structure of images synthesized by our model are more reasonable and plausible. For instance, our model performs significantly better regarding the outline and shape of petals and the coherence of Animal Faces. Furthermore, our model could produce images with rich semantic variances in terms of color, style, and texture, facilitating more diverse output. Namely, with delicate designs toward the global structure and textural modulation, our model gains convincing improvements in generation quality. More results can be found in the appendix.
### Augment for Downstream Classification
We further evaluate the synthesis quality by augmenting the training sets with generated images for downstream classification problems. Firstly, a ResNet-18 model is pre-trained on seen classes. Then, the unseen classes are divided into \(\mathcal{D}_{train}\), \(\mathcal{D}_{val}\), and \(\mathcal{D}_{test}\) respectively. The pre-trained ResNet-18 is further trained on \(\mathcal{D}_{train}\)
Figure 3. Qualitative comparison results of our method with LoFGAN. Images produced by our model performs better in term of the global structure (_e.g._, the outline and shape of petals and the coherence of Animal Faces) and semantic variance (_e.g._, different hair colors of Animal Faces and various expression of Human Faces).
(_i.e._, Base in Tab. 4) and tested on \(\mathcal{D}_{test}\). Finally, we augment \(\mathcal{D}_{train}\) by generating samples with our model to obtain \(\mathcal{D}_{aug}\) for comparison, the augmentation amount for Flowers, Animal Faces, and VGGFace are respectively 30, 50, and 50.
Tab. 4 showcases the classification results. As could be seen from the results, our model achieves higher accuracy (_i.e._, 86.09, 33.38, and 79.17 respectively on Flowers, Animal Faces, and VGGFace) for image classification when used as data augmentation. Together with the aforementioned qualitative and quantitative comparisons, the effectiveness and versatility of our method are further identified.
### Ablation Studies and Parameter Sensitivities
In this part, we ablate different modules to testify the efficacy of each component and investigate the loss weights of \(\lambda_{str}\) and \(\lambda_{freq}\).
**Module ablation.** We mute each module and keep other settings unchanged to probe their impacts. Tab. 5 presents the qualitative results. Despite being evaluated on different baselines (_i.e._, LoFGAN and WaveGAN) and datasets (_i.e._, Flowers and Animal Faces), the empirical results consistently reflect the efficacy of our proposed techniques. More precisely, the proposed StrcutD and FreD mainly contribute to the FID score (_e.g._, from 78.11 or 75.80 to 74.08 on Flowers, respectively), matching our goal to improve overall faithfulness. By contrast, TexMod pours more attention into
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Methods & \# params & FLOPS & training time(h) \\ \hline LoFGAN & 39.35M & 139.47G & 23.28 \\ WaveGAN & 39.33M & 139.24G & 23.17 \\ +Ours & 40.30M & 143.12G & 23.63 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Classification accuracy of augmentation. “Base” denotes no augmentation is performed.
\begin{table}
\begin{tabular}{l|l|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Setting} & \multicolumn{2}{c|}{Flowers} & \multicolumn{2}{c|}{Animal Faces} & \multicolumn{2}{c}{VGGFace} \\ & & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) \\ \hline FIGR [(6)] & 3-shot & 190.12 & 0.0634 & 211.54 & 0.0756 & 139.83 & 0.0834 \\ GMN [(2)] & 3-shot & 200.11 & 0.0743 & 220.45 & 0.0868 & 136.21 & 0.0902 \\ DAWSON [(32)] & 3-shot & 188.96 & 0.0583 & 208.68 & 0.0642 & 137.82 & 0.0769 \\ DAGAN [(1)] & 3-shot & 151.21 & 0.0812 & 155.29 & 0.0892 & 128.34 & 0.0913 \\ MatchingGAN [(17)] & 3-shot & 143.35 & 0.1627 & 148.52 & 0.1514 & 118.62 & 0.1695 \\ F2GAN [(20)] & 3-shot & 120.48 & 0.2172 & 117.74 & 0.1831 & 109.16 & 0.2125 \\ DeltaGAN [(18)] & 3-shot & 104.62 & 0.4281 & 87.04 & 0.4642 & 78.35 & 0.3487 \\ FUNIT [(34)] & 3-shot & 100.92 & 0.4717 & 86.54 & 0.4748 & - & - \\ DiscoFUNIT [(19)] & 3-shot & 84.15 & **0.5143** & 66.05 & 0.5008 & - & - \\ SAGE [(9)] & 3-shot & 41.35 & 0.4330 & 27.56 & **0.5451** & 32.89 & 0.3314 \\ \hline LoFGAN [(15)] & 3-shot & 79.33 & 0.3862 & 112.81 & 0.4964 & 20.31 & 0.2869 \\ + Ours & 3-shot & **74.08** & **0.3983** & **96.74** & **0.5028** & **12.28** & **0.3203** \\ \hline WaveGAN [(57)] & 3-shot & 42.17 & 0.3868 & 30.35 & 0.5076 & 4.96 & 0.3255 \\ + Ours & 3-shot & **39.51** & **0.3970** & **26.65** & **0.5109** & **3.96** & **0.3346** \\ \hline DAGAN [(1)] & 1-shot & 179.59 & 0.0496 & 185.54 & 0.0687 & 134.28 & 0.0608 \\ DeltaGAN [(18)] & 1-shot & 109.78 & 0.3912 & 89.81 & 0.4418 & 80.12 & 0.3146 \\ FUNIT [(34)] & 1-shot & 105.65 & 0.4221 & 88.07 & 0.4362 & - & - \\ DiscoFUNIT [(41)] & 1-shot & 90.12 & **0.4436** & 71.44 & 0.4411 & - & - \\ \hline LoFGAN [(15)] & 1-shot & 137.47 & 0.3868 & 152.99 & 0.4919 & 26.89 & 0.3208 \\ + Ours & 1-shot & **124.74** & **0.3900** & **147.87** & **0.4925** & **25.17** & **0.3267** \\ \hline WaveGAN [(57)] & 1-shot & 55.28 & 0.3876 & 53.95 & 0.4948 & 12.28 & 0.3203 \\ + Ours & 1-shot & **52.89** & **0.3924** & **50.04** & **0.5002** & **9.27** & **0.3214** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Comparisons of FID (\(\downarrow\)) and LPIPS (\(\uparrow\)) scores on images generated by different methods for unseen categories. The marked results marked with different colors denote we evaluate our methods based on the top of their official implementations.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Flowers} & \multicolumn{2}{c}{Animal Faces} \\ & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) \\ \hline LoFGAN + “full” & **74.08** & **0.3983** & **96.74** & **0.5028** \\ w/o TextMod & 74.41 & 0.3882 & 97.43 & 0.4970 \\ w/o StructD & 78.11 & 0.3952 & 109.43 & 0.5001 \\ w/o PreD & 75.80 & 0.3928 & 98.32 & 0.5010 \\ \hline WaveGAN + “full” & **39.51** & **0.3970** & **26.65** & **0.5109** \\ w/o TextMod & 40.23 & 0.3859 & 26.90 & 0.5069 \\ w/o StructD & 41.28 & 0.3956 & 29.82 & 0.5096 \\ w/o FreD & 42.04 & 0.3942 & 27.05 & 0.5100 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Ablation studies to probe the efficacy of our proposed techniques. “full” denotes all proposed modules are used.
improving the synthesis diversity. Namely, removing TexMod leads to severe degradation in the LPIPS score (_e.g.,_ from 0.3970 to 0.3859 on Flowers). Additionally, by combining these techniques, we obtain the best synthesis quality in terms of FID and LPIPS scores. That is, they complement each other for further improvements.
**Constraint strength.** Recall that StructD and FreD are involved as loss terms for optimization in implementation. Therefore, here we further perform ablative comparisons on their constraint strength to investigate the parameter sensitivities. Specifically, we first set \(\lambda_{str}\) and \(\lambda_{fre}\) to zero to obtain the baseline FID score on the VGGFace dataset. Then we investigate a proper value for \(\lambda_{str}\) in [0.1, 1, 10, 100], wherein \(\lambda_{fre}\) is set to 0. After obtaining an appropriate coefficient for \(\lambda_{str}\), we turn to explore \(\lambda_{fre}\) in [0.1, 1, 10, 100]. Finally, suitable choices for both \(\lambda_{str}\) and \(\lambda_{fre}\) could be derived. Notably, TexMod is not used here to avoid unnecessary impacts.
Tab. 6 presents the quantitative results. We could tell that \(\lambda_{str}=\lambda_{str}=1\) fit best to our goal. Too small or strong coefficients might either fail to enforce the model to capture corresponding information or surpass other constraints thus leading to imbalanced training. More results can be found in the appendix.
### Comparison of Various Numbers of Shots
In order to investigate the performance of our model under different numbers of input images, we evaluate our model with different numbers of input images, _i.e.,_\(K\in\)[3, 5, 7, 9]. We add our techniques on LoFGAN and test on the Flowers dataset here.
Fig. 4 presents the FID scores under different \(K\)-shot generation tasks. We could tell that better synthesis performance could be gained via 1) involving more input images for training, or 2) increasing the number of testing images for evaluation. Such observation is reasonable as more images provide more semantic variances and meaningful representations for the synthesis.
### Cross-domain Generation
Recall that the model is expected to capture the knowledge of learning how to produce novel images instead of mimicking the training distribution. To further evaluate how well the model could transfer learned knowledge to irrelevant domains, we perform a cross-domain generation here. Concretely, the model is first trained on the VGGFace dataset. Then, we input a few images from the Animal Faces dataset for testing.
Fig. 5 shows the qualitative results. Interestingly, although the synthesis quality slightly drops, our model can still produce acceptable images under such a setting, demonstrating that the model indeed captures the ability of generating rather than memorizing training images. Quantitative results are provided in appendix.
## 5. Conclusion
In this work, we propose a general few-shot image generation model with two delicate designs, namely textural modulation (TexMod) and structural discrimination (StructD). Firstly, the representative ability and structural awareness of the discriminator are improved by explicitly providing global guidelines to it, facilitating a more faithful generation. Secondly, we achieve more fine-grained representation fusion by injecting external semantic layouts into internal textures. Additionally, being parameterized by the discriminator's feedback, TexMod is capable of maintaining the synthesis fidelity. As a result, our model could produce high-quality samples with superior diversity and faithfulness, and the generated images could be leveraged as augmentation for improving downstream classification tasks. Furthermore, our proposed techniques complement existing approaches and facilitate cross-domain generation.
###### Acknowledgements.
This work is supported by Shanghai Science and Technology Program under Grant No. 21511100800, Natural Science Foundation of China under Grant No. 62076094, Shanghai Science and Technology Program under Grant No. 20511100600, and Natural Science Foundation of China under Grant No. 62002193.
\begin{table}
\begin{tabular}{c c c} \hline \hline \(\lambda_{str}\) & \(\lambda_{fre}\) & FID (l) \\ \hline
0 & 0 & 4.96 \\ \hline
0.1 & 0 & 4.89 \\
1 & 0 & **4.37** \\
10 & 0 & 5.01 \\
100 & 0 & 49.12 \\ \hline \hline \end{tabular}
\begin{tabular}{c c c} \hline \hline \(\lambda_{str}\) & \(\lambda_{fre}\) & FID (l) \\ \hline
0 & 1 & 4.72 \\ \hline
1 & 0.1 & 4.35 \\
1 & 1 & **4.03** \\
1 & 10 & **4.29** \\
1 & 100 & 8.52 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Ablation studies on the loss weights \(\lambda_{str}\) and \(\lambda_{fre}\).
Figure 4. Comparison results under different shots. The dotted lines represent the average slope, demonstrating the overall trend of the FID scores as the sample size increases.
Figure 5. Cross-domain generation results. The model is trained on VGGFace dataset while tested on Animal Faces dataset. |
2303.14475 | Informed Machine Learning, Centrality, CNN, Relevant Document Detection,
Repatriation of Indigenous Human Remains | Among the pressing issues facing Australian and other First Nations peoples
is the repatriation of the bodily remains of their ancestors, which are
currently held in Western scientific institutions. The success of securing the
return of these remains to their communities for reburial depends largely on
locating information within scientific and other literature published between
1790 and 1970 documenting their theft, donation, sale, or exchange between
institutions. This article reports on collaborative research by data scientists
and social science researchers in the Research, Reconcile, Renew Network (RRR)
to develop and apply text mining techniques to identify this vital information.
We describe our work to date on developing a machine learning-based solution to
automate the process of finding and semantically analysing relevant texts.
Classification models, particularly deep learning-based models, are known to
have low accuracy when trained with small amounts of labelled (i.e.
relevant/non-relevant) documents. To improve the accuracy of our detection
model, we explore the use of an Informed Neural Network (INN) model that
describes documentary content using expert-informed contextual knowledge. Only
a few labelled documents are used to provide specificity to the model, using
conceptually related keywords identified by RRR experts in provenance research.
The results confirm the value of using an INN network model for identifying
relevant documents related to the investigation of the global commercial trade
in Indigenous human remains. Empirical analysis suggests that this INN model
can be generalized for use by other researchers in the social sciences and
humanities who want to extract relevant information from large textual corpora. | Md Abul Bashar, Richi Nayak, Gareth Knapman, Paul Turnbull, Cressida Fforde | 2023-03-25T14:08:21Z | http://arxiv.org/abs/2303.14475v1 | An Informed Neural Network for Discovering Historical Documentation assisting the Repatriation of Indigenous Ancestral Human Remains
###### Abstract
Among the pressing issues facing Australian and other First Nations peoples is the repatriation of the bodily remains of their ancestors, which are currently held in Western scientific institutions. The success of securing the return of these remains to their communities for reburial depends largely on locating information within scientific and other literature published between 1790-1970 documenting their theft, donation, sale, or exchange between institutions. This article reports on collaborative research by data scientists and social science researchers in the Research, Reconcile, Renew Network (RRR) to develop and apply text mining techniques to identify this vital information. We describe our work to date on developing a machine learning-based solution to automate the process of finding and semantically analyzing relevant texts. Classification models, particularly deep learning-based models, are known to have low accuracy when trained with small amounts of labelled (i.e. relevant/non-relevant) documents. To improve the accuracy of our detection model, we explore the use of an Informed Neural Network (INN) model that describes documentary content using expert-informed contextual knowledge. Only a few labelled documents are used to provide specificity to the model, using conceptually related keywords identified by RRR experts in provenance research. The results confirm the value of using an INN network model for identifying relevant documents related to the investigation of the global commercial trade in Indigenous human remains. Empirical analysis suggests that this INN model can be generalized for use by other researchers in the social sciences and humanities who want to extract relevant information from large textual corpora.
Informed Machine Learning, Centrality, CNN, Relevant Document Detection, Repatriation of Indigenous Human Remains +
Footnote †: journal: Journal Tie
1
Footnote 1: Queensland University of Technology, Brisbane, Australia
2
Footnote 2: Australian National University, Canberra, Australia
3
Footnote 3: University of Tasmania, Hobart, Australia
## Introduction
Text mining is a research field that has made significant progress in developing techniques for uncovering and extracting knowledge from large collections of documents. However, these techniques have mostly been applied to pre-existing textual corpora. In the humanities and social sciences, there are many pressing research questions related to past human thought and behavior for which relevant collections of historical documents do not exist in machine-readable form. Researchers must therefore use search engines to locate, gather, and analyze documentary content about past people, places, and events. This presents a challenge, as search engine results may be biased by algorithms and indexing practices, and may not accurately reflect all relevant content.
Researchers of the Research, Reconcile, Renew Network (RRR) currently face a significant challenge as they work to assist First Nation communities in repatriating their ancestors' remains from Western scientific collections. The work involves confirming the identity, current location, and other information that is necessary for a successful reburial. To accomplish this, RRR researchers invest substantial time and intellectual energy in tracing the movement of the remains after they were stolen through as yet only partially identified networks of collectors, donors, private sellers, commercial dealers, and scientists associated with museums and other institutions with anthropological collections. Current museum catalogues provide some data for analysis, but they commonly mark the endpoints of the movement of remains through these networks, which were active from the turn of the 19th century and continue to exist today, using social media platforms, with direct messaging acting as a secure private communication channel (Huffer et al., 2019). Moreover, relying purely on the sum of information within present-day museum catalogues runs the risk of making potentially devastating mistakes in the identification of remains (Frode et al., 2020).
Many relevant books and scientific journals are now available digitally (usually in PDF format). So too are ephemeral documentary sources that could be said to be the historical equivalents of modern social media, such as newspaper reportage of museum donations and exhibitions, advertisements for auctions and accounts of the sale of private anthropological collections. All of these sources have proven invaluable in reconstructing the fate of
an ancestral remains in historically active networks (Knapman et al., 2020). However, the practicality of finding and thoroughly investigating these diverse sources for potentially crucial information for reparting communities is currently severely constrained by the fact that they are distributed across many different online locations.
Digital library initiatives by national and major research libraries since the mid-1990s have made it possible to systematically search large historical corpora of books, journals and newspapers. However, finding relevant information about the scientific then and uses of the ancestral remains of Australian and other First Nation peoples within these corpora is difficult. Often these corpora comprise publications, the content of which covers a wide spectrum of topics. This is especially true of newspapers. For RRR researchers, the usefulness of keyword-based search services provided by the creators of corpora has proven to greatly depend on how queries are formed. If the search terms used are too specific, few or no relevant documents will be returned. If terms are generic, large numbers of irrelevant documents will be returned. Successful searching thus relies heavily on expert 'human' users impractically investing time and intellectual energy in searching and filtering results - when pressing needs of repartiating communities whom they are assisting need addressing in relatively short time-frames. Assessing whether documents returned by a search engine are relevant to the needs of the user is consequently the foremost text mining problem to be solved.
The use of machine learning to automate the process of identifying relevant documents is the obvious solution, but the text classification task involved is challenging due to the historical nature of the documents. There is a high risk of noise and error occurring during the process of reproducing them as OCRed text due to their physical condition. Even if good digital transcripts are obtained, there is the need to account for linguistic changes, shifts in conceptual vocabularies, and the use of different medical and scientific nomenclatures, as well as the possibility of texts comprising several languages. Existing language models, such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), are likely to perform poorly in recognizing the semantics and contextual relations within these documents because they are semantically and structurally different from the modern documents that are used to pre-train these models. Furthermore and most importantly, there is no labelled dataset to effectively train classification models, which is a fundamental necessity for building a text classifier.
In this paper, we describe work on designing a supervised learning model, namely an Informed Neural Network (INN) to automate the detection of relevant documents. A set of keywords (e.g. as given in Table 1) has been provided by RRR researchers. Based on their expert knowledge they have chosen keywords that are highly probable to appear in documents of interest. Besides, they have informed their expert knowledge of how the identified keywords are likely to appear in relevant documents. This provides the basis for a deep learning-based classification model trained with a small portion of labelled documents. Training a classification model, especially a deep learning-based classification model, with a small portion of documents will overfit the training data (Bashar et al., 2018, 2020, 2021). But we conjecture that by integrating expert knowledge within a classification model, we can reduce the number of labelled documents required for building an accurate model. We investigate how such prior knowledge can be integrated into the machine learning model for relevant document detection. Specifically, we investigate what kind of knowledge should be included, how this knowledge can be represented for a deep learning-based text classifier and wherein the machine learning pipeline this knowledge should be integrated. Our experimental results in Section 3 show that by integrating Expert Informed Knowledge into the classification model can significantly improve the accuracy while requiring a small number of labelled documents.
In sum, this paper makes four major contributions. 1) It proposes using machine-based deep learning as a timely and effective means of greatly improving the accuracy and timeliness of research to assist First Nation communities in the repartition and reburial of the remains of their ancestors. 2) It proposes a new technique of document representation using context and expert-informed knowledge to improve the performance of deep learning based text classifiers. 3) It proposes a novel Informed Neural Network model for relevant document detection. By including expert informed knowledge within classification boundary decisions, the model requires a significantly less number of labelled documents for training without sacrificing accuracy. 4) It proposes a new centrality measure, 'Keyword Centrality', for mathematically modelling expert knowledge of how keywords are likely to appear in relevant documents.
## Related Work
This section presents the related work in the areas of text classification and informed Machine Learning.
### Relevant Documents Detection: Text Classification
Detecting relevant documents returned by a search engine falls into the research area of text classification. Popular traditional text classification algorithms include Random Forest (RF) (Liaw et al., 2002), k-Nearest Neighbours (kNN) (Weinberger and Saul, 2009), Ridge Classifier (RC) (Hoerl and Kennard, 1970) and many others. Performance of these traditional machine learning algorithms depends on feature engineering and feature representation (Davidson et al., 2017; Xiang et al., 2012). Besides, these algorithms are based on bag-of-words representation. The bag-of-words approach is straightforward and usually has a high recall, but it results in a higher number of false positives because the presence of general words and keywords causes these documents to be misclassified as relevant (Kwok and Wang, 2013).
Detecting relevant documents, especially historical documents (e.g. relevant to Indigenous Human Remains) is a challenging text classification task because the words and language used in those documents are quite different from modern text documents. The problem is further challenged by the fact that most of the historical documents are OCRed for digitization that introduces a lot of noise (e.g. wrong character and word detection, failing to keep the right text structures, mixed-up multiple stories as OCR fails to separate adjacent stories in the paper face).
Text classification research applications use syntactic features to identify the relevant documents. For instance, if someone was researching historical examples of anti-semitism some relevant keywords would be _kill_ and _Jews_ (verb and noun occurrences) (Gitari et al., 2015) and the syntactic structures \(<\)intensity\(><\)user intent\(>\)\(<\)date target\(>\) (e.g. \(I\)_s*cking hate Jews people_) could be used for detecting hate speech (Silva et al., 2016) in documents. Such feature engineering requires both linguistic and domain expertise that is expensive.
Recently, neural network-based classifiers have become popular as they automatically learn abstract features from the given input feature representation (Badjatiya et al., 2017). Input to these algorithms can be various forms of feature encoding, including many of those used in traditional methods. Algorithm design in this category focuses on the selection of the network topology to automatically extract useful abstract features. Popular network architectures are Convolutional Neural Network (CNN), Recurrent Neural Networks (RNN) and Long Short-Term Memory network (LSTM). CNN is well known for extracting patterns similar to phrases and \(n\)Grams (Badjatiya et al., 2017). On the other hand, RNN and LSTM are effective for sequence learning such as order information in text (Badjatiya et al., 2017). The CNN model has been successfully used for sentence and noisy tweet classification (Kim, 2014; Bashar et al., 2018).
The most recent success of neural network-based classifiers comes from transfer learning (Bashar et al., 2020, 2021; Bashar and Nayak, 2021; Liu et al., 2019; Devlin et al., 2018; Yang et al., 2019). Transfer learning is an approach in machine learning where knowledge gained in one model, such as through pretraining, is reused in another model (Bashar et al., 2021, 2020). Common pretraining in text data is conducted through language models. Popular pretrained models are RoBERTa (Liu et al., 2019), BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019). However, experiments in Section show that existing transfer learning-based models such as RoBERTa (Liu et al., 2019) do not perform well on historical datasets. These results reveal that if there are not enough training data and the pretraining domain of transfer learning-based models is significantly different from the domain where the model will be applied, neural network and transfer learning-based models do not perform well.
In humanities and social sciences, large collections of labelled data do not exist to effectively train a deep learning-based classification model. The number of documents initially returned by the search engine such as Trove (Wha b) for the keyword set (as shown in Table 1) with OR operator is massive for two reasons: (a) relevant documents reside in the general span of any human remains while the researchers need documents of Indigenous human remains, e.g. news articles on any human remains collection, auction, selling, discovery, outcome scientific analysis of human remains. (b) It is difficult to select which sub-set of keywords should be used with AND operator in the search engine to perform specific searches. An incorrect or very specific combination can miss many important documents, while a generic combination can return a lot of documents including many non-relevant.
To the best of our knowledge, there exists no work that fits the mentioned purpose of finding relevant documents using a text classifier. The existing literature of digital humanities mostly uses (a) Named Entity Extraction (NER) for identifying key information from a document collection, (b) Word Embedding or word vector for finding semantically similar words or words that appear together, and (c) Topic modelling for finding subjects of discussion.
### Informed Machine Learning
Although machine learning has achieved great success, it has limits when dealing with a small set of training data (Bashar et al., 2018, 2020, 2021). Integration of prior knowledge such as that informed by experts into the training process can potentially address the problem of small training datasets (von Rueden et al., 2019). This integration process is known as informed machine learning and is becoming common in many applications (von Rueden et al., 2019). Informed machine learning investigates how to improve machine learning models by integrating prior knowledge into the learning process. It combines data- and knowledge-driven approaches.
A framework of Informed Machine Learning adopted in our work is shown in Figure 1. It enables experts to incorporate their domain knowledge into the machine learning process along with the datasets used for training the model. There is growing interest in informed machine learning in research areas such as engineering applications (Karniadakis et al., 2016) and computer vision (Marino et al., 2016). In these works, prior knowledge is used to define informative priors (Heckerman et al., 1995) that regularise or constraint learning algorithms. For example, logic rules (Diligenti et al., 2017; Akn) and algebraic equations (Daw et al., 2017; Stewart and Ermon, 2016) have been used to constraint loss functions. Knowledge graphs are used to enhance neural networks with information about relations between instances (Battaglia et al., 2016), as well as to improve classification through CNNs by capturing relations between detected objects in computer vision (Marino et al., 2016). Knowledge graphs can be integrated into the learning process either explicitly or implicitly (Battaglia et al., 2016; Marino et al., 2016; Jiang et al., 2018). Explicit systems use graph propagation and attention mechanisms, while implicit systems use graph neural networks with relational inductive bias (von Rueden et al., 2019).
Informed machine learning for text data is mostly based on the principle of combinatorial generalisation that states that we construct new inferences, predictions, and behaviours from known building blocks (Battaglia et al., 2018). Combinatorial generalisation works by biasing the learning towards structured representations and computations, and in particular, systems that operate on graphs. Human cognitive mechanism represents a complex system with compositions of entities and their interactions (Battaglia et al., 2018). Humans draw analogies by aligning the relational structure between two domains. They can draw inferences on one domain based on the corresponding structural knowledge of another (Hummel and Holyoak, 2003). Combinatorial generalisation is recommended as a top priority for advancing modern AI (Battaglia et al., 2018).
Inspired by the theory of combinatorial generalisation, we use four centrality measures, as described in Section, to represent the keywords and their word co-occurrence
network based interactions in a given document (as shown in Fig. 1). Although integrating knowledge into machine learning for text classification is done through feature engineering or incorporating a sub graph (ontology) from an external source, there is not enough work on informed machine learning that utilise internal graph or network present in the text. Feature engineering requires manual efforts, and ontologies are mostly biased toward specific domains. To the best of our knowledge, this will be the first work to integrate an (internal) word co-occurrence network based on expert defined keywords into a deep learning model for text representation and classification.
## 2 Relevant Document Detection: Problem Formulation
Assume a historian needs to collect relevant documents to investigate an event of national interest. For example, researchers of the RRR network require to collect documents that are relevant to Indigenous Human Remains. This is done to trace information about the location and provenance of ancestral remains held in museums and other collecting institutions. This assists them in understanding the history, effects and opportunities of repratiation and building an evidence base for the future. A set of keywords (e.g. as given in Table 1) is identified assuming what might appear in those documents. However, those keywords can also appear in other documents. As a result, a search engine (such as \(\text{Trove}^{*}\) (Wha b)) returns a large number of documents where most of the documents are non-relevant to the investigation. With limited resources, only a small portion of the documents can be checked and labelled whether they are relevant for the investigation or not. With the advances in text mining, a classification model trained on labelled data can be developed that can detect relevant documents within the rest of the returned documents.
Relevant document detection is a complex problem because relevance is determined by the implicit and explicit information of the domain that can be subtle and determined only through its context in the text and domain specific knowledge. Let \(X\) be a text dataset that contains \(N\) features or words and \(C\) classes. Let \(\mathbf{x}=\langle x_{1},\dots x_{n}\rangle\) be a vector representing an instance in \(X\). Let \(K=\{k_{1},k_{2},\dots,k_{n}\}\) be a set of keywords. Let \(Y\) be the set of \(C\) classes.
The relevant document detection is defined as a classification task that assigns an instance to a relevance class (or category) \(Y_{c}\) based on the feature vector \(\mathbf{x}\) and keywords \(K\); i.e. \(f\in\mathcal{F}:(X,K)\to Y\), where \(f(\mathbf{x},K)=\max_{Y_{c}}p(Y_{c}|\mathbf{x},K)\). This ascertains that we need to know \(p(Y_{c}|\mathbf{x},K)\) for the relevance detection task. The joint probability \(p(\mathbf{x},Y_{c},K)\) of \(\mathbf{x}\), \(Y_{c}\) and \(K\) can be written as
\[p(\mathbf{x},Y_{c},K)=p(Y_{c}|\mathbf{x},K)p(\mathbf{x},K) \tag{1}\]
where \(p(\mathbf{x},K)\) is the prior probability distribution. The prior probability \(p(\mathbf{x},K)\) can be seen as a regulariser for \(p(Y_{c}|\mathbf{x},K)\) that can regularise modelling of the associated uncertainties of \(p(\mathbf{x},Y_{c},K)\)(Bashar et al., 2020). As \(p(\mathbf{x},K)\) does not depend on \(Y_{c}\), this means that \(p(\mathbf{x},K)\) can be estimated independent of the class level \(Y_{c}\). That is, \(p(\mathbf{x},K)\) can be estimated from prior knowledge of the interaction between \(K\) and \(\mathbf{x}\), e.g. interaction of keywords \(K\) and \(\mathbf{x}\) when they are represented in a graph or network. Estimating \(p(\mathbf{x},K)\) from prior knowledge and integrating it into a learning algorithm (shown as Equation 1) can be considered a form of Informed Machine Learning (von Rueden et al., 2019). A framework of Informed Machine Learning adopted in this research is given in Figure 1. The joint probability \(p(\mathbf{x},K)\) can be written as
\[p(\mathbf{x},K)=p(K|\mathbf{x})p(\mathbf{x}) \tag{2}\]
where \(p(\mathbf{x})\) is the prior probability distribution of \(\mathbf{x}\). \(p(\mathbf{x})\) can be learned through transfer learning as described in our prior works (Bashar et al., 2020, 2021, 2020) or it can be assumed as uniform distribution for simplicity. The term \(p(K|\mathbf{x})\) can be interpreted as the probability of keyword set \(K\) interacting in the text instance \(\mathbf{x}\), while \(p(K)\) is the probability of keyword set \(K\) interacting in any text instance. Without considering the pattern of how \(K\) appears in \(\mathbf{x}\) versus other documents, \(p(K|\mathbf{x})\) and \(p(K)\) can be very similar as keywords can appear in both relevant and non-relevant documents. Hence, \(p(K|\mathbf{x})\) may not bring useful information to improve Equation 2 and thereby will not improve the classification task in Equation 1.
In this research, we propose and utilise four centrality measures to find patterns of the keyword set \(K\) in each text instance \(\mathbf{x}\) of the dataset \(X\). We denote the patterns of keyword set \(K\) in \(\mathbf{x}\) as \(\mathcal{K}\) and rewrite the Equations 2 and 1
Figure 1: Informed Machine Learning Framework Adopted in this Research
as Equations 3 and 4 respectively.
\[p(\mathbf{x},\mathcal{K})=p(\mathcal{K}|\mathbf{x})p(\mathbf{x}) \tag{3}\] \[p(\mathbf{x},Y_{c},\mathcal{K})=p(Y_{c}|\mathbf{x},\mathcal{K})p( \mathbf{x},\mathcal{K})=p(Y_{c}|\mathbf{x},\mathcal{K})p(\mathcal{K}|\mathbf{x })p(\mathbf{x}) \tag{4}\]
We propose a model, namely Informed Neural Network (INN), to integrate this Expert Informed Knowledge to the deep learning algorithm. We detail this model next.
## Informed Neural Network Model for Relevant Document Detection
We propose to improve the machine learning model by integrating prior or expert knowledge into the training process by combining data-and knowledge-driven approaches. We propose an Informed Neural Network (INN) model for relevant document classification. The INN model uses a hybrid information source that consists of data and prior knowledge. The prior knowledge is pre-existent and independent of learning algorithms. The proposed INN model is shown in Figure 2. It has two main parts: (a) Expert Informed Knowledge that constructs the _Expert Informed Knowledge_ matrix; and (b) Context text that constructs the _Embedded Text_ matrix.
### Expert Informed Knowledge
Expert knowledge can be considered to be intuitive knowledge that is held and implicitly validated by a particular group of experts (von Rueden et al., 2019). For a text collection, it is very difficult to source this knowledge manually as well as it poses the risk of subjectivity. In this paper, we use experts (i.e. information seekers) to identify a set of keywords that will be used to perform a search and collect the documents that may contain those keywords. We propose to explore the patterns of keywords that appear in each document and the collection, and use the patterns as expert knowledge.
We aim to capture the expert knowledge of how the information seekers discern relevant hits based on their accumulated experience in understanding keywords, relationships between those keywords and the context of those keywords in the article. The following issues are indicative of the problems in capturing this information:
* Many articles in _digitized_ (historical) newspapers appear as a composite of multiple articles (due to the ineffectiveness of OCR technologies in identifying borderlines). This means that keywords appear in the same article; however, many are unrelated articles and therefore do not signify a relevant hit.
* Not all keywords are equal. Some are more important than others. For example, an article with the keywords 'grave, funeral, spulcher, burial' will more than likely not be relevant. What is needed is keywords across the different categories of terms for _Identity_, _Ancestral Remains_, _Funerary Sites_ and _Mode of Acquisition_, as detailed in Table 1.
* Keywords can be used to expand the number of results rather than limit the number of results. The historical relationship of keywords needs to be coded as expert knowledge. Within the listed categories of keywords in Table 1, there are various terms of description that were historically more common or used in a particular way. For example, 'Aboriginal' or 'Australian' was more common in scientific literature to describe Australian Indigenous remains than 'rigger' or 'Black' although not exclusively. The latter words are associated with pejorative articles and in the case of 'Black' could generate many irrelevant hits. The experienced human researcher can make this determination very quickly.
As mentioned in Section, RRR researchers informed their prior expert knowledge of how keywords are likely to appear in relevant documents (i.e. keywords appearing patterns). In this paper, we attempt to model this prior knowledge by utilising three centrality scores. Centrality is a fundamental and popular topic in social network studies. Centrality ranks nodes within a network or graph according to their position in the network. Centrality is used in many applications such as identifying influential entities in a social network (Bonacich, 2015), key nodes in the network (Borgatti, 2005), super-spreaders of disease (Bauer and Lizier, 2012) and Semantic Brand Score analysis (Fronzetti Colladon, 2018).
We present each document as an undirected graph or network where each unique word is a node in the graph. Let us consider a word co-occurrence network (Colladon and Scettri, 2021) constructed from a document using a sliding window (i.e. words are assumed connected if they appear in the same sliding window). It is represented as an undirected graph \(G=(V,E)\), where \(V\) is the set of nodes corresponding to the unique words in the document and \(E=(a,b):a,b\in V\) is the set of edges where each edge represents two connected words in the document. Figure 3 shows an example of the undirected graph representing the word co-occurrence network of a document relevant to Indigenous human remains.
For a given keyword, we propose to use four centrality scores that reveal probabilistic interaction between a text and the patterns of a given keyword set. These four centrality scores are _Keyword Centrality_, _Betweenness Centrality_, _Degree Centrality_ and _Prevalence_. We propose a Keyword Centrality score based on the spreading activation theory (Collins and Loftus, 1975) and estimate it as described below. Degree Centrality and Betweenness Centrality are two of the most famous centrality estimates (Fronzetti Colladonid and Naldi, 2020). The fourth score Prevalence is commonly used in relevance discovery (Bashar and Li, 2018; Alharbi et al., 2018).
**Keyword Centrality**: Keyword Centrality is proposed to estimate the relative interaction importance of keywords based on their co-occurrence patterns in a document. In other words, it can indicate how relevant a document is to a specific keyword \(\hat{w}\) in a set of keywords \(\hat{W}\). The centrality of a node in a network strongly depends on its neighbouring nodes. For example, the popular _Page Rank_ (Wha a) measure calculates how 'important' a web page is according to its connections, a web page connected with important pages receives a higher score. From a social network point of view, a few connections with important nodes can be enough to make a node important (Fronzetti Colladonid and Naldi, 2020). We argue that the highly connected nodes can
be considered important, similar to the real world, where the social context collaboration between important persons makes them more influential. From the users' point of view, keywords are important indicators of the information that they are looking for in a document. Therefore, we consider keywords as important nodes in the co-occurrence network. We conjecture that a keyword connected to other keywords in the co-occurrence network should get higher centrality scores as compared to the keywords that are not.
We adopt the spreading activation theory (Collins and Lofus 1975) to estimate Keyword Centrality. According to this theory, a semantic search in a network can be viewed as an activation spreading from two or more nodes corresponding to the keywords until an intersection is found. The activation spreads from a node by decreasing gradient (Collins and Lofus 1975). That is, the activation is similar to a signal from a source that is attenuated as it travels outward.
Let us suppose that every keyword node spreads activation (i.e. we assume other words do not spread activation) along the co-occurrence network through its edges concurrently. The amount of activation reached to node \(\hat{w}\) from node \(\hat{w}\) per unit of distance is \(\frac{a}{d(\hat{w},\hat{w})}\), where \(a\) is the activation initiated at \(\hat{w}\) and \(d(\hat{w},\hat{w})\) is the shortest path distance between nodes \(\hat{w}\) and \(\hat{w}\) in the network. When there is no path in the network between \(\hat{w}\) and \(\hat{w}\), we assume \(d(\hat{w},\hat{w})=\infty\), and consistently, the amount of spreading activation is 0. The Keyword Centrality of \(\hat{w}\) in network \(G\) is the total amount
Figure 3: A word co-occurrence network generated from a document relevant to Indigenous human remains. Red nodes represent the keywords used in information seeking. Blue nodes represent the words present in the document.
Figure 2: The Proposed Informed Neural Network Model Architecture
of activation reached to \(\hat{w}\) over the network and its initial activation, which is estimated as follows.
\[KC(\hat{w})=a+\sum_{\hat{w}\in\hat{W}-\hat{w}}\frac{a}{d(\hat{w},\hat{w})}\]
This research uses initial activation \(a=1\). In this case, Keyword Centrality \(KC(\hat{w})\) of a keyword \(\hat{w}\) can be interpreted as its closeness (inverse of distance) to other keywords in the co-occurrence network, which is similar to proximity search (Tao and Zhai, 2007). Figure 4 shows an example of the calculation of KC. In Network 1, the initial activation for each blue, orange and green node is 1. The amount of activation reached from the blue node to the orange node is \(\frac{1}{2}\) and the amount of activation reached from the green node to the orange node is \(\frac{1}{2}\). The total amount of activation reached to the orange node over the network is \(\frac{1}{1}+\frac{1}{2}\). Therefore, KC for the orange node (keyword) in Network 1 is \(1+\frac{1}{1}+\frac{1}{2}=2.5\). Similarly, KC for the blue node and green node are \(1+\frac{1}{1}+\frac{1}{3}=2.33\) and \(1+\frac{1}{2}+\frac{1}{3}=1.83\) respectively.
The higher the closeness, the more important the keyword is. For example, KC for the orange node (keyword) in Network 1 of Figure 4 is \(1+\frac{1}{1}+\frac{1}{2}=2.5\). The higher the closeness, the more important the keyword is.
**Betweenness Centrality**: Betweenness Centrality \(BC(\hat{w})\) of keyword \(\hat{w}\) estimates its connectivity with respect to a general discourse (i.e. which social context the keyword is representing in the document) (Franetti Colladon, 2018). \(BC(\hat{w})\) represents the ability of keyword \(\hat{w}\) to act as a bridge between nodes in the co-occurrence network (Freeman, 1977) and show their dependence on the keyword \(\hat{w}\). Connectivity is widely used in social network analysis as a measure of influence or control of information that goes beyond direct links. It is estimated as
\[BC(\hat{w})=\sum_{j\neq k}\frac{s_{jk}(\hat{w})}{s_{jk}}\]
where \(s_{jk}\) is the number of the shortest paths linking any two nodes \(j\) and \(k\) in a co-occurrence network, and \(s_{jk}(\hat{w})\) is the number of shortest paths that contain the keyword \(\hat{w}\). For example, in Network 2 of Figure 4, BC for the green node (keyword) is \(\frac{0}{4}+\frac{1}{4}+\frac{1}{1}+\frac{1}{2}+\frac{0}{1}+\frac{1}{1}+\frac{ 1}{2}+\frac{1}{1}+\frac{1}{2}+\frac{1}{1}+\frac{1}{1}+\frac{1}{1}+\frac{0}{1}+ \frac{1}{2}=9\).
**Degree Centrality**: Degree Centrality \(DC(\hat{w})\) of keyword \(\hat{w}\) estimates the heterogeneity of words surrounding the keyword \(\hat{w}\)(Franzetti Colladon, 2018). The value of \(DC(\hat{w})\) is high when \(\hat{w}\) co-occurs with many different words. Whereas the value of \(DC(\hat{w})\) is low when \(\hat{w}\) co-occurs with a small set of words. Degree Centrality is estimated by counting the number of edges in a co-occurrence network directly connected to the keyword \(\hat{w}\)(Freeman, 1978).
\[DC(\hat{w})=\sum_{j}a_{j}(\hat{w})\]
where \(a_{j}(\hat{w})\) is the adjacency between keyword \(\hat{w}\) and node \(j\) in the co-occurrence network. Value of \(a_{j}(\hat{w})\) is 1 if \(\hat{w}\) and \(j\) are directly connected, otherwise 0. For example, in Network 3 of Figure 4, DC score for the blue node (keyword) is 7 as there are seven nodes directed connected to the blue node. A high degree centrality score implies that a keyword co-occurs with a larger than average number of words in a document.
**Prevalence**: Prevalence \(PREV(\hat{w})\) of keyword \(\hat{w}\) measures the number of times \(\hat{w}\) occurs in a document (Franzetti Colladon, 2018). Prevalence is associated with the idea of keyword awareness assuming that when the keyword occurs frequently, its recognition and recall are increased. \(PREV(\hat{w})\) of keyword \(\hat{w}\) is calculated as follows.
\[PREV(\hat{w})=f(\hat{w})\]
where \(f(\hat{w})\) is frequency of \(\hat{w}\) in the document.
**Expert Informed Knowledge Matrix**
A set of keywords (identified by domain experts or information seekers) will form the basis of an Expert Informed Knowledge matrix as shown in Figure 2. Expert Informed knowledge \(EIK(\hat{w})\) of keyword \(\hat{w}\) is represented as a vector of four scores namely Keyword Centrality, Betweenness Centrality, Degree Centrality and Prevalence as follows.
\[EIK(\hat{w})=[KC(\hat{w}),\ BC(\hat{w}),\ DC(\hat{w}),\ PREV(\hat{w})]\]
The centrality measures, Betweenness Centrality, Degree Centrality and Prevalence, are known to be effective for analysing topical keyword impact and Brand impact (Bashar et al., 2022)(Fronzetti Colladon, 2018). Whereas, Keyword Centrality has similar properties of well-known proximity search (Tao and Zhai, 2007) and Page Rank measure (Wha, a). We conjecture that the discerning process of information seekers for deciding on relevant documents can be included in the machine learning model by capturing and representing the relative interaction importance of keywords for observing how keywords appeared in historical documents.
### Context Text
In addition to including Expert Informed Knowledge, it is critical that we add data-led information to the deep neural network model to learn the underlying data distribution. Such data context helps machine learning models to learn patterns that are not easy to model mathematically. Context text is a document instance consisting of a sequence of \(m\) words or features, i.e. \(\mathbf{x}=\langle x_{1},\ldots x_{m}\rangle\). Word embedding maps each word \(x\) to a \(n\) dimensional vector of real numbers. We use word embedding to represent each word \(x\in\mathbf{x}\) in an \(n\)-dimensional word vector \(w\in\mathbb{R}^{n}\). An instance \(\mathbf{x}\) with \(m\) words is represented as an Embedded Text matrix \(\mathbf{x}\in\mathbb{R}^{m\times n}\) as shown in Figure 2. Let us assume that we have a collection of such instances labelled as relevant or irrelevant by domain experts or information seekers.
### Relevant Document Classification
Knowledge-driven inputs (i.e the Expert Informed Knowledge matrix) and data-driven inputs (i.e. the Embedded Text matrix) are the inputs to the Convolutional Neural Network (CNN) model. We propose to use the CNN model to deal with the OCRed historical documents as CNN is known
to be suitable for noisy text data that can detect location invariant patterns and sub patterns (Bashar et al., 2018). Besides, the way convolution filters operate is analogous to searching in the local proximity of a network, which makes the CNN model an ideal model for combining Expert Informed knowledge with context knowledge.
The convolution operation is applied to both the Expert Informed Knowledge matrix and Embedded Text matrix with one stride. Applying convolution on the Expert Informed Knowledge matrix facilitates the finding of patterns and interactions among the centrality scores. This allows emphasising the importance of keywords in the document considering all measures where each of them is measured in isolation. Convolution applied on the Embedded Text matrix is supposed to find patterns analogous to \(n\)Grams emphasising patterns in the data-driven inputs (Bashar et al., 2018). Each convolution operation applies a filter \(\mathbf{F}_{i}\in\mathbb{R}^{h\times n}\) of size \(h\), where \(n\) is the dimension of embedding or word vector. Empirically, based on the accuracy improvement in ten-fold cross-validation experiments, we used 128 filters for \(h=3\), 256 filters for \(h=4\) and 512 filters for \(h=5\) on the Embedded Text matrix. We used 512 filters for \(h=3\) on the Expert Informed Knowledge matrix.
Convolution is a function \(\mathcal{C}(\mathbf{F}_{i},\mathbf{M})=R(\mathbf{F}_{i}\cdot\mathbf{M}_{k:k+h- 1})\), where \(\mathbf{M}_{k:k+h-1}\) is the \(k\)th vertical slice of a matrix from position \(k\) to \(k+h-1\), \(\mathbf{F}_{i}\) is the given filter and \(R\) is a ReLU function. Function \(\mathcal{C}(\mathbf{F}_{i},\mathbf{M})\) produces a feature \(c_{k}\) for each slice \(k\), resulting in \(m-h+1\) features.
We apply the max-pooling operation over these features and take the maximum value, i.e. \(\hat{\mathcal{C}}_{i}=\max\mathcal{C}(\mathbf{F}_{i},\mathbf{M})\). Max-pooling is carried out to capture the most important feature of each filter. As there are a total of 1408 filters ((128+256+512)+512) in the proposed model, the 1408 most important features are learned from the convolution layer.
These features are passed to a fully connected hidden layer with 256 perceptrons that use the ReLU activation function. This fully connected hidden layer allows learning the complex non-linear interactions between the features from the convolution layer and generates the 256 higher-level new features to learn to distinguish between relevant and non-relevant documents. Finally, these 256 higher-level features are passed to the output layer with a single perceptron that uses the sigmoid activation function. The perceptron in this layer generates the probability of a document being relevant.
We randomly dropout a proportion of units from each layer except the output layer by setting them to zero. This is done to prevent co-adaptation of units in a layer and to reduce overfitting (Hinton et al., 2012). We empirically dropout 50% units from the input layer, the filters of size 3 and the fully connected hidden layer. We dropout only 20% units from the filters of sizes 4 and 5 and the filter used on the Expert Informed Knowledge matrix.
## Empirical Evaluation
Extensive experiments are conducted to evaluate the accuracy of the proposed method for relevant document detection. We used six standard classification evaluation measures (Bashar et al., 2020): Accuracy, Precision, Recall, F1 Score, Cohen Kappa Score (CK Score) and Area Under Curve (AUC). A description of evaluation measures is given in Appendix A.
### Data Collection
We used two data collections R3Trove and Reuters Corpus Volume I (RCV1) in our experiments. R3Trove has one topic and RCV1 has 50 topics used in the experiments. Each topic can be seen as an information-seeking need of practitioners/researchers for conducting analyses on the retrieved documents based on the search terms contained within the topic. The following subsections give a brief description of these datasets.
#### R3Trove
National Library of Australia (NLA) has collected and digitised relevant historical documents, images, and other cultural artefacts on a large-scale (Kutty et al., 2020). NLA contains over 6 billion digital items on various topics by aggregating its own and other digital collections in partnership with other Australian state libraries, museums art galleries, media, government and community organisations (Wha b). The goal is to advance public knowledge of history and heritage by providing free access to these collections to users who are interested in curating particular aspects of Australia's history, heritage, and culture. NLA has developed a faceted search engine named Trove (Wha b) to discover these items of interest.
The most remarkable collection indexed in Trove is the Australian Newspaper Service. By using optical character recognition (OCR), NLA has digitised Australia's surviving newspapers starting from the first years of colonisation (Kutty et al., 2020). We collected news articles relevant to Indigenous Human Remains from Trove in three iterations using the following four steps.
Figure 4: KC, BC and DC in three toy co-occurrence networks. Each node represents a word and each edge represents the co-occurrence of connected words. Blue, orange and green nodes are three keywords.
* STEP 1: Using the keywords in Table 1 with OR operator, collect news articles from Trove utilising its associated Application Programming Interface (API).
* STEP 2: Due to the generality of keywords, STEP 1 can return lots of news articles not relevant to Indigenous Human Remains. It is not possible to manually go through each document and assess them. We use an LSTM-based classification model, described in Section, to filter out non-relevant documents.
* STEP 3: Domain experts manually label the remaining (and manageable) news articles whether they are relevant to Indigenous Human Remains. Then the labels are cross checked by a coordinator for correctness.
* STEP 4: Retrain the LSTM model using the labelled news articles collected so far. Go to STEP 2 for the next iteration.
The collected labelled news articles, following this multi-step process, are called R3Trove. Both news title and news text are used as the content of an article, and each article constitutes a document. R3Trove has a total of 1432 documents of which 844 documents are labelled relevant and 588 documents are labelled non-relevant. Out of the R3Trove collection, 90% is used for training and validation of the proposed INN model and 10% is used for testing.
The LSTM is used in this labelling process due to its flexibility in accepting variable-length documents, given that documents in R3Trove come in various lengths. The use of an LSTM model (but not a CNN model) also ensures that the labelling process does not introduce any bias and results in favouring the proposed INN model that uses a CNN component.
#### 4.2.2 Reuters Corpus Volume I
Reuters Corpus Volume I (RCV1) is a standard data collection from the TREC-10/2001 filtering track (Robertson and Soboroff 2002) provided by Reuters Ltd. It has English language news stories that cover a large spectrum of topics and contemporary information written by journalists. Both the story title and text are used as the content of a story, and each story constitutes a document.
RCV1 has 100 topics. Each topic has a corresponding set of documents and a manual specification of information needs written by linguists. The documents in the first 50 topics are labelled by domain experts as either relevant to the topic specification or non-relevant. For each topic, domain experts divided the documents into a training set and a testing set. Buckley and Voorhees (Buckley and Voorhees 2000) showed that the first 50 topics are stable and sufficient for maintaining the accuracy of the evaluation measures. Therefore, the first 50 topics are used in this research. The set of words in each topic name (e.g. Child Custody Cases) found in the topic specification is used as the keywords set of the topic. There are a total of 2,704 documents in the training set with a minimum of 13 and a maximum of 198 documents in each topic. On the other hand, there is a total of 18,901 documents in the test set with a minimum of 199 and a maximum of 597 documents in each topic.
### Baseline Models
We have implemented 8 baseline models to compare the performance of the proposed INN model. For all neural network-based models, hyperparameters are manually tuned based on cross-validation.
* Feedforward Deep Neural Network (DNN) (Glorot and Bengio 2010): It has five hidden layers, each layer containing eighty units, 50% dropout applied to the input layer and the first two hidden layers, softmax activation and 0.04 learning rate. To keep this model basic, term frequency vector of Text is fed as input to this model.
* Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber 1997): Input to this model is Embedded Text Matrix. It has 100 units, 50% dropout, binary cross-entropy loss function, Adam optimiser and sigmoid activation.
* Convolutional Neural Network (CNN): This model uses only the Embedded Text Matrix part of INN. In this CNN model, the first layer is a convolution layer with 1024 filters followed by Max Pooling, the second layer is a fully connected layer with 256 perceptrons followed by ReLU activation function, the final layer is a classification layer with single perceptron followed by Sigmoid activation function. The rest of the hyperparameters of this CNN model is set as in (Bashar et al. 2018).
* RoBERTa (Liu et al. 2019): Input to this model is Embedded Text Matrix, but it uses the embedding defined by the state-of-the-art language model RoBERTa (Liu et al. 2019). RoBERTa is a retrained BERT with improved training methodology, more data and compute power. RoBERTa removes the _Next Sentence Prediction_ task from BERT's pretraining and introduces dynamic masking so that the masked token changes during each training epoch. RoBERTa used a total of 160 GB text data for pretraining in comparison to 13 GB text data used for training BERT\({}_{Large}\) and 126 GB text data for building XLNet\({}_{Large}\). RoBERTa used 1024 V100 Tesla GPUs running for a day during pretraining. As a result, RoBERTa is known to outperform both BERT\({}_{Large}\)(Devin et al. 2018) and XLNet\({}_{Large}\)(Yang et al. 2019).
* Expert Informed Knowledge Only Model (IKOM): This model is a variant of the proposed INN model. It uses only the Expert Informed Knowledge part of INN model and excludes the Embedded Text part as shown in Figure 2. In other words, it does not take the Text input, it takes only keywords as input.
* Non NN models including Random Forest (RF) (Liaw et al. 2002), k-Nearest Neighbours (kNN) (Weinberger and Saul 2009) and Ridge Classifier (RC) (Hoerl and Kennard 1970). Hyperparameters of all these models are automatically tuned using ten-fold cross-validation and GridSearch using sklearn library. The term frequency vector of Text is fed as input to these models.
### Experimental Results
#### Results on R3Move
Experimental results in Table 2 show that the proposed INN model performs better than all the baseline models. It achieves the best Accuracy (0.924), Precision (0.910), \(F_{1}\) Score (0.943), CK Score (0.828) and AUC (0.901). The best Recall is achieved by the DNN model. However, its Precision and \(F_{1}\) Score are the lowest of all models. When investigated, we observed that DNN predicts all documents as relevant, while many of them are not relevant, reflected by a poor AUC value. This is further emphasised by the CK Score of zero obtained from DNN, which means there are no agreements between ground truth and the prediction made by DNN.
The second-best performance is obtained by RF. Three models RoBERTa, CNN and RC achieve similar performance and they provide the third-best performance. Even though RoBERTa is well known for its performance in text classification, its performance on R3Trove is not the best. The R3TRove dataset represents old historical documents that have a very different distribution than the data used to pre-train RoBERTa for transfer learning. Besides, RoBERTa is a very large neural network model. Overfitting of large neural network models, when trained with a small dataset, is usually compensated by their transfer learning capacity. The inadequacy of transfer learning of RoBERTa in R3Trove can be attributed to its poor performance.
It is interesting to note that IKOM (knowledge information only) does not provide satisfactory performance as it lacks the context information based on the data distribution required for the training of the neural network. Similarly, CNN (context information only) does not provide a good performance due to the small training set. However, the proposed INN model (combining both knowledge and context information) achieves the best performance. This confirms the idea of informed machine learning, which combines prior knowledge with machine learning, can address small data problems for machine learning and lack of context problems for prior knowledge.
#### Ablation study
We conducted an ablation study to see the importance of each of the centrality measures in making the Expert Informed Knowledge for machine learning. Results of this ablation study in Table 3 show how important each of the centrality measures is in contributing to Expert Informed Knowledge. As described in Section 3, IKOM uses all four centrality measures KC, BC, DC and PREV. NPREV model assigns 0 to PREV value in IKOM, NDC assigns 0 to DC, NBC assigns 0 to BC and NRC assigns 0 to KC. Results in Table 3 show that IKOM is marginally affected by not including PREV and BC measures, but the performance is significantly affected by not including KC and DC. These results also show that each of the centrality measures can improve performance and the best performance is achieved when all four centrality measures are used together in INN as shown in Table 2.
#### Training Data Size
We also conducted experiments to see the impact of training dataset size. The experimental result in Figure 5 shows that INN can provide reasonable performance with a very small amount of data. The model can work with as low as 1% (12 documents) of the training data. With this amount of data, the model gives us Accuracy 0.660, Precision 0.655, Recall 1.00, F\({}_{1}\) Score 0.79, CK Score 0.05 and AUC 0.520. At this point CK score is low but with only 10% (128 documents) of data CK score reaches to 0.56 and other measures improve significantly. With 30% (386 documents) the model provides significantly high performance. At this point, it reaches Accuracy 0.910, Precision 0.917, Recall 0.946, F1 Measure 0.931, CK Score 0.800, AUC 0.895. After this, there is some fluctuation, but starting from 80% (1030 documents) data, the performance steadily increases and provides maximum performance with 100% (1288 documents) data. At this point, we get Accuracy 0.924, Precision 0.910, Recall 0.979, F\({}_{1}\) Score 0.943, CK Score 0.828, AUC 0.901.
Results on RCV1Figure 6 shows the pairwise comparison of INN and each baseline model over the 50 topics. The number of dots in the blue shade triangle indicates the number of topics where INN performs better than a baseline model for the most important measure of classification namely F\({}_{1}\) measure. Results show that INN outperforms baseline models in the majority of topics as indicated by more dots in the blue shade triangle. The well-known RoBERTa model did not also perform well in this dataset. Besides the small number of documents, this might be due to the document length of RCV1 dataset. RoBERTa can only consider 500 words in a document, the rest of the words in a document are ignored when fed into the model. The maximum document lengths in RCV1 are 7960 and 8119 for the training and testing sets respectively. Average document lengths in RCV1 are 421 and 520 for training and testing set respectively. It is interesting to note that on this dataset, non neural network models RC, RF and kNN perform significantly better than the neural network models except INN. This might be because neural network models are overfitting as many topics in RCV1 have a very small number of training documents, a minimum of 13 documents. We observed that for some topics, neural network models other than INN failed to detect any relevant documents. By introducing bias through the utilisation of L2 norm regularisation RC reduces overfitting and provides the second-best results. By introducing bias through the utilisation of Expert Informed Knowledge, INN can detect relevant documents even when the number of training documents is very small and provides the best results. When the number of documents increases, the detection performance improves.
Figure 7 shows the relative ranking of INN with baseline models by computing the cumulative ranking for all the models over the 50 topics. It is a method-wise accumulation of results measured on an evaluation criterion spanned across all topics. To compute the cumulative ranking, first, we rank the performance of each individual method for each topic based on a criterion and then sum up their ranks. Figure 7 shows that INN ranks higher than all baseline models for F\({}_{1}\) Score, Precision, Recall and CK Score. INN achieves the second-best Accuracy and AUC. The best Accuracy and AUC are achieved by RC. However, RC has a significantly lower Recall value when compared with INN. This means RC misses many relevant documents. Also, Precision, F\({}_{1}\) Score and CK Score of RC are lower than those of INN. A similar pattern is observed in average performance shown in Table 4.
Table 4 shows the average performance of all methods on 50 topics of the RCV1 dataset. Results show that the proposed model INN performs the best. It gives the best Precision (0.430), Recall (0.350) and CK Score (0.236). INN achieves the second-best Accuracy (0.635) and AUC (0.516). The best Accuracy (0.742) and AUC (0.548) are achieved by RC. However, its Recall (0.299) value is significantly lower than that of INN. This means RC misses many relevant documents to identify. Also, the Precision (0.384), F\({}_{1}\) Score (0.308) and CK Score (0.191) of RC are lower than INN.
Figure 8 shows an analysis of the INN model performance against the dataset size (i.e. the number of training documents per topic). This figure reveals that INN always performs better than at least one of the compared algorithms on any of the 50 topics. It has performed the best on 24 topics (i.e. outperformed all the 8 baseline models). The presence of the majority of dots (43 topics) appearing on the right side (6 or more baseline outperformed by INN) indicates INN outperforms the majority of the baseline models. Additionally, with the majority of the points (46) appearing on the upper part (trained with 25 or more documents) of this figure indicates that the performance of INN is better on relatively larger datasets than the baseline models. All these results indicate that Expert Informed Knowledge can generalise the machine learning model while training data can provide the context and specificity for learning. By combining data- and knowledge-driven approaches we can achieve the best of both worlds.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline & Accuracy & Precision & Recall & F1 Score & CK Score & AUC & Mean Score \\ \hline INN & **0.924** & **0.910** & 0.978 & **0.943** & **0.828** & **0.901** & **0.914** \\ IKOM & 0.792 & 0.812 & 0.882 & 0.845 & 0.528 & 0.755 & 0.769 \\ CNN & 0.868 & 0.870 & 0.935 & 0.902 & 0.702 & 0.840 & 0.853 \\ LSTM & 0.819 & 0.838 & 0.892 & 0.865 & 0.595 & 0.789 & 0.800 \\ DNN & 0.646 & 0.646 & **1.000** & 0.785 & 0.000 & 0.500 & 0.596 \\ RoBERTa & 0.875 & 0.887 & 0.925 & 0.905 & 0.722 & 0.855 & 0.861 \\ kNN & 0.861 & 0.869 & 0.925 & 0.896 & 0.688 & 0.835 & 0.846 \\ RC & 0.875 & 0.887 & 0.925 & 0.905 & 0.722 & 0.855 & 0.861 \\ RF & 0.889 & 0.889 & 0.946 & 0.917 & 0.750 & 0.865 & 0.876 \\ \hline \end{tabular}
\end{table}
Table 2: Experimental Results of INN and baseline models on the R3Trove data
\begin{table}
\begin{tabular}{l c c c c c c} \hline & Accuracy & Precision & Recall & F1 Score & CK Score & AUC \\ \hline IKOM & 0.792 & 0.812 & 0.882 & 0.845 & 0.528 & 0.755 \\ NPREV & 0.778 & 0.796 & 0.882 & 0.837 & 0.492 & 0.735 \\ NBC & 0.750 & 0.766 & 0.882 & 0.820 & 0.417 & 0.696 \\ NDC & 0.729 & 0.793 & 0.785 & 0.789 & 0.411 & 0.706 \\ NKC & 0.688 & 0.735 & 0.806 & 0.769 & 0.289 & 0.639 \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study showing the significance of each centrality measure as Expert Informed Knowledge used in IKOM: R3Trove Dataset
## Conclusion
This paper proposes the use of deep learning, specifically an Informed Neural Network model, to enhance investigations into the theft, trade, and exchange of ancestral bodily remains of Australian and other First Nations peoples. The model employs expert informed knowledge and centrality measures to learn the distribution patterns in the data collection and is able to identify relevant documents with high accuracy, using significantly fewer labelled documents for training.
The proposed INN model was tested using both data provided by RRR researchers and a publicly available generic dataset. The results of the experiments showed that the INN model is generalizable to other datasets, despite being designed specifically for detecting relevant documents related to Indigenous Human Remains. In terms of performance, the INN model outperformed baseline models on both datasets. However, on the RCV1 dataset for a small number of topics, other models demonstrated slightly better performance than INN, particularly when the number of training documents was small. In the future, this
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline & Accuracy & Precision & Recall & F1 Score & CK Score & AUC & Mean Score \\ \hline INN & 0.635 & **0.430** & **0.350** & **0.330** & **0.236** & 0.516 & **0.416** \\ IKOM & 0.581 & 0.409 & 0.256 & 0.274 & 0.202 & 0.443 & 0.361 \\ CNN & 0.140 & 0.125 & 0.141 & 0.111 & 0.067 & 0.139 & 0.120 \\ LSTM & 0.342 & 0.192 & 0.116 & 0.119 & 0.036 & 0.258 & 0.177 \\ DNN & 0.146 & 0.077 & 0.111 & 0.075 & 0.002 & 0.120 & 0.088 \\ RoBERTa & 0.272 & 0.195 & 0.159 & 0.153 & 0.087 & 0.225 & 0.182 \\ kNN & 0.548 & 0.374 & 0.189 & 0.217 & 0.135 & 0.400 & 0.311 \\ RC & **0.742** & 0.381 & 0.299 & 0.308 & 0.191 & **0.548** & 0.412 \\ RF & 0.197 & 0.182 & 0.119 & 0.128 & 0.082 & 0.168 & 0.146 \\ \hline \end{tabular}
\end{table}
Table 4: RCV1 Results (Average on 50 Topics.)
Figure 6: Pairwise performance comparison of INN against baseline models. Each dot indicates a topic in RCV1. A dot in the blue shade triangle indicates INN performs better than its competitor and dots in the white triangle indicate otherwise, with dots on the diagonal indicating the equal performance of both models.
Figure 7: The cumulative ranking obtained by all methods by individually ranking each model per topic per measure
research will further investigate these topics and modify the INN model accordingly. In addition, this study used only centrality measures as Expert Informed Knowledge. It may be useful to also investigate other types of prior knowledge, such as knowledge graphs, in order to improve the INN model. The keyword centrality measure proposed in this paper could also be used to model important influential nodes in social network analysis, which is an area that deserves further investigation.
## Acknowledgements
This class file was developed by Sunrise Setting Ltd, Brixham, Devon, UK.
Website: [http://www.sunrise-setting.co.uk](http://www.sunrise-setting.co.uk)
|
2301.05301 | A Minimal Formulation of Session Types | Session types are a type-based approach to the verification of
message-passing programs. They specify communication structures essential for
program correctness; a session type says what and when should be exchanged
through a channel. Central to session-typed languages are sequencing constructs
in types and processes that explicitly specify the order of actions in a
protocol.
In this paper we study session types without sequencing. The resulting
framework of minimal session types is arguably the simplest form of session
types one could conceive. In the context of a core process calculus with
sessions and higher-order concurrency (abstraction-passing), we establish two
main technical results. First, we prove that every process $P$ typable with
standard session types can be compiled down into a process $\mathcal{D}(P)$
typable with minimal session types. Second, we prove that $P$ and
$\mathcal{D}(P)$ are behaviorally equivalent. These results indicate that
having sequencing constructs in processes and session types is convenient but
redundant: only sequentiality in processes is truly indispensable, as it can
correctly codify sequentiality in types.
Our developments draw inspiration from work by Parrow on behavior-preserving
decompositions of untyped processes. By casting Parrow's results in the realm
of typed processes, our developments reveal a conceptually simple formulation
of session types and a principled avenue to the integration of session types
into programming languages without sequencing in types. | Alen ArslanagiÄ, Jorge A. Pérez, Dan Frumin | 2023-01-12T21:25:28Z | http://arxiv.org/abs/2301.05301v1 | # A Minimal Formulation of Session Types
###### Abstract
Session types are a type-based approach to the verification of message-passing programs. They specify communication structures essential for program correctness; a session type says what and when should be exchanged through a channel. Central to session-typed languages are _sequencing_ constructs in types and processes that explicitly specify the order of actions in a protocol.
In this paper we study session types without sequencing. The resulting framework of _minimal_ session types is arguably the simplest form of session types one could conceive. In the context of a core process calculus with sessions and higher-order concurrency (abstraction-passing), we establish two main technical results. First, we prove that every process \(P\) typable with standard session types can be compiled down into a process \(\mathcal{D}(P)\) typable with minimal session types. Second, we prove that \(P\) and \(\mathcal{D}(P)\) are behaviorally equivalent. These results indicate that having sequencing constructs in processes and session types is convenient but _redundant_: only sequentiality in processes is truly indispensable, as it can correctly codify sequentiality in types.
Our developments draw inspiration from work by Parrow on behavior-preserving decompositions of untyped processes. By casting Parrow's results in the realm of typed processes, our developments reveal a conceptually simple formulation of session types and a principled avenue to the integration of session types into programming languages without sequencing in types.
## 1 Introduction
Session types are a type-based approach to the verification of message-passing programs. A session type specifies what and when should be exchanged through a channel. This makes session types a useful tool to enforce safety and liveness properties related to communication correctness. Originated in the realm of concurrency theory, session types have had a significant impact on the foundations of programming languages [14], but also on their practice [1]. Our goal in this work is to understand to what extent session types can admit simpler, more fundamental formulations. This foundational question has concrete practical ramifications, as we discuss next.
In session-typed languages, _sequencing_ constructs in types and processes specify the intended structure of message-passing protocols. For example, in the session type \(S=?(\mathsf{int});?(\mathsf{int});!(\mathsf{bool});\mathsf{end}\), sequencing (denoted ';') allows us to specify a protocol for a channel that _first_ receives (?) two integers, _then_ sends (!) a boolean, and _finally_ ends. As such, \(S\) could type a service that checks for integer equality. Sequencing in types goes hand-in-hand with sequencing in processes, which is specified using prefix constructs (denoted '\(\cdot\)'). The \(\pi\)-calculus process \(P=s?(x_{1}).s?(x_{2}).s!\langle x_{1}=x_{2}\rangle.\mathbf{0}\) is an implementation of the equality service: it _first_ expects two values on name \(s\), _then_ outputs a boolean on \(s\), and _finally_ stops. Thus, name \(s\) in \(P\) conforms to the session type \(S\). Session types can also specify sequencing within labeled choices and recursion; these typed constructs are also in close match with their respective process expressions.
Session types have been originally developed as a typing discipline for \(\pi\)-calculus for the analysis of message-passing protocols between exactly two parties [12]. Since then session types have been extended in many directions. We find, for instance, multiparty session types [13], and extensions
with dependent types, assertions, exceptions, and time (cf. [8, 14] for surveys). All these extensions seek to address natural research questions on the expressivity and applicability of session types theories.
Here we address a different, if opposite, question: _is there a minimal formulation of session types?_ This is an appealing question from a theoretical perspective, but seems particularly relevant to the practice of session types: identifying the "core" of session types could enable their integration in languages whose type systems do not have certain advanced constructs present in session types, such as sequencing. For instance, the Go programming language offers primitive support for message-passing concurrency; it comes with a static verification mechanism which can only enforce that messages exchanged along channels correspond with their declared payload types--it cannot ensure essential correctness properties associated with the ordering of messages and the structure of the protocols. This observation has motivated the development of advanced static verification tools based on session types for Go programs; see, e.g., [20, 19].
This paper identifies and studies the properties of an elementary formulation of session types, which we call _minimal session types_. Minimal session types are session types without sequencing. That is, in session types such as '\(!(U)\);\(S\)' and '\(?(U)\);\(S\)', we stipulate that \(S\) can only correspond to end, the type of the terminated protocol.
Adopting minimal session types entails dispensing with sequencing, which is arguably the most distinctive feature of session types. While this may appear as a far too drastic restriction, it turns out that it is not: we show that for every process \(P\) typable under standard (non minimal) session types, there is a _decomposition_ of \(P\), denoted \(\mathcal{D}(P)\), a process that codifies the sequencing information given by the session types (protocols) of \(P\) using additional synchronizations, extracted from its protocols. Figure 1 illustrates the key idea of the decomposition using the process \(P\) and session type \(S\) motivated above. Because \(P\) contains three actions in sequence (as stipulated by \(S\)), its decomposition \(\mathcal{D}(P)\) consists of three processes in parallel--each of them implementing one action of \(P\)--as well as of mechanisms for orchestrating these parallel processes: the synchronizations on names \(c_{2},\ldots,c_{5}\) ensure that the sequencing in \(P\) is preserved and that received names are properly propagated. These three parallel processes are typable with minimal session types (in the figure, they are given below each process), which are obtained by suitably "slicing" \(S\).
Our main finding is that \(\mathcal{D}(P)\) satisfies two important properties: first, it is well-typed using minimal session types (_static correctness_); second, it is behaviorally equivalent to \(P\) (_dynamic correctness_). These properties ensure that having sequencing in both types and processes is convenient but _redundant_: only sequencing at the level of processes is truly indispensable.
Figure 1: The process decomposition, illustrated. Arrows in magenta indicate synchronizations orchestrated by the decomposition \(\mathcal{D}(P)\).
The definition of \(\mathcal{D}(P)\) is interesting on its own, as it draws inspiration from a known result by Parrow [21], who showed that any _untyped_\(\pi\)-calculus process can be decomposed as a collection of _trio processes_, i.e., processes with at most three nested prefixes [21].
The question of how to relate session types with other type systems has attracted interest in the past. Session types have been encoded into, for instance, generic types [9] and linear types [7, 5, 6]. As such, these prior studies concern the _relative expressiveness_ of session types, where the expressivity of session types stands with respect to that of some other type system. In sharp contrast, we study the _absolute expressiveness_ of session types: how session types can be explained in terms of themselves. To our knowledge, this is the first study of its kind.
Session types have been developed on top of different process languages (mainly, dialects of the \(\pi\)-calculus), and so choosing the target language for minimal session types is an important decision in our developments. In this paper, our target language is HO, the core process calculus for session-based concurrency studied by Kouzapas et al. [17, 18]. HO is a very small language, which only supports passing of abstractions (i.e., functions from names to processes) and lacks name-passing and recursion. Nonetheless, HO is very expressive, because both features can be encoded in HO in a fully abstract way. Moreover, HO has a well-developed theory of behavioral equivalences [18]. The combination of minimal number of features and expressivity makes HO an excellent candidate for studying a minimal formulation of session types. Indeed, as we will see, several aspects of our decomposition take advantage of the higher-order nature of HO. Being a higher-order language, HO is very different from the (untyped, first-order) \(\pi\)-calculus considered by Parrow [21]. Therefore, our technical results arise in a context very different from Parrow's.
Contributions & Outline.In summary, in this paper we present the following contributions:
1. We identify the class of _minimal session types_ (MST) as a simple fragment of standard session types for HO without sequencing that retains its absolute expressiveness (Definition 3.1).
2. We show how to decompose standard session types into minimal session types, and how to decompose processes typable with standard session types into processes typable with minimal session types. This is a result of static correctness (Theorem 3.1).
3. We show that the decomposition of a process is behaviorally equivalent to the original process. This is a result of _dynamic correctness_, formalized in terms of _MST bisimulations_, a typed behavioral equivalence that we introduce here (Theorem 4.1).
4. We develop optimizations and extensions of our decomposition that bear witness to its robustness.
The rest of the paper is organized as follows. In Section 2 we recall the preliminaries on the session type system for HO, which is the core process calculus for session-based concurrency on which we base our developments. In Section 3 we present _minimal session types_, and the decomposition of well-typed HO processes into minimal session types processes, accompanied by explanations and examples. In Section 4 we show the correctness of the decomposition, by establishing an _MST bisimulation_ between an HO process and its decomposition. In Section 5 we examine two optimizations of the decomposition that are enabled by the higher-order nature of our setting. In Section 6 we discuss extensions of our approach to consider constructs for branching and selection. Finally, in Section 7 we elaborate further on related work and in Section 8 we present some closing remarks. The appendix contains omitted definitions and proofs.
Differences with the conference version.An earlier version of this paper was presented at ECOOP 2019 [3]. The current paper revises the conference version, includes further examples, and incorporates a major addition: Section 4 on dynamic correctness, including the notion of an MST bisimulation and the constructed bisimulation relation, is completely new to this presentation.
Colors.Throughout the paper we use different colors (such as pink and green) to improve readability. However, the usage of colors is not indispensable, and the paper can be followed in black-and-white.
## 2 The Source Language
We start by recalling the syntax, semantics, and type system for HO, the higher-order process calculus for session-based concurrency studied by Kouzapas et al. [17, 18]. Our presentation of HO follows the aforementioned papers, which concern definitions and results for HO\(\pi\), the super-calculus of HO with name-passing, abstraction-passing, and recursion.
HO is arguably the simplest language for session types: it supports passing of abstractions (functions from names to processes) but does not support name-passing nor process recursion. Still, HO is very expressive: it can encode name-passing, recursion, and polyadic communication via type-preserving encodings that are fully-abstract with respect to contextual equivalence [17].
### Syntax and Semantics
**Definition 2.1** (HO processes).: The syntax of names, variables, values, and HO processes is defined as follows:
\[n,m ::=\;a,b\;\mid\;s,\overline{s} u,w ::=\;n\;\mid\;x,y,z V,W ::=\;x,y,z\;\mid\;\lambda x.\,P\] \[P,Q ::=\;u!(V).P\;\mid\;u?(x).P\;\mid\;V\,u\;\mid\;P\mid Q\;\mid\;(\nu \,n)\,P\;\mid\;\mathbf{0}\]
We use \(a,b,c,\dots\) to range over _shared names_, and \(s,\overline{s},\dots\) to range over _session names_. Shared names are used for unrestricted, non-deterministic interactions; session names are used for linear, deterministic interactions. We write \(n,m\) to denote session or shared names, and assume that the sets of session and shared names are disjoint. The _dual_ of a name \(n\) is denoted \(\overline{n}\); we define \(\overline{\overline{s}}=s\) and \(\overline{a}=a\), i.e., duality is only relevant for session names. Variables are denoted with \(x,y,z,\dots\). An abstraction \(\lambda x.\,P\) is a process \(P\) with parameter \(x\). _Values_\(V,W,\dots\) include variables and abstractions, but not names.
Process \(V\,u\) is the application which substitutes name \(u\) on abstraction \(V\). Constructs for inaction \(\mathbf{0}\), parallel composition \(P_{1}\mid P_{2}\), and name restriction \((\nu\,n)\,P\) are standard. HO lacks name-passing and recursion, but they are expressible in the language (see Example 2.1 below).
To enhance readability, we often omit trailing \(\mathbf{0}\)'s, so we write, e.g., \(u!(V)\) instead of \(u!(V).\mathbf{0}\). Also, we write \(u!(\rangle.P\) and \(u?().P\) whenever the exchanged value is not relevant (cf. Remark 3.2).
Restriction for shared names \((\nu\,a)\,P\) is as usual; session name restriction \((\nu\,s)\,P\) simultaneously binds session names \(s\) and \(\overline{s}\) in \(P\). Functions \(\mathtt{fv}(P)\), \(\mathtt{fn}(P)\), and \(\mathtt{fs}(P)\) denote, respectively, the sets of free variables, names, and session names in \(P\), and are defined as expected. If \(\mathtt{fv}(P)=\emptyset\), we call \(P\)_closed_. We write \(P\{\!u/y\!\}\) (resp., \(P\{\!V/y\!\}\)) for the capture-avoiding substitution of name \(u\) (resp., value \(V\)) for \(y\) in process \(P\). We identify processes up to consistent renaming of bound names, writing \(\equiv_{\alpha}\) for this congruence. We shall rely on Barendregt's variable convention, under which free and bound names are different in every mathematical context.
The operational semantics of HO is defined in terms of a _reduction relation_, denoted \(\longrightarrow\). Reduction is closed under _structural congruence_, denoted \(\equiv\), which is defined as the smallest congruence on processes such that:
\[P\mid\mathbf{0}\equiv P P_{1}\mid P_{2}\equiv P_{2}\mid P_{1} \quad P_{1}\mid(P_{2}\mid P_{3})\equiv(P_{1}\mid P_{2})\mid P_{3}\quad(\nu\,n )\,\mathbf{0}\equiv\mathbf{0}\] \[P\mid(\nu\,n)\,Q\equiv(\nu\,n)\,(P\mid Q)\,\,(n\notin\mathtt{fn} (P))\quad P\equiv Q\text{ if }P\equiv_{\alpha}Q\]
We assume the expected extension of \(\equiv\) to values \(V\). The reduction relation expresses the behavior
of processes; it is defined as follows:
\[(\lambda x.\,P)\,u \longrightarrow P\{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
In HO, polyadicity appears in session synchronizations and applications, but not in synchronizations on shared names. This entails having the following reduction rules:
\[(\lambda\widetilde{x}.\,P)\,\widetilde{u} \longrightarrow P\{\widetilde{u}/\widetilde{x}\}\] \[s!\langle\widetilde{V}\rangle.P\mid\overline{s}?(\widetilde{x}).Q \longrightarrow P\mid Q\{\widetilde{V}/\widetilde{x}\}\]
where the simultaneous substitutions \(P\{\widetilde{u}/\widetilde{x}\}\) and \(P\{\widetilde{V}/\widetilde{x}\}\) are as expected. This polyadic HO can be readily encoded into (monadic) HO[18]; for this reason, by a slight abuse of notation we will often write HO when we actually mean "polyadic HO".
We discuss two simple examples that illustrate how HO can implement mechanisms resembling servers and forms of partial instantiation; these mechanisms shall come in handy later, when defining the process decomposition in Section 3.
**Example 2.2** (A Server of a Kind).: Let \(S_{a}\) denote the process \(a?(x).(x\,r)\), which receives an abstraction on the shared name \(a\) and then applies it to \(r\). Consider the following process composition:
\[P =(\nu\,r)\,(\nu\,a)\,\big{(}a!\langle V\rangle.\mathbf{0}\mid a! \langle W\rangle.\mathbf{0}\mid S_{a}\mid\overline{r}?(x_{1}).\overline{r}?(x _{2}).Q\big{)}\] \[V =\lambda y.\,(y!\langle V^{\prime}\rangle.S_{a}\{y\!/r\})\] \[W =\lambda z.\,(z!\langle W^{\prime}\rangle.S_{a}\{z/r\})\]
where \(V^{\prime}\) and \(W^{\prime}\) are some unspecified shared values. In \(P\), process \(S_{a}\) operates as a server that provides \(r\) upon an invocation on \(a\). Dually, the outputs on \(a\) are requests to this server. One possible reduction sequence for \(P\) is the following:
\[P \longrightarrow(\nu\,r)\,(\nu\,a)\,\big{(}a!\langle W\rangle. \mathbf{0}\mid V\,r\mid\overline{r}?(x_{1}).\overline{r}?(x_{2}).Q\big{)}\] \[\longrightarrow(\nu\,r)\,(\nu\,a)\,\big{(}a!\langle W\rangle. \mathbf{0}\mid r!\langle V^{\prime}\rangle.S_{a}\mid\overline{r}?(x_{1}). \overline{r}?(x_{2}).Q\big{)}\] \[\longrightarrow(\nu\,r)\,(\nu\,a)\,\big{(}a!\langle W\rangle. \mathbf{0}\mid S_{a}\mid\overline{r}?(x_{2}).Q\{V^{\prime}/x_{1}\}\big{)}=P^{\prime}\]
In this reduction sequence, the value \(V\) in the first request is instantiated with the name \(r\) by consuming a copy of \(S_{a}\) available in \(P\). However, a copy of the server \(S_{a}\) is restored through the value \(V\), after an communication on \(r\). This way, in \(P^{\prime}\) the exchange of \(W^{\prime}\) on \(r\) can take place:
\[P^{\prime}\longrightarrow^{*}(\nu\,r)\,(\nu\,a)\,\big{(}S_{a}\mid Q\{V^{ \prime}/x_{1}\}\{W^{\prime}/x_{2}\}\big{)}\]
**Example 2.3** (Partial Instantiation).: Let \(S_{a}\) and \(S_{b}\) be servers as defined in the previous example:
\[S_{a}=a?(x).(x\,r)\qquad\qquad S_{b}=b?(x).(x\,v)\]
Further, let \(R\) be a process in which requests to \(S_{a}\) and \(S_{b}\) are nested within abstractions:
\[R=a!\big{\langle}\lambda y.\,b!\langle\lambda z.\,V\,(y,z)\rangle\big{\rangle}\]
Notice how the polyadic application '\(V\,(y,z)\)' is enclosed in the innermost abstraction. Now consider the following composition:
\[P=(\nu\,a,b)\,R\mid S_{a}\mid S_{b}\]
The structure of \(R\) induces a form of partial instantiation for \(y,z\), implemented by combining synchronizations and \(\beta\)-reductions. To see this, let us inspect one possible reduction chain for \(P\):
\[P \longrightarrow(\nu\,b)\,\left(\lambda y.\,b!\langle\lambda z.\,V \,(y,z)\rangle\,r\right)\,\mid S_{b}\] \[\longrightarrow(\nu\,b)\,b!\langle\lambda z.\,V\,(r,z)\rangle\mid S _{b}=P^{\prime}\]
The first request of \(R\), aimed to obtain name \(r\), is realized by the first reduction, i.e., the communication with \(S_{a}\) on name \(a\): the result is the application of the top-level abstraction to \(r\). Subsequently, the application step substitutes \(y\) with \(r\). Hence, in \(P^{\prime}\), names in the nested application are only _partially instantiated_: at this point, we have '\(V\,(r,z)\)'.
Process \(P^{\prime}\) can then execute the same steps to instantiate \(z\) with name \(v\) by interacting with \(S_{b}\). After two reductions, we obtain the fully instantiated application \(V\,(r,v)\):
\[P^{\prime}\longrightarrow\lambda z.\,V\,(r,z)\,v\longrightarrow V\,(r,v)\]
### Session Types for \(\mathsf{HO}\)
We give essential definitions and properties for the session type system for \(\mathsf{HO}\), following [18].
**Definition 2.2** (Session Types for \(\mathsf{HO}\)).: Let us write \(\diamond\) to denote the process type. The syntax of value types \(U\), channel types \(C\), and session types \(S\) for \(\mathsf{HO}\) is defined as follows:
\[U ::=\,C\!\rightarrow\!\diamond\;\;|\;\;C\!\rightarrow\!\diamond\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;C \::=\;S\;\;|\;\;\langle U\rangle\] \[S ::=!\langle U\rangle;\!S \;|\;\;?(U);\!S\;|\;\;\mu\mathsf{t}.S\;|\;\;\mathsf{t}\;|\;\; \mathsf{end}\]
As we have seen, \(\mathsf{HO}\) only admits the exchange of abstractions; accordingly, value types include \(C\!\rightarrow\!\diamond\) and \(C\!\rightarrow\!\diamond\), which denote _shared_ and _linear_ higher-order types, respectively. Channel types include session types and the shared types \(\langle U\rangle\).
Session types follow the standard binary session type syntax [12], in which sequencing specifies communication structures. This way, the _output type_\(!(U);\!S\) describes a session in which first a value of type \(U\) is sent, and then the session proceeds as \(S\). Dually, the _input type_\(?(U);\!S\) describes a session in which first a value of type \(U\) is received, and then the session proceeds as \(S\). In examples, we often assume basic types (such as \(\mathsf{int}\), \(\mathsf{bool}\), \(\mathsf{str}\)) are exchanged in communications. Session types also include _recursive types_\(\mu\mathsf{t}.S\), in which the variable \(\mathsf{t}\) is assumed to occur guarded in \(S\), i.e., types such as \(\mu\mathsf{t}.\mathsf{t}\) are not allowed. In most cases, recursive types will be _tail-recursive_, although instances of _non-tail-recursive_ session types will also be relevant (cf. Example 3.3). Finally, type \(\mathsf{end}\) is the type of the terminated protocol.
**Notation 2.1**.: _As mentioned in the introduction, we shall study session types in which the continuation \(S\) in \(!(U);\!S\) and \(?(U);\!S\) is always \(\mathsf{end}\). Given this, we may sometimes omit trailing \(\mathsf{end}\)'s and write \(!(U)\) and \(?(U)\) rather than \(!(U);\!\mathsf{end}\) and \(?(U);\!\mathsf{end}\), respectively._
In theories of session types _duality_ is a key notion: implementations derived from dual session types will respect their protocols at run-time, avoiding communication errors. Intuitively, duality is obtained by exchanging \(!\) by \(?\) (and vice versa), including the fixed point construction. We write \(S\)\(\mathsf{dual}\)\(T\) if session types \(S\) and \(T\) are dual according to this intuition; the formal definition is coinductive, and given in [18] (see also [10]).
We consider shared, linear, and session _environments_, denoted \(\Gamma\), \(\Lambda\), and \(\Delta\), resp.:
\[\Gamma ::=\;\emptyset\;\;|\;\;\Gamma,x:C\!\rightarrow\!\diamond\;\;|\;\; \Gamma,u:\langle U\rangle\] \[\Lambda ::=\;\emptyset\;\;|\;\;\Lambda,x\!:C\!\rightarrow\!\diamond\] \[\Delta ::=\;\emptyset\;\;|\;\;\Delta,u\!:\!S\]
\(\Gamma\) maps variables and shared names to value types; \(\Lambda\) maps variables to linear higher-order types. \(\Delta\) maps session names to session types. While \(\Gamma\) admits weakening, contraction, and exchange principles, both \(\Lambda\) and \(\Delta\) are only subject to exchange. The domains of \(\Gamma,\Lambda\), and \(\Delta\) are assumed pairwise distinct. We write \(\Delta_{1}\cdot\Delta_{2}\) to denote the disjoint union of \(\Delta_{1}\) and \(\Delta_{2}\).
We write \(\Gamma\backslash x\) to denote the environment obtained from \(\Gamma\) by removing the assignment \(x:C\!\rightarrow\!\diamond\), for some \(C\). Notations \(\Delta\backslash u\) and \(\Gamma\backslash\widetilde{x}\) are defined similarly and have the expected readings. With a slight abuse of notation, given a tuple of variables \(\widetilde{x}\), we sometimes write \((\Gamma,\Delta)(\widetilde{x})\) to denote the tuple of types assigned to the variables in \(\widetilde{x}\) by the environments \(\Gamma\) and \(\Delta\).
The typing judgements for values \(V\) and processes \(P\) are denoted
\[\Gamma;\Lambda;\Delta\vdash V\triangleright U\qquad\text{and}\qquad\Gamma; \Lambda;\Delta\vdash P\triangleright\diamond\]
Figure 3 shows the typing rules; we briefly describe them and refer the reader to [18] for a full account. The shared type \(C\!\rightarrow\!\diamond\) is derived using Rule (Prom) only if the value has a linear type with an empty linear environment. Rule (EProm) allows us to freely use a shared type variable as linear. Abstraction values are typed with Rule (Abs). Application typing is governed by Rule (App): the type \(C\) of an application name \(u\) must match the type of the application variable \(x\) (\(C\!\rightarrow\!\diamond\) or
\(C\!\rightarrow\!\diamond\)). Rules (Req) and (Acc) type interaction along shared names; the type of the sent/received object \(V\) (i.e., \(U\)) should match the type of the subject \(s\) (\(\langle U\rangle\)). In Rule (Send), the type \(U\) of the value \(V\) should appear as a prefix in the session type \(!\langle U\rangle\);\(S\) of \(u\). Rule (Rcv) is its dual.
To state type soundness, we require two auxiliary definitions on session environments. First, a session environment \(\Delta\) is _balanced_ (written \(\mathsf{balanced}(\Delta)\)) if whenever \(s:S_{1},\overline{s}:S_{2}\in\Delta\) then \(S_{1}\mathsf{dual}\ S_{2}\). Second, we define the reduction relation \(\longrightarrow\) on session environments as:
\[\Delta,s:!\langle U\rangle;S_{1},\overline{s}:?(U);S_{2} \longrightarrow \Delta,s:S_{1},\overline{s}:S_{2}\] \[\Delta,s:\oplus\{l_{i}:S_{i}\}_{i\in I},\overline{s}:\&\{l_{i}:S ^{\prime}_{i}\}_{i\in I} \longrightarrow \Delta,s:S_{k},\overline{s}:S^{\prime}_{k}\ (k\in I)\]
**Theorem 2.1** (Type Soundness [18]).: Suppose \(\Gamma;\emptyset;\Delta\vdash P\triangleright\diamond\) with \(\mathsf{balanced}(\Delta)\). Then \(P\longrightarrow P^{\prime}\) implies \(\Gamma;\emptyset;\Delta^{\prime}\vdash P^{\prime}\triangleright\diamond\) and \(\Delta=\Delta^{\prime}\) or \(\Delta\longrightarrow\Delta^{\prime}\) with \(\mathsf{balanced}(\Delta^{\prime})\).
Figure 3: Typing Rules for \(\mathsf{HO}\).
_Remark 2.2_ (Typed Polyadic Communication).: When using processes with polyadic communication (cf. Remark 2.1), we shall assume the extension of the type system defined in [18].
**Example 2.4** (Typing name-passing constructs).: In Example 2.1 we recalled how to encode name-passing constructs in HO; now we show that this translation is typed. Following the name-passing encoding from [17] we define a syntactic sugar for types. The following typing rules for name-passing are derivable:
\[\text{(SendN)}\ \frac{\Gamma;\Lambda_{1};\Delta_{1}\vdash P\triangleright \diamond\quad\Gamma;\Lambda_{2};\Delta_{2}\vdash\widetilde{b}\triangleright \widetilde{C}}{\Gamma;\Lambda_{1},\Lambda_{2};\Delta_{1},\Delta_{2},t:!(^{r} \widetilde{C}^{\top});\!\text{end}\vdash t!(^{r}\widetilde{b}^{\top}).P \triangleright\diamond\]
\[\text{(RcvN)}\ \frac{\Gamma;\Lambda_{1};\Delta_{1}\vdash P\triangleright \diamond\quad\Gamma;\Lambda_{2};\Delta_{2}\vdash\widetilde{x}\triangleright \widetilde{C}}{\Gamma\backslash x;\Lambda_{1}\backslash\Lambda_{2};\Delta_{1} \backslash\Delta_{2},t:?(^{r}\widetilde{C}^{\top});\!\text{end}\vdash t?(^{r} \widetilde{x}^{\top}).P\triangleright\diamond\]
**Example 2.5** (Typing Recursive Servers).: Here we show how to type the processes from Example 2.2. Let us define:
\[T=\mu\mathsf{t}.!\langle U\rangle;\!\mathsf{t}\qquad C=\langle T\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
In the following derivation tree the right-hand side is shown similarly to (8) using assumption (3) instead of (2):
\[\begin{split}\text{(Par)}\ \frac{\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq
\[\text{(Req)}\xrightarrow{a:C_{1},b:C_{2};\emptyset;\emptyset\vdash\mathbf{0}\triangleright \diamond}\qquad\text{(Sh)}\xrightarrow{a:C_{1},b:C_{2};\emptyset;\emptyset \vdash a\triangleright\langle S_{1}\!\mathbin{\neg}\!\circ\!\rangle}\qquad\text{ (15)}\] \[\qquad\text{ (16)}\]
Finally we have:
\[\text{(Par)}\xrightarrow{\text{(16)}}\xrightarrow{a:C_{1},b:C_{2};\emptyset; \emptyset\vdash S_{a}\triangleright\diamond}\qquad\text{ (Res)}\xrightarrow{a:C_{1},b:C_{2};\emptyset;\emptyset\vdash\text{$R$ }\mid S_{a}\mid S_{b} \triangleright\diamond}\qquad\text{ (Res)}\xrightarrow{a:C_{1},b:C_{2};\emptyset;\emptyset\vdash\text{$R$ }\mid S_{a}\mid S_{b} \triangleright\diamond}\qquad\text{ (Res)}\xrightarrow{a:C_{1};\emptyset;\emptyset\vdash\text{$(\nu\,b:C_{2})\,R$ }\mid S_{a}\mid S_{b} \triangleright\diamond}\qquad\text{ ($R$ }\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
### Key Ideas
Consider a process \(P_{1}\) that implements the (standard) session type \(S=?(\mathsf{str});?(\mathsf{int});!(\mathsf{bool});\mathsf{end}\) along name \(u\). In process \(P_{1}\), name \(u\) is not a single-use resource; rather, it is used several times to implement the communication actions in \(S\); Figure 4 (top) graphically depicts the actions and the corresponding states.
The decomposition \(\mathcal{D}(P_{1})\) is illustrated in the bottom part of Figure 4: it is defined as the parallel composition of four processes \(Q_{i}\) (for \(i\in\{1,\ldots,4\}\)). Each process \(Q_{1}\), \(Q_{2}\), and \(Q_{3}\) mimic one action of \(P_{1}\) on an indexed name \(u_{i}\), while \(Q_{4}\) simulates the termination of the session. This way, a single name \(u\) in \(P_{1}\) is decomposed into a sequence of names \(u_{1},u_{2},u_{3}\) in \(\mathcal{D}(P_{1})\).
The processes \(Q_{1}\), \(Q_{2}\), \(Q_{3}\), and \(Q_{4}\) are composed in parallel, but we would like to retain the same sequentiality of actions on the channels \(u_{i}\) as we have on the channel \(u\). To that end, each process \(Q_{i}\), with the exception of \(Q_{1}\), does not perform its designated action on \(u_{i}\) until it gets activated by the previous process. In turn, after \(Q_{i}\) performs an action on \(u_{i}\) it evolves to a state \(Q_{i}^{\prime}\), which is responsible for activating the next process \(Q_{i+1}\). In Figure 4, the activations are indicated by red arrows. In general, the decomposition orchestrates the activation of sub-processes, following the sequencing prescribed by the session types of the given process. Therefore, assuming a well-typed source process, our decomposition codifies the sequentiality in session types into the process level.
The activation mechanism includes the _propagation_ of values across sub-processes (cf. the labels on red arrows). This establishes a flow of values from sub-processes binding them to those that use them (i.e., it makes variable bindings explicit). For example, in \(P_{1}\), the Boolean value being sent over as part of the session \(S\) might depend on the previously received string and integer values. Therefore, both of those values have to be propagated to the process \(Q_{3}\), which is responsible for sending out the Boolean.
In this example a single name \(u:S\) is decomposed into a sequence \(\widetilde{u}=(u_{1},\ldots,u_{n})\): each \(u_{i}\in\widetilde{u}\) is a single-use resource, as prescribed by its minimal session type. Such is the case for non-recursive types \(S\). When \(S\) is recursive, the situation is more interesting: each action of \(S\) can be repeated many times, and therefore the names \(\widetilde{u}\) should be propagated across trios to enable potentially many uses. As an example, consider the recursive session type \(S=\mu\mathsf{t}.?(\mathsf{int});!(\mathsf{int});\mathsf{t}\), in which an input and an output actions are repeated indefinitely. Consider the following process
\[R_{1}=\underbrace{r?(z).\,r!(-z).\,r?(z)}_{T_{1}}\underbrace{r?(z).\,r!(z)}_{T _{2}}\underbrace{V\,r}_{T_{3}}\]
which makes use of the channel \(r:S\) and where \(V\) has type \(S\!\mathop{\rightarrow}\!\diamond\). Figure 5 (top) gives the first four
Figure 4: Our decomposition function \(\mathcal{D}(-)\), illustrated. Nodes represent process states, ‘\(\parallel\)’ represents parallel composition of processes, black arrows stand for actions, and red arrows indicate synchronizations that preserve the sequentiality of the source process by activating trios and propagating (bound) values.
actions of \(R_{1}\) and the corresponding sates: the body of type \(S\) prescribes two actions on name \(r\), performed sequentially in \(R_{1}\) and \(R_{2}\); subsequent actions (enabled in \(R_{3}\) and \(R_{4}\)) correspond to a "new instance" of the body of \(S\).
The decomposition \(\mathscr{D}(R_{1})\), depicted in Figure 5 (bottom), generates a trio process for each prefix in \(R_{1}\); we denote prefixes with their corresponding trios \(T_{1},\ldots,T_{5}\). The type decomposition function on types, \(\mathscr{G}(-)\), slices \(S\) into two _minimal tail-recursive types_: \(M_{1}=\mu\)t.?(int);t and \(M_{2}=\mu\)t.!(int);t.
In the recursive case, a key idea is that trios that mimic actions prescribed by a recursive session types should reuse names, which should be propagated across trios. This way, for instance, trios \(T_{1}\) and \(T_{3}\) mimic the same (input) action, and so they both should use the same name (\(r_{1}\)). To achieve this, we devise a mechanism that propagates names with tail-recursive types (such as \((r_{1},r_{2})\)) through the trios. These propagation actions are represented by blue arrows in Figure 5 (bottom). In our example, \(T_{3}\) gathers the complete decomposition of names from preceding trios \((r_{1},r_{2})\); it mimics an input action on \(r_{1}\) and makes \((r_{1},r_{2})\) available to future trios (i.e., \(T_{4}\) and \(T_{5}\)).
Since the same tail-recursive names can be (re)used infinitely often, we propagate tail-recursive names through the following process. All the names \(\widetilde{r}\) corresponding to the decomposition of a tail-recursive name \(r\) are bound in the process
\[c^{r}?(x).x\,\widetilde{r},\]
which is similar to the servers discussed in Example 2.2. We call these processes _recursive propagators_, and each tail-recursive name in the original process \(P\) has a dedicated propagator in \(\mathscr{D}(P)\) on the channel \(c^{r}\). Whenever a trio has to perform an action \(\alpha(r_{i})\) on one of the decomposed tail-recursive names (i.e., a decomposition of an input action '\(r?(y)\).' or an output action '\(r!\langle V\rangle\).' on the name \(r\)), it first has to request the name from the corresponding recursive propagator by performing an output action \(c^{r}!\langle N\rangle\), where value \(N\) is the abstraction
\[N=\lambda\widetilde{z}.\,\alpha(z_{i}).\big{(}\overline{c_{k+1}!}\langle \widetilde{w}\rangle\mid c^{r}?(x).x\,\widetilde{z}\big{)}.\]
A synchronization on \(c^{r}\) will result in the reduction:
\[c^{r}?(x).x\,\widetilde{r}\mid c^{r}!\langle N\rangle\longrightarrow\alpha(r _{i}).\big{(}\overline{c_{k+1}!}\langle\widetilde{w}\rangle\mid c^{r}?(x).x \,\widetilde{r}\big{)}.\]
The resulting process first simulates \(\alpha(r)\) and subsequently reinstates the recursive propagator on \(c^{r}\), for the benefit of the other trios requiring access to the names \(\widetilde{r}\). See Examples 3.9 and 3.10 below (Page 24) for further illustration of this method.
This decomposition strategy handles \(\mathsf{HO}\) processes with recursive types which are _simple_ and _contractive_. That is, recursive types of the form \(\mu\)t.\(S\), where the body \(S\neq\mathsf{t}\) does not itself contain recursive types. Unless stated otherwise, we consider _tail-recursive_ session types such as, e.g., \(S=\mu\)t.?(int);?(bool);!(bool);t. Non-tail-recursive session types such as \(\mu\)t.?(\((\widetilde{T},\mathsf{t})\!\!\rightarrow\!\circ);end,\) used in the fully-abstract encoding of \(\mathsf{HO}\pi\) into \(\mathsf{HO}\)[17], can also be accommodated; see Example 3.3 below.
### The Decomposition
Here we formally present the decomposition of \(\mathsf{HO}\) processes. We start introducing some preliminary definitions, including the definition of an auxiliary function, called the _breakdown function_.
Following Parrow [21] we adopt some useful terminology and notation on trios. The _context_ of a trio is a tuple of variables \(\widetilde{x}\), possibly empty, which makes variable bindings explicit. We use a reserved set of _propagator names_ (or simply _propagators_), denoted with \(c_{k},c_{k+1},\ldots\), to carry contexts and trigger the subsequent trio. A process with less than three sequential prefixes is called a _degenerate trio_. Also, a _leading trio_ is the one that receives a context, performs an action, and triggers the next trio; a _control trio_ only activates other trios.
The breakdown function works on both processes and values. The breakdown of process \(P\) is denoted by \(\mathcal{B}^{k}_{\widetilde{x}}(P)\), where \(k\) is the index for the propagators \(c_{k}\), and \(\widetilde{x}\) is the context to be received by the previous trio. Similarly, the breakdown of a value \(V\) is denoted by \(\mathscr{V}_{\widetilde{x}}(V)\).
#### 3.2.1 Minimal Session Types and Decomposing Types
We start by introducing minimal session types as a fragment of Definition 2.2:
**Definition 3.1** (Minimal Session Types).: The syntax of _minimal session types_ for \(\mathsf{HO}\) is defined as follows:
\[U ::=\;\widetilde{C}\!\rightarrow\!\infty\;\mid\;\widetilde{C}\! \rightarrow\!\infty\] \[C ::=\;M\;\mid\;\langle U\rangle\] \[M ::=\;\gamma\;\mid\;!\langle\!\widetilde{U}\rangle;\gamma\;\mid\;?( \widetilde{U});\gamma\;\mid\;\mu\mathsf{t}.M\] \[\gamma ::=\;\mathsf{end}\;\mid\;\mathsf{t}\]
The above definition is minimal in its use of sequencing, which is only present in recursive session types such as \(\mu\mathsf{t}.!\langle U\rangle;\mathsf{t}\) and \(\mu\mathsf{t}.?(U);\mathsf{t}\)--these are tail-recursive session types with exactly one session prefix. Clearly, this minimal type structure induces a reduced set of typable \(\mathsf{HO}\) processes. A type system for \(\mathsf{HO}\) based on minimal session types can be straightforwardly obtained by specializing the definitions, typing rules, and results summarized in Section 2.2. We refer to \(\mathsf{HO}\) processes and terms typeable with minimal session types as _MST processes and terms_, respectively.
We now define how to "slice" a standard session type into a _list_ of minimal session types. We need the following auxiliary definition.
**Definition 3.2** (Predicates on Types and Names).: Let \(C\) be a channel type.
* We write \(\mathsf{tr}(C)\) to indicate that \(C\) is a tail-recursive session type.
* Given \(u:C\), we write \(\mathsf{lin}(u)\) if a session type (i.e. \(C=S\) for some \(S\)) that is not tail recursive.
With a slight abuse of notation, we write \(\mathsf{tr}(u)\) to mean \(u:C\) and \(\mathsf{tr}(C)\) (and similarly for \(\neg\mathsf{tr}(u)\)).
**Definition 3.3** (Decomposing Session Types).: Given the session, higher-order, and shared types of Definition 2.2, the _type decomposition function_\(\mathcal{G}(-)\) is defined using the auxiliary function \(\mathcal{R}(-)\) as in Figure 6. We write \(|\mathcal{G}(S)|\) to denote the length of \(\mathcal{G}(S)\) (and similarly for \(\mathcal{R}(-)\)).
The decomposition is self-explanatory; intuitively, if a session type \(S\) contains \(k\) input/output actions, the list \(\mathcal{G}(S)\) will contain \(k\) minimal session types. For a tail recursive \(\mu\mathsf{t}.S\), \(\mathcal{G}(\mu\mathsf{t}.S)\) is a list of minimal recursive session types, obtained using the auxiliary function \(\mathcal{R}(-)\) on \(S\): if \(S\) has \(k\) prefixes then the list \(\mathcal{G}(\mu\mathsf{t}.S)\) will contain \(k\) minimal recursive session types.
We illustrate Definition 3.3 with three examples.
Figure 5: Decomposition of processes with recursive session types, illustrated. Dashed blue arrows represent the propagation of tail-recursive names (\(r_{1}\),\(r_{2}\)) across trios.
**Example 3.1** (Decomposition a Non-recursive Type).: Let \(S=?(\mathsf{int});?(\mathsf{int});!(\mathsf{bool});\mathsf{end}\) be the session type given in Section 1. Then \(\mathcal{G}(S)\) denotes the list \(?(\mathsf{int}),?(\mathsf{int}),!(\mathsf{bool})\). \(\lhd\)
**Example 3.2** (Decomposing a Recursive Type).: Let \(S=\mu\mathsf{t}.S^{\prime}\) be a recursive session type, with \(S^{\prime}=?(\mathsf{int});?(\mathsf{bool});!(\mathsf{bool});\mathsf{t}\). By Definition 3.3, since \(S\) is tail-recursive, \(\mathcal{G}(S)=\mathcal{R}(S^{\prime})\). Further, \(\mathcal{R}(S^{\prime})=\mu\mathsf{t}.?(\mathcal{G}(\mathsf{int}));\mathsf{t},\mathcal{R}(?(\mathsf{bool});!(\mathsf{bool});\mathsf{t})\). By definition of \(\mathcal{R}(-)\), we obtain
\[\mathcal{G}(S)=\mu\mathsf{t}.?(\mathsf{int});\mathsf{t},\ \mu\mathsf{t}.?( \mathsf{bool});\mathsf{t},\ \mu\mathsf{t}.!(\mathsf{bool});\mathsf{t},\mathcal{R}(t)\]
(using \(\mathcal{G}(\mathsf{int})=\mathsf{int}\) and \(\mathcal{G}(\mathsf{bool})=\mathsf{bool}\)). Since \(\mathcal{R}(\mathsf{t})=\epsilon\), we obtain
\[\mathcal{G}(S)=\mu\mathsf{t}.?(\mathsf{int});\mathsf{t},\ \mu\mathsf{t}.?( \mathsf{bool});\mathsf{t},\ \mu\mathsf{t}.!(\mathsf{bool});\mathsf{t}\]
In addition to tail-recursive types that are handled by \(\mathcal{R}(-)\), we need to support non-tail-recursive types of form \(\mu\mathsf{t}.?((\widetilde{T},\mathsf{t})\!\to\!\circ);\mathsf{end}\) that are essential for the encoding of recursion in \(\mathsf{HO}\pi\) into \(\mathsf{HO}\). The following example illustrates such a decomposition.
**Example 3.3** (Decomposing a Non-tail-recursive Type).: Let \(S=\mu\mathsf{t}.?((?(\mathsf{str});!(\mathsf{str});\mathsf{end},\mathsf{t}) \!\to\!\circ);\mathsf{end}\) be a non-tail-recursive type. We obtain the following decomposition:
\[\mathcal{G}(S) =\mu\mathsf{t}.\mathcal{G}(?((?(\mathsf{str});!(\mathsf{str}); \mathsf{end},\mathsf{t})\!\to\!\circ);\mathsf{end})\] \[=\mu\mathsf{t}.?((?(\mathsf{str});!(\mathsf{str});\mathsf{end}, \mathsf{t})\!\to\!\circ))\] \[=\mu\mathsf{t}.?((?(\mathsf{str}),!(\mathsf{str}),\mathsf{t})\! \to\!\circ)=M\]
We can see that we have generated minimal non-tail-recursive type \(M\). \(\lhd\)
Now, we illustrate the encoding of \(\mathsf{HO}\pi\) recursive processes into \(\mathsf{HO}\) from [17] using the non-tail-recursive type \(S\) given in the above example.
**Example 3.4** (Encoding Recursion).: Consider the process \(P=\mu X.a?(m).a!\langle m\rangle.X\), which contains recursion and so it is not an \(\mathsf{HO}\) process. Still, \(P\) can be encoded into \(\mathsf{HO}\) as follows [17]:
\[[P]=a?(m).a!\langle m\rangle.(\nu\,s)\,(V\,(a,s)\mid\overline{s}!\langle V\rangle)\]
Figure 6: Decomposing session types into minimal session types (Definition 3.3)
where the value \(V\) is an abstraction that potentially reduces to \(\left\lceil P\right\rceil\):
\[V=\lambda(x_{a},y_{1}).\,y_{1}?(z_{x}).x_{a}?(m).x_{a}!\langle m\rangle.(\nu\,s) \left(z_{x}\left(x_{a},s\right)\mid\mathfrak{F}!\langle z_{x}\rangle.\mathbf{0}\right)\]
As detailed in [17], this encoding relies on non-tail-recursive types. In particular, the bound name \(s\) in \(\left\lceil P\right\rceil\) is typed with the following type, discussed above in Example 3.3:
\[S=\mu\mathtt{t}.?((?(\mathtt{str})!\langle\mathtt{str}\rangle;\mathtt{end}, \mathtt{t})\!\rightarrow\!\circ);\mathtt{end}\]
We compose \(\left\lfloor P\right\rceil\) with an appropriate client process to illustrate the encoding of recursion. Below \(R\) stands for some unspecified process such that \(a\in\mathtt{rn}(R)\):
\[\left\lceil P\right\rceil\mid a!\langle W\rangle.a?(b).R \longrightarrow^{2}(\nu\,s)\left(V\left(a,s\right)\mid\mathfrak{ F}!\langle V\rangle\right)\mid R\] \[\longrightarrow(\nu\,s)\left(s?(z_{x}).a?(m).a!\langle m\rangle.( \nu\,s^{\prime})\left(z_{x}\left(a,s^{\prime}\right)\mid\overline{s^{\prime}!}\langle z_{x}\rangle\right)\mid\mathfrak{F}!\langle V\rangle\right)\mid R\] \[\longrightarrow a?(m).a!\langle m\rangle.(\nu\,s^{\prime})\left(V \left(a,s^{\prime}\right)\mid\overline{s^{\prime}!}\langle V\rangle\right) \mid R\] \[=\left\lfloor P\right\rceil\mid R\]
#### 3.2.2 Decomposing Processes
As we have seen, each session type \(S\) is decomposed into \(\mathcal{G}(S)\), a list of minimal session types. Accordingly, given an assignment \(s:S\), we decompose \(s\) into a series of names, one for each action in \(S\). We use _indexed names_ to formalize the names used by minimally typed processes. Formally, an indexed name is a pair \((n,i)\) with \(i\in\mathbb{N}\), which we denote as \(n_{i}\). We refer to processes with indexed names as _indexed processes_.
The decomposition of processes is defined in Definition 3.9, and it relies on a breakdown function, denoted \(\mathbb{\mathit{\Sigma}}_{\mathbb{\mathit{\Sigma}}}^{k}(-)\), which operates on indexed processes. Before we dive into those functions we present some auxiliary definitions.
Preliminaries.To handle the unfolding of recursive types, we shall use the following auxiliary function, which decomposes guarded recursive types, by first ignoring all the actions until the recursion.
**Definition 3.4** (Decomposing an Unfolded Recursive Type).: Let \(S\) be a session type. The function \(\mathcal{R}^{\star}(-)\): is defined as follows
\[\mathcal{R}^{\star}(\mu\mathtt{t}.S) =\mathcal{R}(S)\] \[\mathcal{R}^{\star}(?(U);S) =\mathcal{R}^{\star}(S)\] \[\mathcal{R}^{\star}(!\langle U\rangle;S) =\mathcal{R}^{\star}(S)\]
**Example 3.5**.: Let \(T=?(\mathtt{bool})!\langle\mathtt{bool}\rangle;S\) be a derived unfolding of \(S\) from Example 3.2. Then, by Definition 3.3, \(\mathcal{R}^{\star}(T)\) is the list of minimal recursive types obtained as follows: first, \(\mathcal{R}^{\star}(T)=\mathcal{R}^{\star}(!\langle\mathtt{bool}\rangle;\mu \mathtt{t}.S^{\prime})\) and after one more step, \(\mathcal{R}^{\star}(!\langle\mathtt{bool}\rangle;\mu\mathtt{t}.S^{\prime})= \mathcal{R}^{\star}(\mu\mathtt{t}.S^{\prime})\). Finally, we have \(\mathcal{R}^{\star}(\mu\mathtt{t}.S^{\prime})=\mathcal{R}(S^{\prime})\). We get the same list of minimal types as in Example 3.2: \(\mathcal{R}^{\star}(T)=\mu\mathtt{t}.?(\mathtt{int});\mathtt{t},\mu\mathtt{t}.?( \mathtt{bool});\mathtt{t},\mu\mathtt{t}.!(\mathtt{bool});\mathtt{t}.\)\(\lnot\mathtt{t}.\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\not\lnot\lnot\lnot\lnot\lnot\lnot\not\lnot\lnot\lnot\not\lnot\lnot\lnot\lnot\lnot\not\lnot\lnot\lnot\lnot\not\lnot\lnot\lnot\lnot\lnot\not\lnot\lnot\lnot\lnot\lnot\not\lnot\lnot\lnotnot\lnot\lnot\not\lnot\lnot\lnotnot\lnot\lnot\not\lnot\lnot\lnotnot\lnot\lnot\lnot\not\lnot\lnot\lnot\lnot\lnot\lnot\lnot\lnot\not\lnotnot\lnot\lnot\lnot\not\lnotnot\lnot\lnot\not\lnotnot\lnot\lnot\lnot\lnot\lnotnot\lnot\lnot\lnotnot\lnot\lnot\lnot\lnot\lnotnot\lnot\lnot\lnot\lnotnot\lnot\lnotnot\lnot\lnotnot\lnot\lnotnot\lnotnot\lnotnot\lnot\lnotnot\lnot\lnotnot\lnotnot\lnot\lnotnot\lnot\lnotnot\lnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnotnot\lnotnot\lnotnot\lnotnot\lnotnot\lnotnotnot\lnotnot\lnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnot\lnotnot\lnotnotnot\lnotnot\lnotnot\lnotnotnot\lnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnotnot\lnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnotnot\lnotnotnotnot\lnotnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\lnotnot\lnotnotnot\lnotnotnotnot\lnotnotnot\lnotnotnot\lnotnotnot\not\lnotnot\not\lnotnotnot\lnotnot\lnotnot\not\lnotnot\not\lnot\lnotnotnot\lnotnot\not\lnotnot\not\lnotnotnot\lnotnot\not\lnotnot\not\lnotnot\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnot\not\lnotnot\not\lnotnot\not\lnot\not\lnotnot\not\lnotnot\not\lnotnot\not\lnotnot\lnot\not\lnotnot\not\not\lnot\not\lnotnot\not\lnot\not\lnot\not\not\lnot\not\not\lnot\not\lnot\not\not\lnot\not\lnot\not\not\lnot\not\not\lnot\not\not\lnot\not\lnot\not\lnot\not\not\
where \([S]^{*}_{l}\):
\[[\mu\text{\sf t.}S)^{*}_{l} =|\mathcal{R}(S)|-l+1\] \[[!(U);\!S)^{*}_{l} =[S]^{*}_{l+1}\] \[[?(U);\!S)^{*}_{l} =[S]^{*}_{l+1}\]
**Example 3.6**.: Let \(S^{\prime}=?(\mathsf{bool});!(\mathsf{bool});\!S\) where \(S\) is as in Example 3.2. Then \([S^{\prime}\rangle=2\) since the top-most prefix of \(S^{\prime}\) (\(?(\mathsf{bool});\!\)) is the second prefix in the body of \(S\). \(\lhd\)
In order to determine the required number of propagators \((c_{k},c_{k+1},\ldots)\) required in the breakdown of processes and values, we define the _degree_ of a process:
**Definition 3.6** (Degree of a Process).: Let \(P\) be an \(\mathsf{HO}\) process. The _degree_ of \(P\), denoted \(\left\langle P\right\rangle\), is defined as follows:
\[\left\langle P\right\rangle=\begin{cases}\left\langle Q\right\rangle+1&\text{ if }P=u_{i}!(V).Q\text{ or }P=u_{i}?(y).Q\\ \left\langle P^{\prime}\right\rangle&\text{ if }P=(\nu\,s:S)\,P^{\prime}\\ \left\langle Q\right\rangle+\left\langle R\right\rangle+1&\text{ if }P=Q\mid R\\ 1&\text{ if }P=V\,u_{i}\text{ or }P=\mathbf{0}\end{cases}\]
We define an auxiliary function that "initializes" the indices of a tuple of names, for turning a regular process into an indexed process.
**Definition 3.7** (Initializing an indexed process).: Let \(\widetilde{u}=(a,b,s,s^{\prime},r,r^{\prime},\ldots)\) be a finite tuple of names. We shall write \(\mathsf{init}(\widetilde{u})\) to denote the tuple of indexed names \((a_{1},b_{1},s_{1},s^{\prime}_{1},r_{1},r^{\prime}_{1},\ldots)\).
**Definition 3.8** (Subsequent index substitution).: Let \(n_{i}\) be an indexed name. We define \(\mathsf{next}(n_{i})=(\mathsf{lin}(n_{i}))\,?\,\{n_{i+1}\!/n_{i}\}\colon\{\}\).
_Remark 3.2_.: Recall that we write '\(c_{k}?()\)' and '\(\overline{c_{k}!}()\)' to denote input and output prefixes in which the value communicated along \(c_{k}\) is not relevant. While '\(c_{k}?()\)' stands for '\(c_{k}?()\)', '\(\overline{c_{k}!}()\)' stands for \(\overline{c_{k}!}(\lambda x.\,\mathbf{0})\)'. Their corresponding minimal types are \(?(\mathsf{end}\!\rightarrow\!\circ)\) and \(!(\mathsf{end}\!\rightarrow\!\circ)\), which are denoted by \(?(-)\) and \(!(\mathsf{-})\), respectively.
Given a typed process \(P\), we write \(\mathsf{rn}(P)\) to denote the set of free names of \(P\) whose types are recursive. As mentioned above, for each \(r\in\mathsf{rn}(P)\) with \(r:S\) we shall rely on a control trio of the form \(c^{r}?(x).x\,\widetilde{r}\), where \(\widetilde{r}=r_{1},\ldots,r_{|\mathcal{G}(S)|}\).
**Definition 3.9** (Decomposition of a Process).: Let \(P\) be a closed \(\mathsf{HO}\) process with \(\widetilde{u}=\mathsf{fn}(P)\) and \(\widetilde{v}=\mathsf{rn}(P)\). The _decomposition_ of \(P\), denoted \(\mathcal{D}(P)\), is defined as:
\[\mathcal{D}(P)=(\nu\,\widetilde{c})\,(\nu\,\widetilde{c}_{r})\, \Big{(}\overline{c_{k}!}(\lambda.\,\mathbf{0}\mid\mathbb{\mathbb{\mathbb{ \mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{\mathbb{ \mathbb{ \mathbb{ \mathbb{ \cdot
**Output:** The decomposition of \(u_{i}!(V).Q\) is arguably the most interesting case, as both the sent value \(V\) and the continuation \(Q\) have to be decomposed. We distinguish two cases:
* If \(\neg\mathsf{tr}(u_{i})\) then \(u_{i}\) is linear or shared, and then we have: \[\mathscr{D}^{k}_{\widetilde{x}}(u_{i}!(V).Q)=c_{k}?(\widetilde{x}).u_{i}! \big{\langle}\vee_{\widetilde{y}}(V\sigma)\big{\rangle}.\overline{c_{k+1}}!( \widetilde{w})\mid\mathscr{D}^{k+1}_{\widetilde{w}}(Q\sigma)\] This decomposition consists of a leading trio that mimics an output action in parallel with the breakdown of \(Q\). The context \(\widetilde{x}\) must include the free variables of \(V\) and \(Q\), which are denoted
\begin{table}
\begin{tabular}{|l|l|l|} \hline \(P\) & \(\mathscr{D}^{k}_{\widetilde{x}}(P)\) & \\ \hline \multirow{5}{*}{\(u_{i}!(V).Q\)} & \(\bullet\neg\mathsf{tr}(S)\): & \(u_{i}:S\) \\ & \(c_{k}?(\widetilde{x}).u_{i}?(y).\overline{c_{k+1}!}(\widetilde{w})\mid \mathscr{D}^{k+1}_{\widetilde{w}}(Q\sigma)\) & \(\widetilde{y}=\mathsf{fv}(V)\) \\ & \(\bullet\mathsf{tr}(S)\): & \(\widetilde{w}=\mathsf{fv}(Q)\) \\ & \(c_{k}?(\widetilde{x}).c^{u!}\big{\langle}N_{V}\big{\rangle}\mid\mathscr{D}^{k +1}_{\widetilde{w}}(Q)\) & \(\sigma=\mathsf{next}(u_{i})\) \\ & where: & \(\widetilde{z}=(z_{1},\ldots,z_{|\mathscr{R}^{*}(S)|})\) \\ & \(N_{V}=\lambda\widetilde{z}.\,z_{|\mathscr{S}}?(y).\big{(}\overline{c_{k+1}!}( \widetilde{w})\mid c^{u?}(x).x\,\widetilde{z}\big{)}\) & \\ \hline \multirow{5}{*}{\(u_{i}?(y).Q\)} & \(\bullet\mathsf{tr}(S)\): & \(u_{i}:S\) \\ & \(c_{k}?(\widetilde{x}).u_{i}?(y).\overline{c_{k+1}!}(\widetilde{w})\mid \mathscr{D}^{k+1}_{\widetilde{w}}(Q\sigma)\) & \(\widetilde{w}=\mathsf{fv}(Q)\) \\ \cline{1-1} & \(\neg\mathsf{tr}(S)\): & \(\sigma=\mathsf{next}(u_{i})\) \\ \cline{1-1} & \(c_{k}?(\widetilde{x}).c^{u!}\big{\langle}N_{y}\big{\rangle}\mid\mathscr{D}^{ k+1}_{\widetilde{w}}(Q)\) & \(\widetilde{z}=(z_{1},\ldots,z_{|\mathscr{R}^{*}(S)|})\) \\ \cline{1-1} & \(N_{y}=\lambda\widetilde{z}.\,z_{|\mathscr{S}}?(y).\big{(}\overline{c_{k+1}!} (\widetilde{w})\mid c^{u?}(x).x\,\widetilde{z}\big{)}\) & \\ \hline \multirow{5}{*}{\(V\left(\widetilde{r},u_{i}\right)\)} & \(c_{k}?(\widetilde{x}).\overline{c^{r_{1}}!\big{\langle}\lambda\widetilde{z}_{1}.c^{r_{2}}!\big{\langle}\lambda\widetilde{z}_{2}.\cdots.c^{r_{n}}!\big{\langle} \lambda\widetilde{z}_{n}.Q\big{\rangle}\big{\rangle}\big{\rangle}\) & \(u_{i}:C\) \\ & where: & \(\widetilde{z}_{i}=(z_{1}^{i},\ldots,z_{|\mathscr{R}^{*}(S_{i})|}^{2}))\) \\ & \(Q=\mathscr{V}_{\widetilde{x}}(V)\left(\widetilde{z}_{1},\ldots,\widetilde{z}_{n },\widetilde{m}\right)\) & \(\widetilde{m}=(u_{i},\ldots,u_{i+|\mathscr{G}(C)|-1})\) \\ \hline \multirow{5}{*}{\(\left(\nu\,s:C\right)P^{\prime}\)} & \(\bullet\neg\mathsf{tr}(C)\): & \(\widetilde{x}=\mathsf{fv}(P^{\prime})\) \\ & \(\left(\nu\,\widetilde{s}:\mathscr{G}(C)\right)\,\mathscr{D}^{k}_{\widetilde{x}} (P^{\prime}\sigma)\) & \(\widetilde{s}=(s_{1},\ldots,s_{|\mathscr{G}(C)|})\) \\ & \(\bullet\mathsf{tr}(C)\) : & \(\widetilde{s}=(\overline{s_{1}},\ldots,\overline{s_{|\mathscr{G}(C)|}})\) \\ & \(\left(\nu\,\widetilde{s}:\mathscr{G}(C)\right)(\nu\,c^{s})\,c^{s?}(x).x\, \widetilde{s}\mid\mathscr{D}^{k}_{\widetilde{x}}(P^{\prime}\sigma)\) & \(\sigma=\{s_{1}\overline{s_{1}}/s\overline{s}\}\) \\ \hline \multirow{5}{*}{\(Q\mid R\)} & \multirow{5}{*}{\(c_{k}?(\widetilde{x}).\overline{c_{k+1}!}(\widetilde{y}).\overline{c_{k+1}!} (\widetilde{w})\mid\mathscr{D}^{k+1}_{\widetilde{y}}(Q)\mid\mathscr{D}^{k+1}_{ \widetilde{w}}(R)\)} & \(\widetilde{y}=\mathsf{fv}(Q)\) \\ & & \(l=\,\neg Q\) \\ \hline \multirow{2}{*}{\(\mathbf{0}\)} & \(c_{k}?(\mathsf{0}).\mathbf{0}\) & \multirow{2}{*}{} & \multirow{2}{*}{} \\ \hline \(V\) & \(\mathscr{V}_{\widetilde{x}}(\widetilde{V})\) & \\ \hline \(y\) & \(y\) & \\ \hline \multirow{5}{*}{\(\lambda(\widetilde{y}z).\,P\)} & \(\lambda(\widetilde{y^{1}},\ldots,\widetilde{y^{n}},\widetilde{z}):(\widetilde{M}) \stackrel{{\sim}}{{\cdots}}.N\) & \(\widetilde{y}z:\widetilde{S}C\) \\ & where: & \(\forall y_{i}\in\widetilde{y}.(y_{i}:S_{i}\wedge\mathsf{tr}(S_{i})\wedge\) \\ & \(\widetilde{M}=(\mathscr{G}(S_{1}),\ldots,\mathscr{G}(S_{n}),\mathscr{G}(C))\) & \(\widetilde{y^{i}}=(y^{i}_{1},\ldots,y^{i}_{|\mathscr{G}(S_{i})|}))\) \\ & \(N=\left(\nu\,\widetilde{c}\right)(\nu\,\widetilde{c}_{r})\,\prod_{i\in| \widetilde{y}|}\left(c^{u?}(x).x\,\widetilde{y}^{i}\right)\mid\overline{c_{1}!}(\widetilde{x})\mid\) & \(\widetilde{z}=(z_{1},\ldots,z_{|\mathscr{G}(C)|})\) \\ & \(\mathscr{D}^{1}_{\widetilde{x}}\big{\langle}P\{z^{1}\!/z\}\big{\rangle}\) & \(\widetilde{c}_{r}=\bigcup_{r\in\widetilde{y}}c^{r}\) \\ \hline \end{tabular}
\end{table}
Table 1: The breakdown function for processes and values.
\(\widetilde{y}\) and \(\widetilde{w}\), respectively. These tuples are not necessarily disjoint: variables with shared types can appear free in both \(V\) and \(Q\). The value \(V\) is then broken down with parameters \(\widetilde{y}\) and \(k+1\); the latter serves to consistently generate propagators for the trios in the breakdown of \(V\), denoted \(\mathcal{V}_{\widetilde{y}}(V\sigma)\) (see below). The substitution \(\sigma\) increments the index of session names; it is applied to both \(V\) and \(Q\) before they are broken down. By taking \(\sigma=\mathsf{next}(u_{i})\) we distinguish two cases (see Definition 3.8): * If name \(u_{i}\) is linear (i.e., it has a session type) then its future occurrences are renamed into \(u_{i+1}\), and \(\sigma=\{u_{i+1}/u_{i}\}\); * Otherwise, if \(u_{i}\) is shared, then \(\sigma=\{\}\). Note that if \(u_{i}\) is linear then it appears either in \(V\) or \(Q\) and \(\sigma\) affects only one of them. The last prefix activates the breakdown of \(Q\) with its corresponding context \(\widetilde{w}\). In case \(V=y\), the same strategy applies; because \(\mathcal{V}_{\widetilde{y}}(y\sigma)=y\), we have: \[\mathcal{B}_{\widetilde{x}}^{k}(u_{i}!\langle y\rangle.Q)=c_{k}?(\widetilde{x }).u_{i}!\langle y\rangle.\overline{c_{k+1}}!\langle\widetilde{w}\rangle\mid \mathcal{B}_{\widetilde{w}}^{k+1}(Q\sigma)\] Notice that variable \(y\) is not propagated further if it does not appear free in \(Q\).
* If \(\mathsf{tr}(u_{i})\) then \(u_{i}\) is tail-recursive and then we have: \[\mathcal{B}_{\widetilde{x}}^{k}(u_{i}!\langle V\rangle.Q)= c_{k}?(\widetilde{x}).c^{u}!\langle N_{V}\rangle\mid\mathcal{B}_{ \widetilde{w}}^{k+1}(Q)\] \[\text{where: }\ N_{V}= \lambda\widetilde{z}.z_{[S]}!\langle\mathcal{V}_{\widetilde{y}} (V)\rangle.(\overline{c_{k+1}}!\langle\widetilde{w}\rangle\mid c^{u}?(x).x \,\widetilde{z})\] The decomposition consists of a leading trio that mimics the output action running in parallel with the breakdown of \(Q\). After receiving the context \(\widetilde{x}\), the leading trio sends an abstraction \(N_{V}\) along \(c^{u}\), which performs several tasks. First, \(N_{V}\) collects the sequence of names \(\tilde{u}\); then, it mimics the output action of \(P\) along one of such names \((u_{[S]})\) and triggers the next trio, with context \(\widetilde{w}\); finally, it reinstates the server on \(c^{u}\) for the next trio that uses \(u\). Notice that indexing is not relevant in this case. In case \(V=y\), we have \(\mathcal{V}_{\widetilde{y}}(y\sigma)=y\) and \(\mathcal{V}_{\widetilde{y}}=0\), hence: \[\mathcal{B}_{\widetilde{x}}^{k}(u_{i}!\langle y\rangle.Q)=c_{k}?(\widetilde{x }).c^{u}!\langle\lambda\widetilde{z}.\,z_{[S]}!\langle y\rangle.(\overline{c _{k+1}}!\langle\widetilde{w}\rangle\mid c^{u}?(x).x\,\widetilde{z})\rangle \mid\mathcal{B}_{\widetilde{w}}^{k+1}(Q)\]
Input:To decompose a process \(u_{i}?(y).Q\) we distinguish two cases, as before: (i) name \(u_{i}\) is linear or shared or (ii) tail-recursive. In case (i), the breakdown is defined as follows:
\[\mathcal{B}_{\widetilde{x}}^{k}(u_{i}?(y).Q)=c_{k}?(\widetilde{x}).u_{i}?(y). \overline{c_{k+1}}!\langle\widetilde{w}\rangle\mid\mathcal{B}_{\widetilde{w} }^{k+1}(Q\sigma)\]
where \(\widetilde{w}=\mathtt{fv}(Q)\). A leading trio mimics the input action and possibly extends the context with the received variable \(y\). The substitution \(\sigma\) is defined as in the output case.
In case (ii), when \(u_{i}\) has tail-recursive session type \(S\), the decomposition is as in the output case:
\[\mathcal{B}_{\widetilde{x}}^{k}(u_{i}?(y).Q)=c_{k}?(\widetilde{x}).c^{u}! \langle\lambda\widetilde{z}.\,z_{[S]}?(y).(\overline{c_{k+1}}!\langle \widetilde{w}\rangle\mid c^{u}?(x).x\,\widetilde{z})\rangle\mid\mathcal{B}_{ \widetilde{w}}^{k+1}(Q)\]
Application:For simplicity we consider the breakdown of applications of the form \(V\left(\widetilde{r},u_{i}\right)\), where every \(r_{i}\in\widetilde{r}\) is such that \(\mathsf{tr}(r_{i})\) and only \(u_{i}\) is such that \(\neg\mathsf{tr}(u_{i})\). The general case (involving different orders in names and multiple names with non-recursive types) is similar. We have:
\[\mathcal{B}_{\widetilde{x}}^{k}(V\left(\widetilde{r},u_{i}\right))= c_{k}?(\widetilde{x}).\overline{c^{r_{1}}!\langle\lambda\widetilde{z} _{1}.c^{r_{2}}!\langle\lambda\widetilde{z}_{2}.\cdots.c^{r_{n}}!\langle \lambda\widetilde{z_{n}}.\,\mathcal{V}_{\widetilde{x}}(V)\left(\widetilde{z}_{1},\ldots,\widetilde{z}_{n},\widetilde{m}\right)\rangle\,\rangle}\]
Let us first discuss how names in \((\widetilde{r},u_{i})\) are decomposed using types. Letting \(|\widetilde{r}|=n\) and \(i\in\{1,\ldots,n\}\), for each \(r_{i}\in\widetilde{r}\) (with \(r_{i}:S_{i}\)) we generate a sequence \(\widetilde{z}_{i}=(z_{1}^{i},\ldots,z_{|\mathbb{R}^{s}(S_{i})|}^{i})\) as in the output case. We decompose name \(u_{i}\) (with \(u_{i}:C\)) as \(\widetilde{m}=(u_{i},\ldots,u_{i+|\mathcal{G}(C)|-1})\).
The decomposition first receives a context \(\widetilde{x}\) for value \(V\): we break down \(V\) with \(\widetilde{x}\) as a context since these variables need to be propagated to the abstracted process. Subsequently, an output on \(c^{r_{1}}\) sends a value containing \(n\) abstractions that occur nested within output prefixes--this is similar to the mechanism for partial instantiation shown in Example 2.3. For each \(j\in\{1,\ldots,n-1\}\), each abstraction binds \(\widetilde{z}_{j}\) and sends the next abstraction along \(c^{r_{j+1}}\). The innermost abstraction abstracts over \(\widetilde{z}_{n}\) and encloses the process \(\mathcal{V}_{\hat{x}}(V)\left(\widetilde{z}_{1},\ldots,\widetilde{z}_{n}, \widetilde{m}\right)\), which effectively mimics the application. This abstraction nesting binds all variables \(\widetilde{z}_{i}\), the decompositions of all tail-recursive names \((\widetilde{r})\).
The breakdown of a value application of the form \(y\left(\widetilde{r},u_{i}\right)\) results into the following specific case:
\[\mathbb{E}_{\hat{x}}^{k}(y\left(\widetilde{r},u_{i}\right))=c_{k}?(\widetilde{ x}).\overline{c^{r_{1}}!\langle\lambda\widetilde{z}_{1}.c^{r_{2}}!\langle \lambda\widetilde{z}_{2}.\cdots.c^{r_{n}}!\langle\lambda\widetilde{z}_{n}. \rangle\,y\left(\widetilde{z}_{1},\ldots,\widetilde{z}_{n},\widetilde{m} \right)\rangle\,\rangle}\]
Restriction:The decomposition of \((\nu\,s:C)\,P^{\prime}\) depends on \(C\):
* If \(\neg\mathsf{tr}(C)\) then \[\mathbb{E}_{\hat{x}}^{k}(\left(\nu\,s:C\right)P^{\prime})=(\nu\,\widetilde{s }:\mathcal{G}(C))\ \mathbb{E}_{\hat{x}}^{k}(P^{\prime}\sigma)\] By construction, \(\widetilde{x}=\mathtt{fv}(P^{\prime})\). Similarly as in the decomposition of \(u_{i}\) into \(\widetilde{m}\) discussed above, we use the type \(C\) of \(s\) to obtain the tuple \(\widetilde{s}\) of length \(|\mathcal{G}(C)|\). We initialize the index of \(s\) in \(P^{\prime}\) by applying the substitution \(\sigma\). This substitution depends on \(C\): if it is a shared type then \(\sigma=\{s_{1}/s\}\); otherwise, if \(C\) is a session type, then \(\sigma=\{s_{1}\overline{s_{1}}/s_{8}\overline{s}\}\).
* Otherwise, if \(\mathsf{tr}(C)\) then we have: \[\mathbb{E}_{\hat{x}}^{k}(\left(\nu\,s:C\right)P^{\prime})=(\nu\,\widetilde{s }:\mathcal{R}(S))\,(\nu\,c^{s})\,c^{s_{2}}?(x).x\,\widetilde{s}\mid(\nu\,c^{ \bar{s}})\,c^{\bar{s}}?(x).x\,\widetilde{\overline{s}}\mid\mathbb{E}_{\hat{x }}^{k}(P^{\prime})\] We decompose \(s\) into \(\widetilde{s}=(s_{1},\ldots,s_{|\mathcal{G}(S)|})\) and \(\overline{s}\) into \(\widetilde{\overline{s}}=(\overline{s_{1}},\ldots,\overline{s_{|\mathcal{G}( S)|}})\). Notice that as \(\mathsf{tr}(C)\) we have \(C\equiv\mu\mathsf{t}.S\), therefore \(\mathcal{G}(C)=\mathcal{R}(S)\). The breakdown introduces two servers in parallel with the breakdown of \(P^{\prime}\); they provide names for \(s\) and \(\overline{s}\) along \(c^{s}\) and \(c^{\overline{s}}\), respectively. The server on \(c^{s}\) (resp. \(c^{\overline{s}}\)) receives a value and applies it to the sequence \(\widetilde{s}\) (resp. \(\widetilde{\overline{s}}\)). We restrict over \(\widetilde{s}\) and propagators \(c^{s}\) and \(c^{\overline{s}}\).
Composition:The breakdown of a process \(Q\mid R\) is as follows:
\[\mathbb{E}_{\hat{x}}^{k}(Q\mid R)=c_{k}?(\widetilde{x}).\overline{c_{k+1}!} \langle\widetilde{y}\rangle.\overline{c_{k+l+1}!}\langle\widetilde{w}\rangle \mid\mathbb{E}_{\hat{y}}^{k+1}(Q)\mid\mathbb{E}_{\hat{w}}^{k+l+1}(R)\]
A control trio triggers the breakdowns of \(Q\) and \(R\); it does not mimic any action of the source process. The tuple \(\widetilde{y}\subseteq\widetilde{x}\) (resp. \(\widetilde{w}\subseteq\widetilde{x}\)) collects the free variables in \(Q\) (resp. \(R\)). To avoid name conflicts, the trigger for the breakdown of \(R\) is \(\overline{c_{k+l+1}}\), with \(l=\lceil Q\rceil\).
Inaction:To breakdown \(\mathbf{0}\), we define a degenerate trio with only one input prefix that receives a context that by construction will always be empty (i.e., \(\widetilde{x}=\epsilon\), cf. Remark 3.2):
\[\mathbb{E}_{\hat{x}}^{k}(\mathbf{0})=c_{k}?(\mathbf{0}).\mathbf{0}\]
Value:For simplicity, let us consider values of the form \(V=\lambda(\widetilde{y},z):(\widetilde{S},C)\,\widetilde{\widetilde{\,}}\,.\)\(P\), where \(\mathsf{tr}(y_{i})\) holds for every \(y_{i}\in\widetilde{y}\) and \(\neg\mathsf{tr}(z)\), and \(\neg\epsilon\in\{\neg\,,\rightarrow\}\). The general case is defined similarly. We have:
\[\mathcal{V}_{\hat{x}}(\lambda(\widetilde{y},z):(\widetilde{S},C) \,\widetilde{\widetilde{\,}}\,.\,P) =\lambda(\widetilde{y^{1}},\ldots,\widetilde{y^{\widetilde{n}}}, \widetilde{z}):(\widetilde{M})\,\widetilde{\widetilde{\,}}\,.\,N\quad\text{ where:}\] \[\widetilde{M} =\mathcal{G}(S_{1}),\ldots,\mathcal{G}(S_{n}),\mathcal{G}(C)\] \[N =(\nu\,\widetilde{c})\,(\nu\,\widetilde{c}_{r})\,\prod_{i\in| \widetilde{y}|}c^{y_{i}}?(x).x\,\widetilde{y^{i}}\mid\overline{c_{1}!}( \widetilde{x})\mid\mathbb{E}_{\hat{x}}^{1}(P\{z_{i}/z\})\]
Every \(y_{i}\) (with \(y_{i}:S_{i}\)) is decomposed into \(\widetilde{y}^{i}=(y_{1},\ldots,y_{|\mathcal{G}(S_{i})|})\). We use \(C\) to decompose \(z\) into \(\widetilde{z}\). We abstract over \(\widetilde{y}^{1},\ldots,\widetilde{y}^{n},\widetilde{z}\); the body of the abstraction (i.e. \(N\)) is the composition of recursive names propagators, the control trio, and the breakdown of \(P\), with name index initialized with the substitution \(\{z\!\!\!/z\}\). For every \(y_{i}\in\widetilde{y}\) there is a server \(c^{bi}?(x).x\,\widetilde{y}^{i}\) as a subprocess in the abstracted composition--the rationale for these servers is as in previous cases. We restrict the propagators \(\widetilde{c}=(c_{1},\ldots,c_{|P|})\): this enables us to type the value in a shared environment when \(\rightsquigarrow\)\(=\)\(\rightarrow\). Also, we restrict special propagator names \(\widetilde{c}_{r}=\bigcup_{r\in\widetilde{v}}c^{r}\).
### The Decomposition by Example
We illustrate the decompositions by means of several examples.
#### 3.3.1 Decomposing Processes with Non-Recursive Names
**Example 3.7**.: Consider process \(P=(\nu\,u)\,(Q\mid R)\) whose body implements end-points of channel \(u\) with session type \(S=?(U);?(\mathsf{bool})\);end, with \(U=(?(\mathsf{bool});\mathsf{end})\)\(\rightsquigarrow\), where:
\[Q =u?(x).\overbrace{u?(y).(\nu\,s)\,\big{(}x\,\overline{s}\mid s! (y)\big{)}}^{Q^{\prime}}\] \[R =\overline{u}!(V).\overline{u}!(\mathsf{true}).\mathbf{0}\] \[V =\lambda z.\,z?(b).\mathbf{0}\]
The process \(P\) reduces as follows:
\[P \longrightarrow(\nu\,u)\,\left(u?(y).(\nu\,s)\,\big{(}V\, \overline{s}\mid s!(y)\big{)}\mid\overline{u}!(\mathsf{true}).\mathbf{0} \right)\longrightarrow(\nu\,s)\,\big{(}V\,\overline{s}\mid s!(\mathsf{true}) \big{)}\] \[\longrightarrow(\nu\,s)\,\big{(}\overline{s}?(b).\mathbf{0}\mid s! (\mathsf{true})\big{)}=P^{\prime}\]
By Definition 3.9 we have that the decomposition of \(P\) is as follows:
\[\mathcal{D}(P)=(\nu\,c_{1},\ldots,c_{10})\,(\overline{c_{1}}!(\rangle\mid \mathbb{E}_{\epsilon}^{1}(P\sigma))\]
where \(\sigma=\{u_{1}\overline{u}_{1}\!/u\overline{u}\}\). We have:
\[\mathbb{E}_{\epsilon}^{1}(P\sigma)=(\nu\,u_{1},u_{2})\,c_{1}?().\overline{c_ {2}}!(\rangle.\overline{c_{3}}!(\rangle\mid\mathbb{E}_{\epsilon}^{2}(Q\sigma )\mid\mathbb{E}_{\epsilon}^{8}(R\sigma)\]
The breakdowns of sub-processes \(Q\) and \(R\) are as follows:
\[\mathbb{E}_{\epsilon}^{2}(Q\sigma) =c_{2}?().u_{1}?(x).\overline{c_{3}}!(x)\mid\mathbb{E}_{\epsilon }^{3}(Q^{\prime}\sigma^{\prime})\] \[\mathbb{E}_{x}^{3}(Q^{\prime}\sigma^{\prime}) =c_{3}?(x).u_{2}?(y).\overline{c_{4}}!(x,y)\mid\mathbb{E}^{4}(( \nu\,s)\,\big{(}x\,\overline{s}\mid s!(y)\big{)}\big{)}\] \[\mathbb{E}_{x,y}^{4}((\nu\,s)\,\big{(}x\,\overline{s}\mid s!(y) \big{)}) =(\nu\,s_{1}\,\big{(}c_{4}?(x).\overline{c_{5}}!(x).\overline{c_{6}}!(y) \mid c_{5}?(x).x\,\overline{s}_{1}\mid c_{6}?(y).s_{1}!(y).\overline{c_{7}}!( \rangle\mid c_{7}?().\mathbf{0})\] \[\mathbb{E}_{\epsilon}^{8}(R\sigma) =c_{8}?(\overline{u}_{1}!(\vee_{\epsilon}\,)).\overline{c_{9}}!( \rangle\mid\mathbb{E}_{\epsilon}^{9}(u_{2}!(\mathsf{true}).\mathbf{0})\] \[\mathbb{E}_{\epsilon}^{9}(u_{2}!(\mathsf{true}).\mathbf{0}) =c_{9}?().\overline{u}_{2}!(\mathsf{true}).\overline{c_{10}}!( \rangle\mid c_{10}?().\mathbf{0}\] \[\mathbb{V}_{\epsilon}(V) =\lambda z_{1}.\,((\nu\,c_{1}^{V},c_{2}^{V})\,\overline{c_{1}^{V}}!(\rangle\mid c_{1}^{V}?().z_{1}?(b).\overline{c_{2}^{V}}!(\rangle\mid c_{2}^{V }?().\mathbf{0})\]
where \(\sigma^{\prime}=\{u_{2}\overline{u}_{2}/u\overline{u}\}\). By \(\mathcal{G}(-)\) from Definition 3.3 we decompose \(S\) into \(M_{1}\) and \(M_{2}\) given as follows:
\[M_{1}=?(\mathcal{G}(U));\mathsf{end}=?(U);\mathsf{end}\] \[M_{2}=?(\mathsf{bool});\mathsf{end}\]
Above we may notice that \(\mathcal{G}(U)=U\). We remark that \(\mathcal{D}(P)\) accordingly implements indexed names \(u_{1},u_{2}\) typed with \(M_{1},M_{2}\), respectively.
Let us inspect the reductions of \(\mathcal{D}(P)\). First, there are three synchronizations on \(c_{1},c_{2},\) and \(c_{8}\):
\[\mathcal{D}(P) \longrightarrow(\nu\,c_{2},\ldots,c_{10})\,(\nu\,u_{1},u_{2})\, \overline{c_{2}!}(\cdot).\overline{c_{8}!}(\cdot)\mid\mathcal{B}^{2}_{\epsilon }(Q\sigma)\mid\mathcal{B}^{8}_{\epsilon}(R\sigma)\] \[\longrightarrow^{2}(\nu\,c_{3},\ldots,c_{7},c_{9},c_{10})\, \overline{u_{1}!(2\epsilon)}.\overline{c_{3}!}(\cdot)\mid\mathcal{B}^{3}_{ \epsilon}(Q^{\prime}\sigma^{\prime})\] \[\mid\overline{u_{1}!(\!\vee_{\epsilon}(V)\!)}.\overline{c_{9}!}( \cdot)\mid\mathcal{B}^{9}_{\epsilon}(u_{2}!(\prime\mathsf{true}).\mathbf{0})=D^ {1}\]
After reductions on propagators, \(D^{1}\) is able to mimic the original synchronization on channel \(u\) (highlighted above). It is followed by two administrative reductions on \(c_{3}\) and \(c_{9}\):
\[D^{1} \longrightarrow(\nu\,c_{3},\ldots,c_{7},c_{9},c_{10})\,\overline{c _{3}!}(\cdot\!\vee_{\epsilon}(V))\mid c_{3}?(x).u_{2}?(y).\overline{c_{4}!}(x,y)\mid\mathcal{B}^{4}((\nu\,s)\,\big{(}x\,\overline{s}\mid s!\langle y\rangle \big{)})\mid\] \[\overline{c_{9}!}(\cdot)\mid c_{9}?(\cdot).\overline{u_{2}!}( \mathsf{true}).\overline{c_{10}!}(\cdot)\mid c_{10}?(\cdot).\mathbf{0}\] \[\longrightarrow^{2}(\nu\,c_{4},\ldots,c_{7},c_{10})\,\overline{u_ {2}!(y)}.\overline{c_{4}!}(\cdot\!\vee_{\epsilon}(V),y)\mid\] \[(\nu\,s_{1})\,\big{(}c_{4}?(x,y).\overline{c_{5}!}(x).\overline{c _{6}!}(y)\mid c_{5}?(x).x\,\overline{s}_{1}\mid c_{6}?(y).s_{1}!\langle y\rangle.\overline{c_{7}!}(\cdot)\mid c_{7}?(\cdot).\mathbf{0}\big{)}\mid\] \[\overline{u_{2}!(\mathsf{true})}.\overline{c_{10}!}(\cdot)\mid c_{ 10}?(\cdot).\mathbf{0}=D^{2}\]
Similarly, \(D^{2}\) can mimic the next synchronization of the original process on name \(u_{2}\). Following up on that, syncronization on \(c_{10}\) takes place:
\[D^{2} \longrightarrow^{2}(\nu\,c_{4},\ldots,c_{7})\,\overline{c_{4}!}( \cdot\!\vee_{\epsilon}(V),\mathsf{true})\mid\] \[(\nu\,s_{1})\,\big{(}c_{4}?(x,y).\overline{c_{5}!}(x).\overline{c _{6}!}(y)\mid c_{5}?(x).x\,\overline{s}_{1}\mid c_{6}?(y).s_{1}!\langle y \rangle.\overline{c_{7}!}(\cdot)\mid c_{7}?(\cdot).\mathbf{0}\big{)}=D^{3}\]
Now, we can see that the next three reductions on \(c_{4}\), \(c_{5}\), and \(c_{6}\) appropriately propagate values \(\mathcal{V}_{\epsilon}(V)\) and \(\mathsf{true}\) to the breakdown of sub-processes. Subsequently, value \(\mathcal{V}_{\epsilon}(V)\) is applied to name \(\overline{s}_{1}\):
\[D^{3} \longrightarrow(\nu\,c_{5},\ldots,c_{7})\,(\nu\,s_{1})\,\big{(} \overline{c_{5}!}(\cdot\!\vee_{\epsilon}(V)).\overline{c_{6}!}(\cdot\mathsf{ true})\mid c_{5}?(x).x\,\overline{s}_{1}\mid c_{6}?(y).s_{1}!\langle y \rangle.\overline{c_{7}!}(\cdot)\mid c_{7}?(\cdot).\mathbf{0}\big{)}\] \[\longrightarrow^{2}(\nu\,c_{7})\,(\nu\,s_{1})\,\big{(}\mathcal{V} _{\epsilon}(V)\,\overline{s}_{1}\mid s_{1}!\langle\mathsf{true}\rangle. \overline{c_{7}!}(\cdot)\mid c_{7}?(\cdot).\mathbf{0}\big{)}\] \[\longrightarrow(\nu\,c_{7})\,(\nu\,s_{1})\,\big{(}(\nu\,c_{1}^{V},c_{2}^{V})\,\overline{c_{1}^{V}}(\cdot)\mid c_{1}^{V}?(\cdot)\mid c_{1}^{V}?( \cdot).s_{1}?(b).\overline{c_{2}^{V}}(\cdot)\mid c_{2}^{V}?(\cdot).\mathbf{0} \big{)}\mid s_{1}!\langle\mathsf{true}\rangle.\overline{c_{7}!}(\cdot)\mid c_{7 }?(\cdot).\mathbf{0}=D^{4}\]
Finally, after syncronization on \(c_{1}^{V}\) we reach the process \(D^{5}\) that is clearly able to simulate \(P^{\prime}\), and its internal communication on the channel \(s\):
\[D^{4}\longrightarrow(\nu\,c_{7})\,(\nu\,s_{1})\,\big{(}(\nu\,c_{2}^{V})\,s_{1}? (b).\overline{c_{2}^{V}}!(\cdot)\mid c_{2}^{V}?(\cdot).\mathbf{0}\big{)}\mid s_{1 }!\langle\mathsf{true}\rangle.\overline{c_{7}!}(\cdot)\mid c_{7}?(\cdot). \mathbf{0}=D^{5}\]
\(\lhd\)
**Example 3.8** (Breaking Down Name-Passing).: Consider the following process \(P\), in which a channel \(m\) is passed, through which a boolean value is sent back:
\[P=(\nu\,u)\,(u!(\!\ulcorner\,m\,\urcorner\,).\overline{m}?(b)\mid\overline{u}?( \ulcorner\,x\urcorner\,).x!\langle\mathsf{true}\rangle)\]
After expanding the syntactic sugar of name-passing, we get a process \(P=(\nu\,u)\,(Q\mid R)\), where
\[Q =u!(V).\overline{m}?(y).(\nu\,s)\,(y\,s\mid\overline{s}!\langle \lambda b.\,\mathbf{0}\rangle) V=\lambda z.\,z?(x).(x\,m)\] \[R =\overline{u}?(y).(\nu\,s)\,(y\,s\mid\overline{s}!(W)) W=\lambda x.\,x!\langle W^{\prime}\rangle\text{ with }W^{\prime}=\lambda z.\,z?(x).(x\,\mathsf{true})\]
Note that to mimic the name-passing synchronization, we require exactly four reduction steps:
\[P\longrightarrow^{4}[\overline{m}?(b)\mid m!\langle\mathsf{true}\rangle] \longrightarrow^{4}\mathbf{0} \tag{19}\]
We will now investigate the decomposition of \(P\) and its reduction chain. First, we use Definition 3.6 to compute \(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
In Table 2 we have omitted substitutions that have no effect and trailing \(\mathbf{0}\mathbf{s}\). The first interesting action appears after synchronizations on \(c_{1}\), \(c_{2}\), and \(c_{8}\). At that point, the process will be ready to mimic the first action that is performed by \(P\), i.e., \(u_{1}\) will send \(\mathcal{V}_{\epsilon}(V)\), the breakdown of \(V\), from the breakdown of \(Q\) to the breakdown of \(R\). Next, \(c_{9}\) and \(c_{10}\) will synchronize, and \(\mathcal{V}_{\epsilon}(V)\) is passed further along, until \(s_{1}\) is ready to be applied to it in the breakdown of \(R\). At this point, we know that \(P\longrightarrow^{7}(\nu\,\widetilde{c})\,P^{\prime}\), where \(\widetilde{c}=(c_{3},\ldots,c_{12})\), and
\[P^{\prime} =\overline{c_{3}!}(\mid\,\varsigma_{5}?().\overline{m_{1}?}(y). \overline{c_{4}!}(y)\] \[\mid(\nu\,s_{1})\,(c_{4}?(y).\overline{c_{5}!}(y).\overline{c_{6 }!}(\mid\,\,c_{5}?(y).y\,s_{1}\mid c_{6}?().\overline{s_{1}!}(\!\setminus \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
**Example 3.9** (Decomposing Processes with Recursive Names (I)).: Let \(P=r?(x).r!\langle x\rangle.P^{\prime}\) be a process where \(r\) has type \(S=\mu\mbox{\rm t.?}(\mbox{\rm int})!\langle\mbox{\rm int}\rangle;\)t and \(r\in\mbox{\rm fn}(P^{\prime})\). By Definition 3.9 we have:
\[\mathscr{D}(P)=(\nu\,\widetilde{c})\,(\nu\,c^{r})\,\big{(}\overline{c_{1}!}( \langle\rangle\mid\mathscr{B}^{1}_{\epsilon}(P)\mid c^{r}?(x).x\,(r_{1},r_{2}) \big{)}\]
where \(\widetilde{c}=(c_{1},\ldots,c_{|P|})\). The control trio in the parallel composition provides a decomposition of \(r\) on name \(c^{r}\), which is _shared_. The decomposition \(\mathscr{B}^{1}_{\epsilon}(P)\) is defined as follows:
\[\mathscr{B}^{1}_{\epsilon}(P) =\overline{c_{1}!}(\langle\rangle\mid c_{1}?().c^{r}!(N_{1}) \mid c_{2}?(y).c^{r}!\langle N_{2}\rangle\mid\mathscr{B}^{3}_{\epsilon}(P^{ \prime})\big{)}\] \[N_{1} =\lambda(z_{1},z_{2}).\,z_{1}?(x).c^{r}?(x).x\,(z_{1},z_{2})\] \[N_{2} =\lambda(z_{1},z_{2}).\,z_{2}!(x).\overline{c_{3}!}(\langle \rangle.c^{r}?(x).x\,(z_{1},z_{2})\]
Each trio in \(\mathscr{B}^{1}_{\epsilon}(P)\) that mimics some action on \(r\) requests the sequence \(\tilde{r}\) from the server on \(c^{r}\). We can see that this request is realized by a higher-order communication: trios send abstractions (\(N_{1}\) and \(N_{2}\)) to the server; these abstractions contain further actions of trios and it will be applied to the sequence \(\tilde{r}\). Hence, the formal arguments for these values are meant to correspond to \(\tilde{r}\).
After two reductions (the trio activation on \(c_{1}\) and the communication on \(c^{r}\)), we have:
\[\mathscr{D}(P)\longrightarrow^{2}(\nu\,c_{2},\ldots,c_{|P|})\ r_{1}?(x). \overline{c_{2}!}\langle x\rangle.c^{r}?(x).x\,(r_{1},r_{2})\mid c_{2}?(y).c^ {r}!\langle N_{2}\rangle\mid\mathscr{B}^{3}_{\epsilon}(P^{\prime})=P_{1}\]
By synchronizing with the top-level server on \(c^{r}\), the bound names in \(N_{1}\) are instantiated with \(r_{1},r_{2}\). Now, the first trio in \(P_{1}\) is able to mimic the action on \(r_{1}\) that is followed by the activation of the next trio on \(c_{2}\). Then, the server on \(c^{r}\) gets reinstantiated making names \(r_{1},r_{2}\) available for future trios. The break down of the output action follows the same pattern. \(\lhd\)
**Example 3.10** (Decomposing Processes with Recursive Names (II)).: Let \(S=\mu\mbox{\rm t.?}(\mbox{\rm int});!\langle\mbox{\rm int}\rangle;\)t and \(T=\mu\mbox{\rm t.?}(\mbox{\rm bool});!\langle\mbox{\rm bool}\rangle;\)t, and define \(Q=V\,(u,v)\) as a process where \(u:S\) and \(v:T\), where \(V\) is some value of type \((S,T)\!\rightarrow\!\circ\). By Definition 3.9, the decomposition of \(Q\) is as in the previous example, except that now there are two servers, one for \(u\) and one for \(v\):
\[\mathscr{D}(Q) =(\nu\,c_{1}\widetilde{c})\,(\nu\,c^{u}c^{v})\,\big{(}c^{u}?(x).x \,(u_{1},u_{2})\mid c^{v}?(x).x\,(v_{1},v_{2})\mid\overline{c_{1}!}(\langle \rangle\mid\mathscr{B}^{1}_{\epsilon}(Q)\big{)}\] \[\mathscr{B}^{1}_{\epsilon}(Q) =c_{1}?(.)c^{u}!\langle\lambda(x_{1},x_{2}).\,c^{v}!\langle \lambda(y_{1},y_{2}).\,\mathscr{V}_{\epsilon}(V)\;(x_{1},x_{2},y_{1},y_{2}) \rangle\big{)}\]
with \(\tilde{c}=(c_{2},\ldots,c_{|Q|})\). Process \(Q\) is broken down in such a way that it communicates with both servers to collect \(\tilde{u}\) and \(\tilde{v}\). To this end, \(\mathscr{B}^{1}_{\epsilon}(Q)\) is a process in which abstractions are nested using output prefixes and whose innermost process is an application. After successive communications with multiple servers this innermost application will have collected all names in \(\tilde{u}\) and \(\tilde{v}\).
Observe that we use two nested outputs, one for each name with recursive types in \(Q\). We now look at the reductions of \(\mathscr{D}(Q)\) to analyze how the communication of nested abstractions allows us to collect all name sequences needed. After the first reduction along \(c_{1}\) we have:
\[\mathscr{D}(Q)\longrightarrow (\nu\,\tilde{c})\,(\nu\,c^{u}c^{v})\,\big{(}c^{u}?(x).x\,(u_{1}, u_{2})\mid c^{v}?(x).x\,(v_{1},v_{2})\mid\] \[c^{u}!\langle\lambda(x_{1},x_{2}).\,c^{v}!\langle\lambda(y_{1}, y_{2}).\,\mathscr{V}_{\epsilon}(V)\;(x_{1},x_{2},y_{1},y_{2})\rangle\big{)} \big{)}=R^{1}\]
From \(R^{1}\) we have a synchronization along name \(c^{u}\):
\[R^{1}\longrightarrow (\nu\,\tilde{c})\,(\nu\,c^{u}c^{v})\,\big{(}(\lambda(x_{1},x_{2}). \,c^{v}!\langle\lambda(y_{1},y_{2}).\,\mathscr{V}_{\epsilon}(V)\;(x_{1},x_{2}, y_{1},y_{2})\rangle)\,(u_{1},u_{2})\mid c^{v}?(x).x\,(v_{1},v_{2})\big{)}=R^{2}\]
Upon receiving the value, the server applies it to \((u_{1},u_{2})\), thus obtaining the following process:
\[R^{2}\longrightarrow (\nu\,\tilde{c})\,(\nu\,c^{u}c^{v})\,\big{(}c^{v}!\langle\lambda(y_ {1},y_{2}).\,\mathscr{V}_{\epsilon}(V)\;(u_{1},u_{2},y_{1},y_{2})\rangle\mid c^{ v}?(x).x\,(v_{1},v_{2})\big{)}=R^{3}\]
Up to here, we have partially instantiated name variables of a value with the sequence \(\tilde{u}\). Next, the first trio in \(R^{3}\) can communicate with the server on name \(c^{v}\):
\[R^{3}\longrightarrow (\nu\,\tilde{c})\,(\nu\,c^{u}c^{v})\,\big{(}\lambda(y_{1},y_{2}). \,\mathscr{V}_{\epsilon}(V)\;(u_{1},u_{2},y_{1},y_{2})\,(v_{1},v_{2})\big{)}\] \[\longrightarrow (\nu\,\tilde{c})\,(\nu\,c^{u}c^{v})\,\big{(}\mathscr{V}_{\epsilon} (V)\;(u_{1},u_{2},v_{1},v_{2})\big{)}\]
This completes the instantiation of name variables with appropriate sequences of names with recursive types. At this point, \(\mathscr{D}(Q)\) can proceed to mimic the application in \(Q\).
**Example 3.11** (Breakdown of Recursion Encoding).: We recall process \(\left\lceil P\right\rceil\) from Example 3.4:
\[\left\lceil P\right\rceil =a?(m).a!\langle m\rangle.(\nu\,s)\left(V\left(a,s\right)\mid\, \mathfrak{F}!(V)\right)\] \[V =\lambda(x_{a},y_{1}).y_{1}?(z_{x}).x_{a}?(m).x_{a}!\langle m \rangle.(\nu\,s)\left(z_{x}\left(x_{a},s\right)\mid\,\mathfrak{F}!(z_{x}). \mathbf{0}\right)\]
Here, bound name \(s\) is typed with \(S\), from Example 3.3, defined as:
\[S=\mu\mathfrak{t}.?((?(\mathsf{str})!!(\mathsf{str});\!\mathsf{end},\mathfrak{t })\!\rightarrow\!\circ);\mathsf{end}\]
We now analyze \(\mathcal{D}(\left\lfloor P\right\rceil)\) and its reduction chain. By Definition 3.6, we have \(\left\langle\left\lfloor P\right\rceil\right\rceil=7\). Then, we choose \(k=1\) and observe that \(\sigma=\left\{a_{1}\overline{a_{l}}/a\overline{a_{l}}\right\}\). Following Definition 3.9, we get:
\[\mathcal{D}(\left\lfloor P\right\rceil) =(\nu\,c_{1},\ldots,c_{7})\left(\nu\,c^{a}\right)(c^{a}?(x).x\left( a_{1},a_{2}\right)\mid\overline{c_{1}!}(\rangle\mid\mid\mid\mid\mid\mid\mid\mid\mid \mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid \mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid \mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid \mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid \mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid\mid \mid\
**Definition 3.10** (Decomposition of Environments).: Let \(\Gamma\), \(\Lambda\), and \(\Delta\) be typing environments. We define \(\mathcal{G}(\Gamma)\), \(\mathcal{G}(\Lambda)\), and \(\mathcal{G}(\Delta)\) inductively as follows:
\[\mathcal{G}(\Delta,u_{i}:S) =\mathcal{G}(\Delta),(u_{i},\dots,u_{i+|\mathcal{G}(S)|-1}): \mathcal{G}(S)\] \[\mathcal{G}(\Gamma,u_{i}:(U)) =\mathcal{G}(\Gamma),u_{i}:\mathcal{G}((U))\] \[\mathcal{G}(\Gamma,x:U) =\mathcal{G}(\Gamma),x:\mathcal{G}(U)\] \[\mathcal{G}(\Lambda,x:U) =\mathcal{G}(\Lambda),x:\mathcal{G}(U)\] \[\mathcal{G}(\emptyset) =\emptyset\]
**Lemma 3.1**.: _Let \(P\) be an indexed \(\mathsf{HO}\) process and \(V\) be a value._
1. _If_ \(\Gamma;\Lambda;\Delta\circ\Delta_{\mu}\vdash P\triangleright\circ\) _then_ \(\mathcal{G}(\Gamma_{1}),\Phi;\mathcal{G}(\Delta),\Theta\vdash\mathcal{E}_{ \widetilde{x}}^{k}(P)\triangleright\circ\)_, where:_ * \(k>0\)__ * \(\widetilde{r}=\text{dom}(\Delta_{\mu})\)__ * \(\Phi=\prod_{r\in\widetilde{r}}c^{r}:\langle\mathcal{R}^{*}(\Delta_{\mu}(r)) \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
We use higher-order bisimulation as a basis to give a formal definition of MST bisimulation in Section 4.2, which we will use as a notion of behavioral equivalence for comparing \(P\) and \(\mathcal{D}(P)\). In order to show that our decomposition is correct, in Section 4.3 we exhibit a bisimulation relation \(\,\mathcal{S}\,\) which relates a process and its decomposition, containing a number of intermediate pairs, working from a motivating example in Section 4.3.1. Finally, in Section 4.4 we show that \(\,\mathcal{S}\,\) is indeed an MST bisimulation.
### Behavioral Equivalence in \(\mathsf{HO}\) and its Limitations
Let us begin by recalling the notion of \(\mathsf{HO}\) bisimulation, defined in [18] to characterize contextual equivalence of \(\mathsf{HO}\) processes.
**Definition 4.1** (Definition 17 in [18]).: A typed relation \(\Re\) is an \(\mathsf{HO}\)_bisimulation_ if for all \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\ \Re\ \Gamma_{2};\Lambda_{2};\Delta_{2} \vdash Q_{1}\),
1. Whenever \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\stackrel{{(\nu \,\widetilde{m_{1}})\,n!(V_{1})}}{{\longmapsto}}\Lambda^{\prime}_{1};\Delta^{ \prime}_{1}\vdash P_{2}\) then there exist \(Q_{2}\), \(\Delta^{\prime}_{2}\), and \(\Lambda^{\prime}_{2}\) such that \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\stackrel{{(\nu \,\widetilde{m_{2}})\,n!(V_{2})}}{{\longmapsto}}\Lambda^{\prime}_{2};\Delta^{ \prime}_{2}\vdash Q_{2}\) where, for a fresh \(t\), \[\Gamma_{1};\Lambda_{1};\Delta^{\prime\prime}_{1}\vdash(\nu\,\widetilde{m_{1}}) (P_{2}\mid t\leftarrow_{\mathsf{H}}V_{1})\ \Re\ \Gamma_{2};\Lambda_{2};\Delta^{\prime\prime}_{2}\vdash(\nu\,\widetilde{m_{2}}) (Q_{2}\mid t\leftarrow_{\mathsf{H}}V_{2})\]
2. Whenever \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\stackrel{{\ell}}{{ \longmapsto}}\Lambda^{\prime}_{1};\Delta^{\prime}_{1}\vdash P_{2}\), with \(\ell\) not an output, then there exist \(Q_{2}\), \(\Lambda^{\prime}_{2}\), and \(\Delta^{\prime}_{2}\) such that \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\stackrel{{\ell}}{{ \longmapsto}}\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) and \(\Gamma_{1};\Lambda^{\prime}_{1};\Delta^{\prime}_{1}\vdash P_{2}\ \Re\ \Gamma_{2};\Lambda^{\prime}_{2};\Delta^{ \prime}_{2}\vdash Q_{2}\).
3. The symmetric cases of 1, 2.
The largest such bisimulation is called \(\mathsf{HO}\)_bisimilarity_, denoted by \(\approx^{\mathtt{H}}\).
There are two points worth highlighting in this definition. Firstly, the labeled transition system \(\stackrel{{\ell}}{{\longmapsto}}\) used in the definition of \(\approx^{\mathtt{H}}\) is what is called the _refined transition system_, different from the standard labeled transition system for the higher-order \(\pi\)-calculus. The idea behind the refined transition system is that we want to disallow arbitrary inputs \(P\stackrel{{\nu(V)}}{{\longmapsto}}P^{\prime}\); having to consider such transitions in the definition of bisimilarity is undesirable, because it involves input of an arbitrary (higher-order) value \(V\), making the definition very much non-local and ensuring that the bisimulations are very large. As it turns out, due to the typed nature of the system, it suffices to consider inputs of the processes of a very particular kind--_characteristic values_, defined based on the type.
Secondly, because the inputs are restricted in the refined LTS, there is some price to pay in the handling of the outputs. If an output action \(P_{1}\stackrel{{(\nu\,\widetilde{m_{1}})\,n!(V_{1})}}{{ \longmapsto}}P_{2}\) is matched by an output action \(Q_{1}\stackrel{{(\nu\,\widetilde{m_{2}})\,n!(V_{2})}}{{ \longmapsto}}Q_{2}\), then we need to ensure that that the output processes \(V_{1}\) and \(V_{2}\) are somehow related. We have to ensure this in the output clause, because on the receiving end transitions inputing values \(V_{1}\) or \(V_{2}\) might not even be considered. To that extent, we package the values \(V_{1}\) or \(V_{2}\) in _trigger processes_ (denoted \(t\leftarrow_{\mathtt{H}}V_{1}\) and \(t\leftarrow_{\mathtt{H}}V_{2}\)), which are defined based on the typing. We then make them part of the processes that are considered at the "next step" of the bisimulation.
This notion of \(\mathsf{HO}\) bisimilarity works for processes of the same type. For our case, we need to compare processes of different, but related types. To that extent we make several changes to the definition above. Firstly, during the decomposition a single name \(x\) in a source process is decomposed into a sequence of names \(x_{1},\ldots,x_{k}\) in the target process. So in the definition of MST bisimilarity we match an action on a name \(x\) with an action on an _indexed_ name \(x_{i}\). Secondly, such discrepancy between names might arise in input and output values. This also needs to be considered as part of the definition. For this, we need to accommodate the difference between characteristic values and trigger processes for MST and HO. In the next subsection we work out the details sketched above.
### MST Bisimilarity
In this section we define a generalized version of HO bisimilarity allowing for comparing MST and HO process terms. Our goal is to define _MST bisimilarity_ (denoted \(\approx^{\mathsf{M}}\)), a typed behavioral equivalence, which we give in Definition 4.9. To define \(\approx^{\mathsf{M}}\), we require some auxiliary definitions, in particular:
* A refined LTS on typed processes (Definition 4.5);
* A relation \(\bowtie\) on values (Definition 4.6) and on names (Definition 4.14);
* A revised notion of trigger processes (Definition 4.8).
Refined LTS and characteristic values.The idea behind defining the refined LTS is to restrict the input of arbitrary processes (values) and make the transition system image-finite (modulo names).
The _refined LTS_ for HO is defined in [18] in three layers. First comes the _untyped LTS_\(P\xrightarrow{\ell}P^{\prime}\), which describes reductions of untyped processes in the usual style of the LTS semantics for \(\pi\)-calculus. Secondly, there is a notion of the _environmental LTS_\((\Gamma_{1};\Lambda_{1};\Delta_{1})\xrightarrow{\ell}(\Gamma_{2};\Lambda_{2}; \Delta_{2})\), which describes reductions of typing environments. This LTS describes the way a typing context can evolve in accordance with its session types. On top of these layers there are notions of _refined environmental LTS_ and _refined LTS for processes_. The former restricts the environmental LTS to inputs on characteristic values, as we discussed in Section 4.1. Finally, the refined LTS for processes restricts the untyped LTS to those actions which are supported by the refined environmental LTS.
We follow this approach for defining the refined LTS for MST processes. Both the untyped LTS for processes and the environmental LTS for MST processes coincides with the same LTSs for HO (or, to be more precise, with its restriction to minimal session types). It remains, then, to define the refined environmental LTS for MST processes, with the idea that the refined LTS restricts inputs to the inputs on _minimal characteristic values_ and _minimal trigger values_.
**Definition 4.2** (Minimal trigger value).: Given a value type \(C\rightsquigarrow\diamond\) and fresh (indexed) name \(t_{1}\), the _minimal trigger value_ on \(t_{1}\) of type \(\mathcal{G}(C)\rightsquigarrow\diamond\) is defined as the abstraction
\[\lambda\widetilde{x}.\,t_{1}?(y).y\,\widetilde{x}\]
where \(\widetilde{x}=(x_{1},\ldots,x_{|\mathcal{G}(C)|})\).
**Definition 4.3** (Minimal characteristic values).: Let \(u\) be a name and \(i>0\). We define \(\langle-\rangle_{i}^{u}\) and \(\langle-\rangle\) on types as follows.
\[\begin{array}{l
\(\begin{array}{c}\left[\text{MRcv}\right]\\ \hline\underbrace{(\Gamma_{1};\Lambda_{1};\Delta_{1})\xrightarrow{n?(V)}(\Gamma_{2}; \Lambda_{2};\Delta_{2})}_{\left(V\equiv\langle L\rangle\right)\vee(V\equiv \lambda\widetilde{x}.\,t_{1}?(y).(y\,\widetilde{x}))}_{\left(\Gamma_{1}; \Lambda_{1};\Delta_{1}\right)\xrightarrow{n?(V)}_{\mathfrak{m}}(\Gamma_{2}; \Lambda_{2};\Delta_{2})}\end{array}\)
where \(\lambda\widetilde{x}.\,t_{1}?(y).(y\,\widetilde{x})\) is a minimal trigger value of type \(\mathcal{G}(C)\) (Definition 4.2).
Finally, the refined LTS for MST processes is just a combination of the untyped LTS with the refined environmental LTS:
**Definition 4.5** (Refined Lts).: The environmental refined LTS extends to the typed refined LTS on processes. We write \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\xleftarrow{\ell}_{\mathfrak{m} }\Lambda_{1}^{\prime};\Delta_{1}^{\prime}\vdash P_{2}\) when
* \(P_{1}\xrightarrow{\ell}P_{2}\), and
* \((\Gamma_{1};\Lambda_{1};\Delta_{1})\xleftarrow{\ell}_{\mathfrak{m}}(\Gamma_{ 2};\Lambda_{2};\Delta_{2})\).
We write \(\xleftarrow{\ell}_{\mathfrak{m}}\) for the weak version of the transition \(\xleftarrow{\ell}_{\mathfrak{m}}\). Notice that while the untyped LTS and the non-refined environmental LTS coincide with that of HO, the refinement that we impose on the environmental LTS is different from its HO counterpart. Specifically in Rule [MRcv] we take special care to use minimal characteristic processes \(\langle-\rangle\), instead of general HO characteristic process \(\{-\}_{\mathbf{c}}\) as defined in [18].
Relating trigger and characteristic values.As we mentioned earlier, the notion of bisimulation that we consider requires matching transitions of the source HO term with the transitions of the target MST term. However, the two transitions might differ on the inputs of characteristic values. We accommodate for that difference by establishing a relation between the trigger and characteristic values of HO and MST.
**Definition 4.6**.: We define the relation \(\bowtie\) between HO processes and indexed processes inductively as:
\[\frac{|\tilde{x}|=|\mathcal{G}(C)|}{\lambda x:C.\,t?(y).y\,x\bowtie\lambda \tilde{x}:\mathcal{G}(C).\,t_{1}?(y).y\,\tilde{x}}\qquad\underbrace{\{C \leadsto\diamond_{\mathbf{c}}\bowtie\langle C\leadsto\diamond\rangle\}}\]
where \(\lambda\widetilde{x}:\mathcal{G}(C).\,t_{1}?(y).(y\,\widetilde{x})\) is a minimal trigger value of type \(\mathcal{G}(C)\leadsto\diamond\) (Definition 4.2) and \(\{-\}_{\mathbf{c}}\) denotes the characteristic values defined in [18]. We write \(\lambda x:C.\,t_{1}?(y).y\,x\) to mean that value \(\lambda x.\,t_{1}?(y).y\,x\) is of type \(C\leadsto\diamond\).
Trigger processes and MST bisimilarity.Before we give the definition of MST bisimilarity, we establish the following notations:
**Definition 4.7** (Indexed name).: Given a name \(n\), we write \(\check{n}\) to either denote \(n\) or any indexed name \(n_{i}\), with \(i>0\).
**Definition 4.8** (Trigger process).: Given a value \(V\), a trigger process for a fresh (indexed) name \(t_{1}\) is defined as:
\[t_{1}\xleftarrow{{}_{\mathtt{H}}}V\xleftarrow{{}_{\mathtt{H}}}t_{1}?( \widetilde{x}).(V\,\,\widetilde{x})\]
where \(|\widetilde{x}|=|\widetilde{C}|\) for \(V:\widetilde{C}\leadsto\diamond\).
**Lemma 4.1**.: _If \(\Gamma;\Lambda;\Delta\vdash V\triangleright\widetilde{C}\leadsto\diamond\), then \(\Gamma;\Lambda;\Delta,t_{1}:?(\widetilde{C})\vdash t_{1}\xleftarrow{{}_{ \mathtt{H}}}V\triangleright\diamond\)._
Finally, we are ready to formally define MST bisimilarity.
**Definition 4.9** (Mst Bisimilarity).: A typed relation \(\Re\) is an _MST bisimulation_ if for all \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\,\,\Re\,\,\Gamma_{2};\Lambda_{2} ;\Delta_{2}\vdash Q_{1}\),
1. Whenever \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\vdash\frac{(\nu\,\widetilde{m_{1}})\,n! (V_{1})}{\Lambda_{1}^{\prime}}\,\Lambda_{1}^{\prime}\vdash P_{2}\) then there exist \(Q_{2}\), \(\Delta_{2}^{\prime}\), and \(\Lambda_{2}^{\prime}\) such that \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\nu\,\widetilde{m_{2 }}}_{\blacksquare}\Lambda_{2}^{\prime};\Delta_{2}^{\prime}\vdash Q_{2}\) where, for a fresh \(t\), \[\Gamma_{1};\Lambda_{1};\Delta_{1}^{\prime\prime}\vdash(\nu\,\widetilde{m_{1}})( P_{2}\mid t\xRightarrow{\mu}\,V_{1})\ \Re\ \Gamma_{2};\Lambda_{2};\Delta_{2}^{\prime\prime}\vdash(\nu\,\widetilde{m_{2}})( Q_{2}\mid\ddot{t}\xRightarrow{\mu}\,V_{2})\]
2. Whenever \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\xRightarrow{\nu\,\widetilde{m_{1 }}}_{\blacksquare}\Lambda_{1}^{\prime};\Delta_{1}^{\prime}\vdash P_{2}\) then there exist \(Q_{2}\), \(\Lambda_{2}^{\prime}\), and \(\Delta_{2}^{\prime}\) such that \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\nu\,\widetilde{m_{ 2}}}_{\blacksquare}\Lambda_{2}^{\prime},\Delta_{2}^{\prime}\vdash Q_{2}\) where \(V_{1}\bowtie V_{2}\) and \(\Gamma_{1};\Lambda_{1}^{\prime};\Delta_{1}^{\prime}\vdash P_{2}\ \Re\ \Gamma_{2};\Lambda_{2}^{\prime};\Delta_{2}^{ \prime}\vdash Q_{2}\),
3. The symmetric cases of \(1\) and \(2\).
The largest such bisimulation is called _MST bisimilarity_, denoted by \(\approx^{\mathsf{M}}\).
In all clauses, we use the refined LTS (Definition 4.5) and rely on notation \(\tilde{n}\) (Definition 4.7). In the output clause, we use the triggers (Definition 4.8). In the input clause, we use the relation \(\bowtie\) on values (Definition 4.6).
We discuss differences between MST bisimilarity and higher-order bisimilarity as defined in [18]. First, an action in \(P_{1}\) must be matched by an action on an indexed name in \(Q_{1}\), and refined LTS actions in \(P_{1}\) are matched by minimal refined LTS actions in \(Q_{1}\) (Definition 4.6). As a consequence of the latter, in the input case the observed values are not identical but related by \(\bowtie\) (Definition 4.6). In other words, whenever \(P_{1}\) receives a trigger or a characteristic value, then \(Q_{1}\) should receive their minimal counterparts (Definition 4.2 and Definition 4.3). Further, as names could be indexed on the right-hand side, the typing environments could differ for open processes, so the MST bisimilarity assumes different typing environments on both sides.
### The Bisimulation Relation
Our goal is to complement our static correctness result (Theorem 3.1) by proving the following statement about the decomposition of processes (Definition 3.9):
**Theorem 4.1**.: _Let \(P\) be an \(\mathsf{HO}\) process such that \(\Gamma;\Delta;\Lambda\vdash P\triangleright\diamond\). We have_
\[\Gamma;\Lambda;\Delta\vdash P\ \approx^{\mathsf{M}}\ \mathcal{G}(\Gamma); \mathcal{G}(\Lambda);\mathcal{G}(\Delta)\vdash\mathcal{D}(P)\]
To show that \(P\) and \(\mathcal{D}(P)\) are MST-bisimilar, we provide a concrete bisimulation relation \(\,\mathcal{S}\,\) that contains \((P,\mathcal{D}(P))\). Defining \(\,\mathcal{S}\,\) to be just the set of such pairs is, however, not going to work; instead, the relation \(\,\mathcal{S}\,\) should also contain pairs corresponding to "intermediate" states in which the process and its decomposition may get "desynchronized". Before we give the concrete definition of \(\,\mathcal{S}\,\) we look at an example, illustrating the need for such intermediate pairs.
#### 4.3.1 A Motivating Example
Consider the following process:
\[P_{1}=u?(t).v?(x).(\nu\,s:S)\,(u!\langle x\rangle.\mathbf{0}\mid t\,s\mid \,\overline{s}!\langle x\rangle.\mathbf{0})\mid\overline{v!}\langle V\rangle.\mathbf{0}\]
where \(u:?(\langle U_{t}\rangle);!\langle U_{V}\rangle;\mathsf{end}\) and \(v:S\) with \(S=?(U_{V})\), \(U_{t}=S\!\rightarrow\!\diamond\), and \(U_{V}\) is some shared value type, i.e. \(U_{V}=S_{V}\!\rightarrow\!\diamond\), for some session type \(S_{V}\). Further, \(V\) is some value, such that \(V=\lambda y:S_{V}.\,R\).
Thus, \(P_{1}\) is typed using the typing of its constituents:
\[\emptyset;\emptyset;\overline{v}:\overline{S}\vdash\overline{v!}\langle V \rangle.\mathbf{0}\triangleright\diamond\]
\[\frac{\emptyset;\emptyset;u:?(\langle U_{t}\rangle);!\langle U_{V}\rangle; \mathsf{end},v:S\vdash u?(t).v?(x).(\nu\,s:S)\,(u!\langle x\rangle.\mathbf{0} \mid t\,s\mid\,\overline{s}!\langle x\rangle.\mathbf{0})\triangleright\diamond}{ \emptyset;u:?(\langle U_{t}\rangle);!\langle U_{V}\rangle;\mathsf{end},v:S, \overline{v}:\overline{S}\vdash P_{1}\triangleright\diamond}\]
The decomposition of \(P_{1}\) is as follows:
\[\mathcal{D}(P_{1}) =(\nu\,\widetilde{c})\,\left(\overline{c_{1}!}(\rangle\mid\,|\, \mathbb{E}_{\epsilon}^{1}(P_{1})\right)\] \[=(\nu\,\widetilde{c})\,\left(\overline{c_{1}!}(\rangle\mid\,|\, \mathbb{E}_{1}^{1}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{ c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\! \mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\! \mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid \,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle \!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\! \mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid \,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\, \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid \,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{ 11}!}(\rangle\!\mid\,\overline{c_{11}!}(\rangle\!\overline{c_{11}!}(\rangle\!\mid \,\overline{c_{11}!}(\rangle\!\overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}( \overline{\overline{c_{11}}(\rangle\!\mid\overline{c_{11}!}(\rangle\rangle\!\mid \overline{c_{11}!}(\rangle\!\mid\,\overline{c_{11}!}(\overline{\overline{c_{11}}( \rangle\overline{c_{11}!}(\rangle\overline{c_{11}!}(\overline{\overline{c_{11}}( \rangle\overline{c_{11}!}(\overline{\overline{c1}_{1}!(\overline{c1}(\overline{c11}(\overline{11}( \overline{11(\rangle(\rangle\overline{1(\langle\rangle\langle\langle\rangle\rangle)1}( \overline{1)}(\overline{c_{11}(\overline{1)1}(\overline{c_{11}( \overline{1}(\overline{1)}(\overline{c_{1}(\overline{(\overline{(\langle\langle\rangle\langle\rangle\rangle\overline{1\!\overline{\!\!
possible transitions of \(P_{1}\) and \(Q_{1}\), denoted schematically in Figure 7. First, let us consider a possible (refined) transition of \(P_{1}\), an input on \(u\) of a characteristic value:
\[P_{1}\xrightarrow{u?(V_{C})}v?(x).(\nu\,s:S)\,(u!(x).\mathbf{0}\mid V_{C}\,s \mid\,\overline{s!}\langle x\rangle.\mathbf{0})\mid\overline{v!}\langle V \rangle.\mathbf{0}=P_{2}\]
where \(V_{C}=\{U_{t}\}_{\mathbf{c}}=\lambda y:S.\,y?(x^{\prime}).(!\langle\cdot\rangle.\mathbf{0}\mid x^{\prime}\,s^{\prime})\) is the _characteristic value_ of \(U_{t}\).1 Process \(Q_{1}\) can weakly match this input action on the indexed name \(u_{1}\). This input does not involve \(V_{C}\) but the _minimal_ characteristic value of type \(U_{t}\) (Definition 4.3). We have:
Footnote 1: We use blue to denote characteristic values and trigger processes that do no occur in the original process, but which are induced by the bisimilarities defined in [18].
\[Q_{1}\xrightarrow{\tau}Q_{1}^{\prime}\xrightarrow{\tau}Q_{1}^{ \prime\prime} \xrightarrow{u_{1}?(V_{C}^{m})}(\nu\,\widetilde{c}_{\bullet})\, \overline{c_{3}!}(V_{C}^{m})\mid c_{3}?(t).v_{1}?(x).\overline{c_{4}!}(t,x) \mid\overline{c_{11}!}\langle\rangle\] \[\mid(\nu\,s_{1})\,(c_{4}?(t,x).\overline{c_{5}!}\langle x\rangle. \overline{c_{7}!}(t,x)\mid c_{5}?(x).u_{2}!(x).\overline{c_{8}!}\langle\rangle \mid c_{6}?()\mid\] \[\mid c_{7}?(t,x).\overline{c_{8}!}\langle t\rangle.\overline{c_{9 }!}\langle x\rangle\mid c_{8}?(t).t.s_{1}\mid c_{9}?(x).\overline{s_{1}!} \langle x\rangle.\overline{c_{10}!}\langle\rangle\mid c_{10}?())\] \[\mid c_{11}?(0).\overline{v_{1}!}\langle V_{\epsilon}\rangle. \overline{c_{12}!}\langle\rangle\mid c_{12}?()=Q_{2}\]
where \(V_{C}^{m}=\langle U_{t}\rangle=\lambda(y_{1}).\,y_{1}?(x^{\prime}).(t_{1}!). \mathbf{0}\mid x^{\prime}\,\overline{s}^{\prime})\), with \(y_{1}:?(S)\), \(|\overline{s}^{\prime}|=|\mathcal{G}(S_{V})|\), and \(\widetilde{c}_{\bullet}=c_{3},\ldots,c_{12}\).
Hence, we should have \(P_{2}\;\;\mathcal{S}\;\;Q_{2}\). Observe that \(Q_{2}\) is not exactly the decomposition of \(P_{2}\). First, \(V_{C}^{m}\) is not the breakdown of \(V_{C}\). Second, \(V_{C}^{m}\) is not at the same position in \(Q_{2}\) as \(V_{C}\); the later being in the application position and the former being pushed through several propagators. Therefore, the relation \(\;\mathcal{S}\) needs to (1) relate \(V_{C}\) and \(V_{C}^{m}\) and (2) account for the fact that a value related to \(V_{C}\) and thus it needs to be propagated (as in \(Q_{2}\)). To address the first point, we establish a relation \(\boxtimes\) between characteristic values and their minimal counterparts. For the second point, we record this fact by "decomposing" the process as \(P_{2}=P_{2}^{\prime}\{V_{C}/t\}\), and propagating the information about this substitution when computing the set of processes that are related to \(P_{2}\).
The same considerations we mentioned also apply to the value \(V\), which is transmitted internally, via a synchronization:
\[P_{2}\xrightarrow{\tau}(\nu\,s)\,(u!\langle V\rangle.\mathbf{0}\mid V_{C}\, s\mid\,\overline{s!}\langle V\rangle.\mathbf{0})=P_{3}\]
Value \(V\) transmitted in \(P_{2}\) should be related to its corresponding breakdown \(\mathcal{V}_{\epsilon}(V)\), which should be propagated through the decomposition:
\[Q_{2}\xrightarrow{\tau_{j}}Q_{2}^{\prime}\xrightarrow{\tau}Q_{2}^{ \prime\prime}\xrightarrow{\tau}(\nu\,\widetilde{c}_{\bullet\bullet})\, \overline{c_{4}!}(V_{C}^{m},\mathcal{V}_{\epsilon}(V))\] \[\mid(\nu\,s_{1})\,(c_{4}?(t,x).\overline{c_{5}!}\langle x \rangle.\overline{c_{7}!}\langle t,x\rangle\mid c_{5}?(x).u_{2}!\langle x \rangle.\overline{c_{6}!}\langle\rangle\mid c_{6}?()\mid\] \[\mid c_{7}?(t,x).\overline{c_{8}!}\langle t\rangle.\overline{c_ {9}!}\langle x\rangle\mid c_{8}?(t).t\,s_{1}\mid c_{9}?(x).\overline{s_{1}!} \langle x\rangle.\overline{c_{10}!}\langle\rangle\mid c_{10}?())\mid\] \[\mid\overline{c_{12}!}\langle\rangle\mid c_{12}?()=Q_{3}\]
where \(\widetilde{c}_{\bullet\bullet}=c_{4},\ldots,c_{10},c_{12}\).
Now, in \(P_{3}\) we can observe the output of \(V\) along \(u\):
\[P_{3}\xrightarrow{u!(V)}(\nu\,s)\,(\mathbf{0}\mid V_{C}\,s\mid\,\overline{s! }\langle x\rangle.\mathbf{0})=P_{4}\]
Process \(Q_{3}\) mimics this action by sending the process \(\mathcal{V}_{\epsilon}(V)\) along name \(u_{2}\):
\[Q_{3}\xrightarrow{u_{2}!(\mathcal{V}_{\epsilon}(V))}(\nu\, \widetilde{c}_{\bullet})\,\overline{c_{7}!}\langle\mathcal{V}_{\epsilon}(V) \rangle\mid\overline{c_{6}!}\langle\rangle\mid c_{6}?()\mid\] \[\mid c_{7}?(t,x).\overline{c_{8}!}\langle t\rangle.\overline{c_ {9}!}\langle x\rangle\mid c_{8}?(t).t\,s_{1}\mid c_{9}?(x).\overline{s_{1}!} \langle x\rangle.\overline{c_{10}!}\langle\rangle\mid c_{10}?())=Q_{4}\]
where \(\widetilde{c}_{\ast}=c_{6},\ldots,c_{10}\). Following the definition of higher-order bisimilarity, we should have:
\[P_{4}\parallel t^{\prime}\leftarrow_{\mathbb{H}}V\;\;\mathcal{S}\;\;Q_{4} \parallel t^{\prime}_{1}\leftarrow_{\mathbb{H}}\mathcal{V}_{\epsilon}(V)\]
for a fresh \(t^{\prime}\), where we have used '\(\parallel\)' (rather than '\(\mid\)') to denote process composition: we find it convenient to highlight those sub-processes in parallel that originate from trigger and characteristic processes.
We can see that the trigger process for \(V\) on the left-hand side should be matched with a trigger process for the _breakdown_ of \(V\) on the right-hand side. Moreover, the definition of trigger processes should be generalized to polyadic values, as \(\mathcal{V}_{\epsilon}(V)\) could be polyadic (see Definition 4.8).
Let us briefly consider how \(P_{4}\parallel t^{\prime}\hookrightarrow_{\mathbb{H}}V\) evolves after due to the synchronization in sub-process \(V_{c}\,s\) within \(P_{4}\):
\[P_{4}\parallel t^{\prime}\hookrightarrow_{\mathbb{H}}V\xrightarrow{\tau}( \nu\,s)\,(s?(x^{\prime}).(t!\langle\rangle\mid x^{\prime}\,s^{\prime})\parallel \overline{s!}\langle V\rangle.\mathbf{0})\parallel t^{\prime}\hookrightarrow_{ \mathbb{H}}V=P_{6}\parallel t^{\prime}\hookrightarrow_{\mathbb{H}}V\]
We can see that \(Q_{4}\) can mimic this synchronization after a few administrative reductions on propagators:
\[Q_{4}\xrightarrow{\tau}(\nu\,c_{9}c_{10})\,\overline{c_{9}!}( \mathcal{V}_{\epsilon}(V))\mid s_{1}?(x^{\prime}).(t_{1}!\langle\rangle\mid x ^{\prime}\,\overline{s}^{\prime})\mid c_{9}?(x).\overline{s_{1}!}\langle x \rangle.\overline{c_{10}!}\langle\rangle\mid c_{10}?())\parallel t^{\prime} \hookrightarrow_{\mathbb{H}}\mathcal{V}_{\epsilon}(V)\] \[=Q_{6}\parallel t^{\prime}_{1}\hookrightarrow_{\mathbb{H}} \mathcal{V}_{\epsilon}(V)\]
Therefore, we need to have:
\[P_{6}\parallel t^{\prime}\hookrightarrow_{\mathbb{H}}V\ \ \mathcal{S}\ \ Q_{6}\parallel t^{\prime}_{1} \hookrightarrow_{\mathbb{H}}\mathcal{V}_{\epsilon}(V)\]
To ensure that this pair is in \(\,\mathcal{S}\,\), we introduce an auxiliary relation, denoted \(\diamond\) (Definition 4.15), which allows us to account for the sub-processes that originate from characteristic values or trigger processes (in blue). We need to account for them separately, because one of them is not the decomposition of the other. We thus decree:
\[s?(x^{\prime}).(t!\langle\rangle\mid x^{\prime}\,s^{\prime}) \diamond s_{1}?(x^{\prime}).(t_{1}!\langle\rangle\mid x^{\prime}\,\overline{s} ^{\prime})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad t^{ \prime}\hookrightarrow_{\mathbb{H}}\mathcal{V}\diamond t^{\prime}_{1} \hookrightarrow_{\mathbb{H}}\mathcal{V}_{\epsilon}(V)\]
Next, the synchronization on \(s\) in \(P_{6}\) is mimicked by \(Q_{6}\) with a synchronization on \(s_{1}\):
\[P_{6}\parallel t\hookrightarrow_{\mathbb{H}}V\xrightarrow{\tau}( t!\langle\rangle.\mathbf{0}\mid V\,s^{\prime})\parallel t^{\prime} \hookrightarrow_{\mathbb{H}}V=P_{8}\parallel t^{\prime}\hookrightarrow_{ \mathbb{H}}V\] \[Q_{6}\parallel t^{\prime}_{1}\hookrightarrow_{\mathbb{H}} \mathcal{V}_{\epsilon}(V)\xrightarrow{\tau}(\nu\,c_{10})\,(t_{1}!\langle \rangle\mid\mathcal{V}_{\epsilon}(V)\,\,\widetilde{s}^{\prime})\mid \overline{c_{10}!}\langle\rangle\mid c_{10}?())=Q_{8}\parallel t^{\prime}_{1} \hookrightarrow_{\mathbb{H}}\mathcal{V}_{\epsilon}(V)\]
Finally, we can see that after the output on the trigger name \(t\) there is an application that activates \(R\), the body of \(V\):
\[P_{8}\xrightarrow{t!\langle\rangle}V\,s^{\prime}\xrightarrow{ \tau}R\{s^{\prime}/y\}\] \[Q_{8}\xrightarrow{t!\langle\rangle}\mathcal{V}_{\epsilon}(V)\, \,\widetilde{s}^{\prime}\xrightarrow{\tau}(\nu\,\widetilde{c}_{**})\,\overline{c _{12}!}\langle\rangle\mid\mathbb{E}^{12}_{\epsilon}(R)\{\widetilde{s}^{ \prime}/\widetilde{y}\}\equiv\mathscr{D}(R\{s^{\prime}/y\})\]
We reached the point where we relate process \(R\{s^{\prime}/y\}\) with its decomposition \(\mathscr{D}(R\{s^{\prime}/y\})\). Hence, the remaining pairs in \(\,\mathcal{S}\,\) are obtained in the same way.
Key insights.We summarize some key insights from the example:
* A received value can either be a pure value or a characteristic value. In the former case, the pure value has to be related to its decomposition, but in the later case the value should be related to an MST characteristic value of the same type. We define the relation \(\boxtimes\) on values to account for this (Definition 4.13).
* Trigger processes mentioned in the output case of MST bisimilarity should be matched with their minimal counterparts, and the same applies to processes originating from such trigger processes. The relation \(\diamond\) accounts for this (see Definition 4.15).
* Any value in process \(P\) could have been previously received. The definition of \(\,\mathcal{S}\,\) takes this into account by explicitly relating processes with substitutions (see Definition 4.17). That is, for \(P\), it relates \(P^{\prime}\{\tilde{W}/\tilde{x}\}\) such that \(P^{\prime}\{\tilde{W}/\tilde{x}\}=P\). Here, the substitution \(\{\tilde{W}/\tilde{x}\}\) records values that should be propagated.
#### 4.3.2 The relation \(\mathcal{S}\)
In this section we give the definition of the relation \(\mathcal{S}\) (Definition 4.17), following the insights gathered from the example. More specifically, we define
* a relation \(\boxtimes\) on values, which includes the relation \(\bowtie\) from Definition 4.6, (Definition 4.13);
* a relation \(\diamond\) on processes, for relating characteristic and trigger processes with their MST counterparts, (Definition 4.15);
* a set \(\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P\big{)}\) of processes _correlated_ to a process \(P\big{\{}\tilde{W}\!/\!\tilde{x}\big{\}}\), (Table 3).
Because we will be working extensively with indexed processes, we will use the following function, which returns a set of all valid indexing substitutions for a list of names.
**Definition 4.10** (Indexed names substitutions).: Let \(\widetilde{u}=(a,b,r,\overline{r},r^{\prime},\overline{r}^{\prime},s, \overline{s},s^{\prime},\overline{s}^{\prime},\ldots)\) be a finite tuple of names, where \(a,b,\ldots\) denote shared names, \(r,\overline{r},r^{\prime},\overline{r}^{\prime},\ldots\) denote tail-recursive names, and \(s,\overline{s},s^{\prime},\overline{s}^{\prime},\ldots\) denote linear (non tail-recursive names). We write \(\mathsf{index}(\widetilde{u})\) to denote
\[\mathsf{index}(\widetilde{u})=\{a_{1},b_{1},r_{1},\overline{r}_{1},r_{1}^{ \prime},\overline{r}_{1}^{\prime},s_{i},\overline{s}_{i},s_{j}^{\prime}, \overline{s}_{j}^{\prime},\ldots/a,b,r,\overline{r},r^{\prime},s,\overline{s},s^{\prime},\ldots:i,j,\ldots>0\}\]
Any substitution \(\sigma\in\mathsf{index}(\mathsf{fn}(P))\) turns an \(\mathsf{HO}\) process \(P\) into an indexed process \(P\sigma\).
Correlated values.The main ingredient in defining the relation \(\mathcal{S}\) is the the set \(\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P\big{)}\), which contains processes _correlated_ to process \(P\) with a substitution \(\{\tilde{W}\!/\!\tilde{x}\}\). The substitution, as discussed above, denotes previously received values, and we assume that \(\mathtt{fv}(P)=\widetilde{x}\). Essentially, \(\mathcal{C}_{-}^{-}\big{(}-\big{)}\) computes a breakdown of \(P\big{\{}\tilde{W}\!/\!\tilde{x}\big{\}}\) in parallel with an activating trio, that mimics the original actions of \(P\) up to transitions on propagators. The activating trio propagates not the original values \(\tilde{W}\), but the values related to \(\tilde{W}\). To do that we introduce the set \(\mathcal{C}_{-}^{-}\big{(}V\big{)}\) of correlated values and the relation \(\boxtimes\) on values, which are defined mutually recursively in the three following definitions.
**Definition 4.11** (Broken down values).: Given a value \(V\), the set \(\mathcal{C}\big{(}V\big{)}\) is defined as follows:
\[\mathcal{C}\big{(}V\big{)}=\bigcup\big{\{}\mathcal{C}_{\tilde{x}}^{\tilde{W}} \big{(}V^{\prime}\big{)}:V=V^{\prime}\{\tilde{W}\!/\!\tilde{x}\}\text{ and }V^{\prime}\text{ is not a variable}\big{\}}\]
We extend \(\mathcal{C}\big{(}-\big{)}\) to work on a list of values \(\widetilde{V}\) component-wise, that is:
\[\mathcal{C}\big{(}V_{1},\ldots,V_{n}\big{)}=\{B_{1},\ldots,B_{n}:B_{i}\in \mathcal{C}\big{(}V_{i}\big{)}\text{ for }i\in 1\ldots n\}.\]
This way, the elements in \(\mathcal{C}\big{(}V\big{)}\) differ in the propagated values \(\widetilde{W}\). Consider the following example:
**Example 4.1**.: Let \(V=\lambda y.\,y!\langle V_{1}\rangle.y!\langle V_{2}\rangle.\mathbf{0}\). There are four possibilities of \(V^{\prime}\), \(\widetilde{W}\), and \(\widetilde{x}\) such that \(V=V^{\prime}\{\tilde{W}\!/\!\tilde{x}\}\). That is,
* \(V=V^{1}\{V_{1}V_{2}/x_{1}x_{2}\}\) where \(V_{1}=\lambda y.\,y!\langle x_{1}\rangle.y!\langle x_{2}\rangle.\mathbf{0}\)
* \(V=V^{2}\{V_{1}/x_{1}\}\) where \(V^{2}=\lambda y.\,y!\langle x_{1}\rangle.y!\langle V_{2}\rangle.\mathbf{0}\)
* \(V=V^{3}\{V_{2}/x_{2}\}\) where \(V^{3}=\lambda y.\,y!\langle V_{1}\rangle.y!\langle x_{2}\rangle.\mathbf{0}\)
* Finally, we can take the identity substitution \(\widetilde{W}=\epsilon\) and \(\widetilde{x}=\epsilon\).
Thus, we have \(\mathcal{C}\big{(}V\big{)}=\big{\{}\mathcal{C}_{x_{1}x_{2}}^{V_{1}V_{2}}\big{(} V^{1}\big{)},\ \mathcal{C}_{x_{1}}^{V_{1}}\big{(}V^{2}\big{)},\ \mathcal{C}_{x_{2}}^{V_{2}}\big{(}V^{3}\big{)},\ \mathcal{C}_{\epsilon}^{ \epsilon}\big{(}V\big{)}\big{\}}\).
**Definition 4.12**.: Given a value \(V\), the set \(\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}V\big{)}\), where \(\mathsf{fn}(V)=\widetilde{x}\) is defined as follows:
\[\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}V\big{)}=\big{\{}\mathcal{V}_{\tilde{ x}}\big{(}V\big{)}\{\tilde{B}/\!\tilde{x}\}\mid\widetilde{W}\boxtimes\widetilde{B} \big{\}}.\]
**Definition 4.13** (Relating values).: The relation \(\boxtimes\) on values (with indexed names) is defined as follows:
\[V_{1}\boxtimes V_{2}\iff\begin{cases}\exists V^{\prime}_{1},\,\sigma\in\mathsf{ index}(\mathsf{fn}(V^{\prime}_{1})).\,\,V_{1}=V^{\prime}_{1}\sigma\wedge V^{\prime}_{1} \bowtie V_{2}&\text{ if }V_{1}\text{ is a characteristic or a trigger value}\\ V_{2}\in\mathcal{C}\big{(}V_{1}\big{)}&\text{ otherwise.}\end{cases}\]
where \(\bowtie\) is the relation from Definition 4.6.
Thus, in the definition of \(\mathcal{C}^{\tilde{W}}_{\tilde{x}}\big{(}V\big{)}\), the value \(V\) is related to the triggered break down values with \(\widetilde{B}\) substituted for \(\widetilde{x}\) such that \(\widetilde{W}\boxtimes\widetilde{B}\).
Additionally, to define \(\mathcal{C}^{\tilde{W}}_{\tilde{x}}\big{(}-\big{)}\) for processes, we have to observe the behaviour of processes enclosed in the received trigger and characteristic values. Further, we have to observe the behaviour of trigger processes of shape \(t\hookrightarrow_{\mathbb{H}}V\). For this we need to define a relation \(\diamond\) on processes that contains pairs
\[(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
Correlated processes.Finally, we can use the introduced notions to define the set \(\mathcal{C}_{-}^{-}\big{(}-\big{)}\) of correlated processes. As mentioned, the set \(\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P\big{)}\) contains processes correlated to process \(P\) with a substitution \(\{\tilde{W}/\tilde{x}\}\). The definition of \(\mathcal{C}_{-}^{-}\big{(}-\big{)}\) is given in Table 3. Before looking into the details, we first describe how the \(\mathcal{C}_{-}^{-}\big{(}-\big{)}\) is used.
We introduce auxiliary notions for treating free (tail-recursive) names in processes.
**Definition 4.16** (Auxiliary Notions).: Let \(P\) be an \(\mathsf{HO}\) process.
* We write \(\mathsf{fpn}(P)\) to denote the set of free propagator names in \(P\).
* We define \(\mathsf{rfv}(P)\) to denote free tail-recursive names in values in \(P\).
* We define \(\mathsf{cr}(P)\) to denote free names of form \(c^{r}\) in \(P\).
* We define \(\mathsf{rfni}(P)\) such that \(r\in\mathsf{frni}(P)\) if and only if \((r_{i},\ldots,r_{j})\subseteq\mathsf{rn}(P)\) for some \(i,j>0\).
* Given \(r:S\) and \(\widetilde{r}=(r_{1},\ldots,r_{|\mathcal{G}(S)|})\), we write \(\mathcal{R}_{\tilde{v}}\) to denote the process \[\mathcal{R}_{\tilde{v}}=\prod_{r\in\tilde{v}}c^{r}\gamma(x).x\,\widetilde{r}\]
**Definition 4.17** (Relation \(\mathcal{S}\)).: Let \(P\{\tilde{W}/\tilde{x}\}\) be a well-typed process such that \(\mathsf{fn}(P)\cap\mathsf{fn}(\widetilde{W})=\emptyset\), and let the \(\mathcal{C}\)-set be as in Table 3. We define the relation \(\mathcal{S}\) as follows:
\[\mathcal{S} = \big{\{}\big{(}P\{\tilde{W}/\tilde{x}\},(\nu\,\widetilde{c}_{r} )\,(\nu\,\widetilde{c})\,R\big{)}:\ R\in\mathcal{C}_{\tilde{x}}^{\tilde{W} \sigma}\big{(}P\sigma\big{)}\] \[\text{with }\widetilde{u}=\mathsf{fn}(P\{\tilde{W}/\tilde{x}\} ),\ \sigma\in\mathsf{index}(\widetilde{u}),\ \widetilde{c}_{r}=\mathsf{cr}(R),\ \widetilde{c}=\mathsf{fpn}(R)\big{\}}\]
Now we describe the definition of \(\mathcal{C}_{-}^{-}\big{(}-\big{)}\) in Table 3. Essentially, \(\mathcal{C}_{-}^{-}\big{(}-\big{)}\) computes a breakdown of \(P\{\tilde{W}/\tilde{x}\}\) in parallel with an activating trio, that mimics the original actions of \(P\) up to transitions on propagators. This is done with the help of \(\mathcal{J}_{-}^{-}\big{(}-\big{)}\) (also given in Table 3), which computes a closure of a process with respect to \(\tau\)-transitions on propagators.
To define the \(\mathcal{C}\)-set we distinguish processes that do not appear in the given process, but that are composed in parallel by the clauses of MST bisimilarity (Definition 4.9). For this we use the following notions:
**Definition 4.18** (Trigger Collections).: We let \(H,H^{\prime}\) to range over _trigger collections_: processes of the form \(P_{1}\mid\cdots\mid P_{n}\) (with \(n\geq 1\)), where each \(P_{i}\) is a trigger process or a process that originates from a trigger or from a characteristic value.
**Example 4.2**.: Let \(H_{1}=t_{1}\leftarrow_{\texttt{H}}V\mid[C]^{u_{1}}\mid t_{2}!\langle u_{2} \rangle.\mathbf{0}\) where \(t_{1},t_{2},u_{1},u_{2}\) are channel names, \(V\) is a value, and \(C\) a channel type. Then, we could see that \(t_{2}!\langle u_{2}\rangle.\mathbf{0}\) originates from a characteristic value. Thus, \(H_{1}\) is a trigger collection.
Notice that we write \(P\) to denote a "pure" process that is not composed with a trigger collection. For processes with trigger collections, the following notation is relevant:
**Definition 4.19** (Process in parallel with a trigger or a characteristic process).: We write \(P\parallel Q\) to stand for \(P\mid Q\) where either \(P\) or \(Q\) is a trigger collection.
Now we can describe all the cases in the definitions of the \(\mathcal{J}\)-set and the \(\mathcal{C}\)-set in Table 3 (Page 37). Observe that the second and third columns in Table 3 are closely related: the third column lists side conditions for the definitions in the second column. Note that in each case we assume the substitution \(\rho=\{\tilde{W}/\tilde{x}\}\). We start with the cases for \(\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P\big{)}\):
**Parallel with a trigger collection:** The \(\mathcal{C}\)-set of \(Q_{1}\parallel Q_{2}\) is defined as:
\[\{R_{1}\parallel R_{2}:R_{1}\in\mathcal{C}_{\tilde{y}}^{W_{1}}\big{(}Q_{1}),\ R_{2}\in \mathcal{C}_{\tilde{w}}^{W_{2}}\big{(}Q_{2}\big{)}\}\]
By Definition 4.19, either \(Q_{1}\) or \(Q_{2}\) is a trigger collection. Notice that a composition
(where both \(Q_{1}\) and \(Q_{2}\) are "pure") is handled by \(\mathcal{J}\big{(}-\big{)}\), see below. We treat \(Q_{1}\parallel Q_{2}\) compositionally: we split the substitution into parts concerning \(Q_{1}\) and \(Q_{2}\), i.e., \(\{\tilde{W}/\!\tilde{x}\}=\{\tilde{W}_{\!\tilde{1}}/\!\tilde{y}\}\cdot\{\tilde{ W}_{\!\tilde{2}}/\!\tilde{w}\}\) such that \(\widetilde{y}=\mathtt{fv}(Q_{1})\) and \(\widetilde{w}=\mathtt{fv}(Q_{2})\), and relate it to a parallel composition whose components come from a corresponding \(\mathcal{C}\)-set.
**Restriction:**: The \(\mathcal{C}\)-set of \((\nu\,m:C)\,Q\) is inductively defined as:
\[\Big{\{}(\nu\,\widetilde{m}:\mathcal{G}(C))\;R:(\nu\,\widetilde{c}^{m})\,R\in \mathcal{C}^{W}_{\!\tilde{x}}\big{(}Q\sigma\big{)}\Big{\}}\]
where \(\sigma=\{m_{1}\overline{m_{1}}/\!m\overline{m}\}\) and \(\widetilde{m}=(m_{1},\ldots,m_{|\mathcal{G}(C)|})\) is the decomposition of \(m\) under \(C\). The elements are processes from the \(\mathcal{C}\)-set of \(Q\) with names \(\widetilde{m}\) restricted. In the case when restricted name \(m\) is a tail-recursive then we also restrict the special propagator names \(c^{m}\) and \(c^{\overline{m}}\) which appear in \(R\). Notice that the processes of the form \((\nu\,m)\,\big{(}Q_{1}\parallel Q_{2})\), which are induced by the output clause of MST bisimilarity, are treated in this case in the definition of \(\mathcal{C}\big{(}-\big{)}\).
**Pure process:**: The \(\mathcal{C}\)-set of a pure process \(Q\) is defined as follows:
\[\big{\{}\mathcal{R}_{\tilde{v}}\mid\overline{c_{k}}^{\intercal}\!(\widetilde{ B})\mid\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,} \mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,} \mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb{\, \,}\mathbb{\,}\mathbb{\,}\mathbb{\,}\mathbb
an identity substitution. We split \(\widetilde{W}\) into \(\widetilde{W}_{1}\) and \(\widetilde{W}_{2}\), associated to the emitted value \(V_{1}\) and the continuation \(Q\), respectively.
Instead of the emitted value \(V_{1}\) we consider values \(V_{2}\) that are \(\boxtimes\)-related to \(V_{1}\sigma\{\widetilde{W}_{1}/\tilde{y}\}\). This way, we uniformly handle cases when (i) \(V_{1}\) is a pure value, (ii) variable, and (iii) a characteristic value. In particular, if \(V_{1}\) is a pure value, the set \(\mathcal{C}_{\tilde{y}}^{\tilde{W}_{1}}\big{(}V_{1}\sigma\big{)}\) is included in all the values \(\boxtimes\)-related to \(V_{1}\sigma\{\widetilde{W}_{1}/\tilde{y}\}\).
Further, the propagator \(c_{k}\) actives the next trio with the values \(\widetilde{B}_{2}\) such that \(\widetilde{W}_{2}\boxtimes\widetilde{B}_{2}\): as \(\widetilde{W}_{2}\) denotes previously received values, we take a context of \(\boxtimes\)-related values. Again, received values could be either trigger and characteristic values (required to be observed by MST bisimilarity, cf. Definition 4.9) or pure values originated from internal actions. Again, by \(\boxtimes\) (Definition 4.13) we account for both cases.
In sub-case (ii), when \(u_{i}\) is a tail-recursive name, the elements are as follows:
\[\big{\{}c^{u\intercal}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
The first set contains intermediate processes emerging while collecting recursive names using synchronizations with recursive name providers. We can see that the body of the inner-most abstraction, \(Q_{l}\), is an application of \(V_{2}\) (such that \(V_{1}\{\tilde{W}/\tilde{y}\}\boxtimes V_{2}\)) to partially instantiated recursive names: \(l\) denotes that decompositions of first \(l-1\) recursive names are retrieved. The final tuple in arguments of \(Q_{l}\), \(\widetilde{m}=(u_{i},\ldots,u_{i+|\mathscr{G}(C)|-1})\), is a full decomposition of non-recursive (linear or shared) name \(u_{i}\). Just like in the previous cases, by taking \(V_{2}\) as a \(\boxtimes\)-related value to \(V_{1}\{\tilde{W}/\tilde{y}\}\), we uniformly handle all the three possibilities for \(V_{1}\) (pure value, variable, and characteristic value). In the first set, the first element is a process is ready to send an abstraction to an appropriate name provider, in order to retrieve the decomposition of \(l\)-th recursive name. The second element is a process that results from a communication of the first element with a provider: an application which will instantiate \(l\)-th recursive name in \(Q_{l}\). Finally, the second set contains application processes in which the decompositions of all \(n\) recursive names are gathered, and it is ready to mimic the silent action (application reduction) of the original process.
**Parallel composition:**: The \(\mathcal{J}\)-set of \(Q_{1}\mid Q_{2}\) is defined using two sets:
\[\begin{array}{l}\big{\{}\overline{c_{k}}!\langle\widetilde{B}_{1}\rangle. \overline{c_{k+l}}!\langle\widetilde{B}_{2}\rangle\mid\mathscr{E}_{\tilde{y}}^ {k}(Q_{1})\mid\mathscr{E}_{\tilde{z}}^{k+l}(Q_{2}):\widetilde{W}_{1}\boxtimes \widetilde{B}_{1},\widetilde{W}_{2}\boxtimes\widetilde{B}_{2}\big{\}}\\ \cup\\ \big{\{}(R_{1}\mid R_{2}):R_{1}\in\mathcal{C}_{\tilde{y}}^{\tilde{W}_{1}}(Q_{ 1}),R_{2}\in\mathcal{C}_{\tilde{z}}^{\tilde{W}_{2}}(Q_{2})\big{\}}\end{array}\]
The first set contains a control trio that is ready to activate the decomposition of the two components in parallel. Just like in the other cases, the control trio propagates values that are \(\boxtimes\)-related to \(\tilde{W}_{1}\) and \(\tilde{W}_{2}\). In order to close the set with respect to the \(\tau\)-actions on propagators, the second set contains the composition of processes drawn from the \(\mathcal{C}\)-sets of \(Q_{1}\) and \(Q_{2}\), with appropriate substitutions.
### Proving Operational Correspondence
Recall that we aim to establish Theorem 4.1. To that end, we prove that \(\,\mathcal{S}\,\) (Definition 4.17) is an MST bisimulation, by establishing two results:
* Lemma 4.6 covers the case in which the given process performs an action, which is matched by an action of the decomposed process. In terms of operational correspondence (see, e.g., [11]), this establishes _completeness_ of the decomposition.
* Lemma 4.7 covers the converse direction, in which the decomposed process performs an action, which is matched by the initial process. This established the _soundness_ of the decomposition.
For proving both operational completeness and soundness, we will need the following result. Following Parrow [21], we refer to prefixes that do not correspond to prefixes of the original process, i.e. prefixes on propagators \(c_{i}\), as _non-essential prefixes_. Then the relation \(\,\mathcal{S}\,\) is closed under reductions that involve non-essential prefixes.
**Lemma 4.3**.: _Given an indexed process \(P_{1}\{\tilde{W}/\tilde{x}\}\), the set \(\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P_{1}\big{)}\) is closed under \(\tau\)-transitions on non-essential prefixes. That is, if \(R_{1}\in\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P_{1}\big{)}\) and \(R_{1}\stackrel{{\tau}}{{\rightarrow}}R_{2}\) is inferred from the actions on non-essential prefixes, then \(R_{2}\in\mathcal{C}_{\tilde{x}}^{\tilde{W}}\big{(}P_{1}\big{)}\)._
Proof.: By the induction on the structure of \(P_{1}\). See Appendix C.1 for more details.
Operational completeness.We first consider transitions using the unrestricted and untyped LTS; in Lemma 4.6 we will consider transitions with the refined LTS.
**Lemma 4.4**.: _Assume \(P_{1}\{\tilde{W}/\tilde{x}\}\) is a process such that \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\triangleright \diamond\) with \(\mathsf{balanced}(\Delta_{1})\) and \(P_{1}\{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,Q_{1}\)._
1. _Whenever_ \(P_{1}\{\tilde{W}/\tilde{x}\}\xrightarrow{(\nu\,\widetilde{m}_{1})\,n!(V_{2})} P_{2}\) _, such that_ \(\overline{n}\not\in\mathsf{fn}(P_{1}\{\tilde{W}/\tilde{x}\})\)_, then there exist_ \(Q_{2}\) _and_ \(V_{2}\) _such that_ \(Q_{1}\xRightarrow{\nu\,\widetilde{m}_{2}}\,\mathsf{h}!(V_{2})\)__\(Q_{2}\) _and, for a fresh_ \(t\)_,_ \[(\nu\,\widetilde{m}_{1})(P_{2}\parallel t\leftrightarrow_{\mathbb{H}}V_{1}) \{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,(\nu\,\widetilde{m}_{2})(Q_{2} \parallel t_{1}\leftrightarrow_{\mathbb{H}}V_{2})\]
2. _Whenever_ \(P_{1}\{\tilde{W}/\tilde{x}\}\xrightarrow{n!(V_{1})}P_{2}\) _, such that_ \(\overline{n}\not\in\mathsf{fn}(P_{1}\{\tilde{W}/\tilde{x}\})\)_, then there exist_ \(Q_{2}\)_,_ \(V_{2}\)_, and_ \(\sigma\) _such that_ \(Q_{1}\xRightarrow{\nu\,\widetilde{m}_{2}}Q_{2}\) _where_ \(V_{1}\sigma\boxtimes V_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_,_
3. _Whenever_ \(P_{1}\xrightarrow{\tau}P_{2}\) _then there exists_ \(Q_{2}\) _such that_ \(Q_{1}\xRightarrow{\tau}Q_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_._
Proof.: By transition induction. See Appendix C.2 for more details.
The following statement builds upon the previous one to address the case of the typed LTS (Definition 4.5):
**Lemma 4.5**.: _Assume \(P_{1}\{\tilde{W}/\tilde{x}\}\) is a process and \(P_{1}\{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,Q_{1}\)._
1. _Whenever_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\} \xrightarrow{(\nu\,\widetilde{m}_{1})\,n!(V_{1})}\Lambda^{\prime}_{1};\Delta^ {\prime}_{1}\vdash P_{2}\) _then there exist_ \(Q_{2}\)_,_ \(V_{2}\)_,_ \(\Delta^{\prime}_{2}\)_, and_ \(\Lambda^{\prime}_{2}\) _such that_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\nu\,\widetilde{m} _{2}}\,\mathsf{h}!(V_{2})\)__\(\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _and, for a fresh_ \(t\)_,_ \[(\nu\,\widetilde{m}_{1})(P_{2}\parallel t\leftrightarrow_{\mathbb{H}}V_{1}) \{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,(\nu\,\widetilde{m}_{2})(Q_{2} \parallel t_{1}\leftrightarrow_{\mathbb{H}}V_{2})\]
2. _Whenever_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\} \xrightarrow{n^{\prime}(V_{1})}\Lambda^{\prime}_{1};\Delta^{\prime}_{1}\vdash P _{2}\) _then there exist_ \(Q_{2}\)_,_ \(V_{2}\)_,_ \(\sigma\)_,_ \(\Lambda^{\prime}_{2}\)_, and_ \(\Delta^{\prime}_{2}\) _such that_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\pi\,\widetilde{ 7}(V_{2})}\Lambda^{\prime}_{2},\Delta^{\prime}_{2}\vdash Q_{2}\) _where_ \(V_{1}\sigma\boxtimes V_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_,_
3. _Whenever_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\} \xrightarrow{\tau}\Lambda^{\prime}_{1};\Delta^{\prime}_{1}\vdash P_{2}\) _then there exist_ \(Q_{2}\)_,_ \(\Lambda^{\prime}_{2}\)_, and_ \(\Delta^{\prime}_{2}\) _such that_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\tau}\Lambda^{ \prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_._
Proof.: The proof uses results of Lemma 4.4. We consider the first case, the other two being similar.
By the definition of the typed LTS we have:
\[\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\} \tag{20}\] \[(\Gamma_{1};\emptyset;\Delta_{1})\xrightarrow{(\nu\,\widetilde{m}) \,n!(V)}(\Gamma_{1};\emptyset;\Delta_{2}) \tag{21}\]
By (21) we further have
\[\begin{array}{c}\Gamma,\Gamma^{\prime};\Lambda^{\prime};\Delta^{\prime} \vdash V\vdash U\qquad\qquad\Gamma^{\prime};\emptyset;\Delta_{j}\vdash m_{j} \triangleright U_{j}\qquad\qquad\overline{n}\not\in\mathsf{dom}(\Delta)\\ \Delta^{\prime}\backslash(\cup_{j}\Delta_{j})\subseteq(\Delta,n:S)\qquad \qquad\Gamma^{\prime};\emptyset;\Delta^{\prime}_{j}\vdash\overline{m}_{j} \triangleright U^{\prime}_{j}\qquad\qquad\Lambda^{\prime}\subseteq\Lambda\\ \hline(\Gamma;\Lambda;\Delta,s:!(U);S)\xrightarrow{(\nu\,\widetilde{m})\,n!(V)} (\Gamma,\Gamma^{\prime};\Lambda\backslash\Lambda^{\prime};(\Delta,n:S,\cup_{j} \Delta^{\prime}_{j})\backslash\Delta^{\prime})\end{array}\]
By (20) and the condition \(\overline{n}\not\in\mathsf{dom}(\Delta)\) we have \(\overline{n}\not\in\mathsf{fn}(P_{1}\{\tilde{W}/\tilde{x}\})\). Therefore, we can apply Item 1 of Lemma 4.4.
Finally, we are in a position to address the case of the refined typed LTS (Definition 4.5):
**Lemma 4.6**.: _Assume \(P_{1}\{\tilde{W}/\tilde{x}\}\) is a process and \(P_{1}\{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,Q_{1}\)._
1. _Whenever_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xrightarrow{ \left(\nu\,\widetilde{m_{1}}\right)n!(V_{1})}\Lambda_{1}^{\prime};\Delta_{1}^ {\prime}\vdash P_{2}\) _then there exist_ \(Q_{2}\)_,_ \(V_{2}\)_,_ \(\Delta_{2}^{\prime}\)_, and_ \(\Lambda_{2}^{\prime}\) _such that_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\left(\nu\, \widetilde{m_{2}}\right)n!(V_{2})}_{\blacksquare}\Lambda_{2}^{\prime};\Delta_{2}^ {\prime}\vdash Q_{2}\) _and, for a fresh_ \(t\)_,_ \[(\nu\,\widetilde{m_{1}})(P_{2}\parallel t\leftrightarrow_{\blacksquare}V_{1})\{ \tilde{W}/\tilde{x}\}\,\mathcal{S}\,(\nu\,\widetilde{m_{2}})(Q_{2}\parallel t_ {1}\leftrightarrow_{\blacksquare}V_{2})\]
2. _Whenever_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \widetilde{n^{\prime}(V_{1})}\Lambda_{1}^{\prime};\Delta_{1}^{\prime}\vdash P _{2}\) _then there exist_ \(Q_{2}\)_,_ \(V_{2}\)_,_ \(\Lambda_{2}^{\prime}\)_, and_ \(\Delta_{2}^{\prime}\) _such that_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\left\{\begin{array} []{c}\tilde{n}^{\prime}(V_{2})\\ \hline\end{array}\right\}_{\blacksquare}}\Lambda_{2}^{\prime},\Delta_{2}^{\prime} \vdash Q_{2}\) _where_ \(V_{1}\bowtie V_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_,_
3. _Whenever_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \Lambda_{1}^{\prime};\Delta_{1}^{\prime}\vdash P_{2}\) _then there exist_ \(Q_{2}\)_,_ \(\Lambda_{2}^{\prime}\)_, and_ \(\Delta_{2}^{\prime}\) _such that_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{\left\{\begin{array} []{c}\tilde{n}^{\prime}(V_{2})\\ \hline\end{array}\right\}_{\blacksquare}}\Lambda_{2}^{\prime};\Delta_{2}^{\prime} \vdash Q_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_._
Proof.: By case analysis of the transition label \(\ell\). It uses results of Lemma 4.5. We consider two cases: (i) \(\ell\equiv n?(V_{1})\) and (ii) \(\ell\not\equiv n?(V_{1})\).
1. Case \(\ell\equiv n?(V_{1})\). This case concerns Part (2) of the lemma. In this case we know \(P_{1}=n?(y).Q\). We have the following transition inference tree: \[\xRightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv} \right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{ \left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv} \right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{ \left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle \mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle\mathtt{Rcv}\right\rangle} \xrightarrow{\left\langle\mathtt{Rcv}\right\rangle}\xrightarrow{\left\langle
3. _Whenever_ \(Q_{1}\!\!\stackrel{{\tau}}{{\to}}Q_{2}\) _either (i)_ \(P_{1}\{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,Q_{2}\) _or (ii) there exists_ \(P_{2}\) _such that_ \(P_{1}\!\!\stackrel{{\tau}}{{\to}}\!\!P_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_._
Proof (Sketch).: By transition induction. See Appendix C.3 for more details.
**Lemma 4.8**.: _Assume \(P_{1}\{\tilde{W}/\tilde{x}\}\) is a process and \(P_{1}\{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,Q_{1}\)._
1. _Whenever_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xrightarrow{(\nu\,\widetilde{m _{2}})\,\pitchfork\{V_{2}\}}\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _then there exist_ \(P_{2}\)_,_ \(V_{1}\)_,_ \(\Delta^{\prime}_{1}\)_, and_ \(\Lambda^{\prime}_{1}\) _such that_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \frac{(\nu\,\widetilde{m_{1}})\,\pitchfork\{V_{1}\}}{\Lambda^{\prime}_{1}}; \Delta^{\prime}_{1}\vdash P_{2}\) _and, for a fresh_ \(t\)_,_ \[(\nu\,\widetilde{m_{1}})(P_{2}\parallel t\leftarrow_{\mathfrak{B}}V_{1})\{ \tilde{W}/\tilde{x}\}\,\mathcal{S}\,(\nu\,\widetilde{m_{2}})(Q_{2}\parallel t _{1}\leftarrow_{\mathfrak{B}}V_{2})\]
2. _Whenever_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xrightarrow{\tilde{n}^{2}(V_{2}) }\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _then there exist_ \(P_{2}\)_,_ \(V_{1}\)_,_ \(\sigma\)_,_ \(\Lambda^{\prime}_{1}\)_, and_ \(\Delta^{\prime}_{1}\) _such that_ \(\Gamma_{1};\Lambda_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \frac{n^{2}(V_{1})}{\Lambda^{\prime}_{1}},\Delta^{\prime}_{1}\vdash P_{2}\) _where_ \(V_{1}\sigma\boxtimes V_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_,_
3. _Whenever_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xrightarrow{\tau}\Lambda^{ \prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _then there exist_ \(P_{2}\)_,_ \(\Lambda^{\prime}_{1}\)_, and_ \(\Delta^{\prime}_{1}\) _such that_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xrightarrow {\tau}\Lambda^{\prime}_{1};\Delta^{\prime}_{1}\vdash P_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_._
**Lemma 4.9**.: _Assume \(P_{1}\{\tilde{W}/\tilde{x}\}\) is a process and \(P_{1}\{\tilde{W}/\tilde{x}\}\,\mathcal{S}\,Q_{1}\)._
1. _Whenever_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{(\nu\,\widetilde{m _{2}})\,\pitchfork\{V_{2}\}}\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _then there exist_ \(P_{2}\)_,_ \(V_{1}\)_,_ \(\Delta^{\prime}_{1}\)_, and_ \(\Lambda^{\prime}_{1}\) _such that_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \frac{(\nu\,\widetilde{m_{1}})\,\pitchfork\{V_{1}\}}{\Lambda^{\prime}_{1}}; \Delta^{\prime}_{1}\vdash P_{2}\) _and, for a fresh_ \(t\)_,_ \[(\nu\,\widetilde{m_{1}})(P_{2}\parallel t\leftarrow_{\mathfrak{B}}V_{1})\{ \tilde{W}/\tilde{x}\}\,\mathcal{S}\,(\nu\,\widetilde{m_{2}})(Q_{2}\parallel t _{1}\leftarrow_{\mathfrak{B}}V_{2})\]
2. _Whenever_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{(\mu\,\widetilde{m _{2}})\,\pitchfork\{V_{2}\}}\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _then there exist_ \(P_{2}\)_,_ \(V_{1}\)_,_ \(\Lambda^{\prime}_{1}\)_, and_ \(\Delta^{\prime}_{1}\) _such that_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \frac{n^{2}(V_{1})}{\Lambda^{\prime}_{1}},\Delta^{\prime}_{1}\vdash P_{2}\) _where_ \(V_{1}\bowtie V_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_,_
3. _Whenever_ \(\Gamma_{2};\Lambda_{2};\Delta_{2}\vdash Q_{1}\xRightarrow{(\mu\,\widetilde{m _{2}})\,\pitchfork\{V_{2}\}}\Lambda^{\prime}_{2};\Delta^{\prime}_{2}\vdash Q_{2}\) _then there exist_ \(P_{2}\)_,_ \(\Lambda^{\prime}_{1}\)_, and_ \(\Delta^{\prime}_{1}\) _such that_ \(\Gamma_{1};\Lambda_{1};\Delta_{1}\vdash P_{1}\{\tilde{W}/\tilde{x}\}\xRightarrow \Lambda^{\prime}_{1};\Delta^{\prime}_{1}\vdash P_{2}\) _and_ \(P_{2}\,\mathcal{S}\,Q_{2}\)_._
Summary.Together, Lemmas 4.6 and 4.7 imply that \(\,\mathcal{S}\,\) is an MST-bisimilarity. In summary, we have shown Theorem 4.1, i.e., that for any typed process \(P\), we have that
\[\Gamma;\Lambda;\Delta\vdash P\ \approx^{\mathfrak{M}}\ \ \mathcal{G}(\Gamma); \mathcal{G}(\Lambda);\mathcal{G}(\Delta)\vdash\mathcal{D}(P).\]
In this section we have defined a notion of MST bisimilarity, following the notion HO bisimilarity for non-minimal processes. Following the strategy of Parrow in the untyped setting, we defined a relation \(\,\mathcal{S}\,\) containing all pairs \((P,\mathcal{D}(P))\), which we proved to be an MST bisimulation.
## 5 Optimizations of the Decomposition
In this section we discuss two optimizations that can be applied to the decomposition process. These optimizations simplify the structure of the trios and the nature of the underlying communication discipline.
The first optimization replaces trios in the decomposition with _duos_ (i.e., processes with at most two sequential prefixes). The decomposition in Section 3 follows Parrow's approach in that it converts a process into a parallel composition of trios. The use of trios seems to be necessary in (plain) \(\pi\)-calculus; in our first optimization we show that, by exploiting the higher-order nature of communications in HO, the trios can be replaced by duos.
The second optimization replaces polyadic communications (sending and receiving several values at once) with monadic communications (sending and receiving only a single value per prefix). In the decomposition, we use polyadic communications in order to propagate dependencies through sub-processes. We show that the use of monadic communication prefixes is sufficient for that task.
From Trios to Duos.In the first optimization we replace trios with _duos_, i.e., processes with at most two sequential prefixes. This optimization is enabled by the higher-order nature of HO. In the translation we make of _thunk processes_, i.e., inactive processes that can be activated upon reception. We write \(\{\!\{P\}\!\}\) to stand for the thumb process \(\lambda x:\langle\texttt{end}\!\rightarrow\!\circ\rangle\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
The monadic break down function \(\mathbb{B}^{k}(-)\), given in Figure 9, simplifies the one in Table 1 by using only one parameter, namely \(k\). In Figure 9 we use \(\sigma\) to denote the subsequent substitution \(\mathsf{next}(u_{i})\), the same as in Table 1, and use \(\widetilde{m}\) to denote the breakdown \((u_{i},\ldots,u_{i+|\mathcal{G}(C)|-1})\) of the name \(u_{i}\).
The breakdown function \(\mathbb{B}^{k}(-)\) uses propagators \(c_{k}\) (\(k>0\)) for encoding sequentiality and dedicated propagators \(c_{x}\) for each variable \(x\). As propagators \(c_{k}\) now only serve to encode sequentiality, only dummy values are being communicated along these channels (see Remark 3.2).
Let us describe the breakdown of a process with an input prefix, as it illustrates the key points common to all the other cases. The breakdown \(\mathbb{B}^{k}(u_{i}?(x).Q)\) consists of a trio in parallel with the breakdown of the continuation \(\mathbb{B}^{k+1}(Q\sigma)\) with name \(c_{x}\) restricted. The trio is first activated on \(c_{k}\). This is followed by the prefix that mimics original input action on indexed name \(u_{i}\). Upon receiving value \(x\), two things will happen in parallel. First, the next trio will be activated on name \(c_{k+1}\)
Figure 8: Our monadic decomposition function \(\mathbb{D}(-)\), illustrated. As in Figure 4, nodes represent process states, ‘\(\|\)’ represents parallel composition of processes, black arrows stand for actions, and red arrows indicate synchronizations that preserve the sequentiality of the source process; also, blue arrows indicates synchronizations that propagate (bound) values.
Figure 9: Monadic breakdown of processes and values
Second, the value \(x\) received on \(u_{i}\) is propagated further by the dedicated process \(W_{x}^{\rightarrow}\).
The specific mechanism of propagation depends on whether a received value is linear (\(\rightsquigarrow\equiv\rightsquigarrow\)) or shared (\(\rightsquigarrow\equiv\rightsquigarrow\)). In the former case, we simply propagate a value along the _linear_ name \(c_{x}\) once. In the later case, we cannot propagate the value only once, because a shared variable can be used in multiple trios. Thus, \(W_{x}^{\rightarrow}\) implements a recursive mechanism that repeatedly sends a value on the _shared_ name \(c_{x}\). The recursion is encoded in the same way as in Example 3.4: action \(c_{x}!\langle x\rangle\) is enclosed in value \(V\) that gets appropriately duplicated upon a synchronization.
The breakdown function for values, \(\vee\left(-\right)\), is accordingly changed to invoke \(\mathbb{B}^{1}(-)\) for breaking down a function body.
For simplicity, we defined the decomposition of the output process using a subprocess with four prefixes. Alternatively, we could have used a decomposition that relies on two trios, by introducing abstraction passing as in the previous section.
Let us illustrate the monadic breakdown by the means of an example:
**Example 5.1** (Monadic Decomposition).: We again consider process \(P=\left(\nu\,u\right)\left(Q\mid R\right)\) as in Example 3.7 where:
\[Q =u?(x).\overbrace{u?(y).(\nu\,s)\left(x\,\overline{s}\mid s! \langle y\rangle\right)}^{Q^{\prime}}\] \[R =\overline{u}!\langle V\rangle.\overline{u!}\langle\mathsf{true} \rangle.\mathbf{0}\] \[V =\lambda z.\,z?(w).\mathbf{0}\]
Let us recall the reductions of \(P\):
\[P \longrightarrow u?(y).(\nu\,s)\left(V\,\overline{s}\mid s! \langle y\rangle\right)\mid\overline{u!}\langle\mathsf{true}\rangle. \mathbf{0}\longrightarrow\left(\nu\,s\right)\left(V\,\overline{s}\mid s! \langle\mathsf{true}\rangle\right)\] \[\longrightarrow\left(\nu\,s\right)\left(\overline{s}?(w). \mathbf{0}\mid s!\langle\mathsf{true}\rangle\right)=P^{\prime}\]
The monadic decomposition of \(P\) is as follows:
\[\mathbb{D}(P)=\left(\nu\,c_{1},\ldots,c_{10}\right)\left(\nu\,u_{1},u_{2} \right)\left(\overline{c_{1}!}\langle\rangle\mid\mathbb{B}^{1}(P\sigma)\right)\]
where \(\sigma=\left\{u_{1}\overline{u}_{1}\!/\!u\overline{u}\right\}\). We have:
\[\mathbb{B}^{1}(P\sigma)=c_{1}?().\overline{c_{2}!}\langle\rangle.\overline{c _{8}!}\langle\rangle\mid\mathbb{B}^{2}(Q\sigma)\mid\mathbb{B}^{8}(R\sigma)\]
where:
\[\mathbb{B}^{2}(Q\sigma) =\left(\nu\,c_{x}\right)\left(c_{2}?().u_{1}?(x).(\overline{c_{3}!}\langle\rangle\mid\overline{c_{x}!}\langle x\rangle\right)\mid\mathbb{B}^{3 }(Q^{\prime}\sigma^{\prime})\right)\] \[\mathbb{B}^{3}(Q^{\prime}\sigma^{\prime}) =\left(\nu\,c_{y}\right)\left(c_{3}?().u_{2}?().(\overline{c_{4}!}\langle\rangle\mid W_{y})\mid\mathbb{B}^{4}((\nu\,s)\left(x\,\overline{s} \mid s!\langle y\rangle\right))\right)\] \[\mathbb{B}^{4}((\nu\,s)\left(x\,\overline{s}\mid s!\langle y \rangle\right)) =\left(\nu\,s_{1}\right)c_{4}?(.\overline{c_{5}!}\langle\rangle. \overline{c_{6}!}\langle\rangle\mid c_{5}?(.)c_{x}?(.)x.\,x\,\overline{s}_{1}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad c_{6 }?().c_{y}?(y).s_{1}!(y).\overline{c_{7}!}\langle\rangle\mid c_{7}?().\mathbf{0}\] \[\mathbb{B}^{8}(R\sigma) =c_{8}?().\overline{u}_{1}!\langle\langle V\rangle\rangle. \overline{c_{9}!}\langle\rangle\mid\mathbb{B}^{9}(\overline{u_{2}!} \langle\mathsf{true}\rangle.\mathbf{0})\] \[\mathbb{B}^{9}(\overline{u}_{2}!\langle\mathsf{true}\rangle. \mathbf{0}) =c_{9}?(.\overline{u}_{2}!\langle\mathsf{true}\rangle.\overline{c _{10}!}\langle\rangle\mid c_{10}?().\mathbf{0}\] \[\vee\left(V\right) =\lambda z_{1}.\left(\nu\,c_{1}^{V},c_{2}^{V}\right)\overline{c_{1} ^{V}!}\langle\mid\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \
Now, the synchronization on \(u_{1}\) can take a place in \(D^{1}\) (on the prefixes highlighted above). We can see that value \(\vee(V)\) received on \(u_{1}\) can be propagated along \(c_{x}\) to a trio using it. Following up on that, propagators \(c_{3}\) and \(c_{9}\) are synchronized.
\[D^{1} \longrightarrow(\nu\,c_{3},\ldots,c_{7},c_{9},c_{10})\,(\nu\,c_{x} )\,\big{(}\overline{c_{3}!}\langle\,\rangle\ |\ c_{x}!\langle\vee(V)\,\rangle\ |\] \[\qquad|\,(\nu\,c_{y})\,\big{(}c_{3}?().u_{2}?().(\overline{c_{4}!} \langle\,\mid W\,|_{y}\,\rangle\ |\ \mathbb{B}^{4}((\nu\,s)\,\big{(}x\,\overline{s}\mid s! \langle y\rangle\big{)}\big{)}\big{)}\big{)}\ |\ \overline{c_{9}!}\langle\,\rangle\ |\ \mathbb{B}^{9}(\overline{u}_{2}! \langle\mathsf{true}\rangle.\mathbf{0})\] \[\longrightarrow^{2}(\nu\,c_{4},\ldots,c_{7},c_{10})\,(\nu\,c_{x} )\,\big{(}c_{x}!\langle\vee(V)\,\rangle\ |\] \[\qquad|\,(\nu\,c_{y})\,\big{(}\underline{\overline{u_{2}!}( \overline{y})}.(\overline{c_{4}!}\langle\,\mid W\,|_{y}\,\rangle\ |\ \mathbb{B}^{4}((\nu\,s)\,\big{(}x\,\overline{s}\mid s! \langle y\rangle\big{)}\big{)}\big{)}\big{)}\ |\ \overline{\overline{u_{2}!}(\mathsf{true}).c_{10}!} \langle\rangle\ |\ c_{10}?().\mathbf{0}=D^{2}\]
Similarly, \(D^{2}\) can mimic the synchronization on name \(u_{2}\). Again, this is followed by synchronizations on propagators.
\[D^{2} \longrightarrow(\nu\,c_{4},\ldots,c_{7},c_{10})\,(\nu\,c_{x})\, \big{(}c_{x}!\langle\vee(V)\,\rangle\ |\ (\nu\,c_{y})\,\big{(}\overline{c_{4}!}\langle\,\mid W _{y}\{\mathsf{true}/y\}\,|\ \mathbb{B}^{4}((\nu\,s)\,\big{(}x\,\overline{s}\mid s! \langle y\rangle\big{)}\big{)}\big{)}\big{)}\] \[\qquad|\ \overline{c_{10}!}\langle\,\rangle\ |\ c_{10}?(). \mathbf{0}\] \[\longrightarrow^{4}(\nu\,c_{7})\,(\nu\,c_{x})\,\big{(}c_{x}! \langle\vee(V)\,\rangle\ |\ (\nu\,c_{y})\,\big{(}W_{y}\{\mathsf{true}/y\}\,|\ (\nu\,s_{1})\,c_{x}?().x\,\overline{s}_{1}\] \[\qquad|\,c_{y}?(y).s_{1}!\langle y\rangle.c\overline{r}!\langle \cdot\rangle\ |\ c_{7}?().\mathbf{0})\big{)}=D^{3}\]
The subprocess \(W_{y}\{\mathsf{true}/y\}\) is dedicated to providing the value \(\mathsf{true}\) on a shared name \(c_{y}\). Specifically, it reduces as follow Its reductions are as follows:
\[W_{y}\{\mathsf{true}/y\}\longrightarrow^{2}c_{y}!\langle\mathsf{true}\rangle.W _{y}\{\mathsf{true}/y\}\]
In this example, the shared value received on \(y\) is used only once; in the general case, a process could use a shared value multiple times: thus there could be multiple trios requesting the shared value on \(c_{y}\).
With this information, we have the following reductions of the decomposed process:
\[D^{3} \longrightarrow^{2}(\nu\,c_{7})\,(\nu\,c_{x})\,\big{(}c_{x}! \langle\vee(V)\,\rangle\ |\ (\nu\,c_{y})\,\big{(}c_{y}!\langle\mathsf{true}\rangle.W_{y}\{\mathsf{true}/y\} \,|\ (\nu\,s_{1})\,c_{x}?().x\,\overline{s}_{1}\] \[\qquad|\ c_{y}?(y).s_{1}!\langle y\rangle.c\overline{r}!\langle \cdot\rangle\ |\ c_{7}?().\mathbf{0})\big{)}=D^{4}\]
In \(D^{4}\) a value for \(x\) is requested on name \(c_{x}\) before it is applied to name \(\overline{s}_{1}\). Similarly, a value for \(y\) is gathered by the communication on \(c_{y}\). These values are retrieved in two reductions steps as follows:
\[D^{4} \longrightarrow^{2}(\nu\,c_{7})\,(\nu\,s_{1})\,\vee(V)\ \overline{s}_{1}\ |\ s_{1}! \langle\mathsf{true}\rangle.\overline{c_{7}!}\langle\rangle\ |\ c_{7}?().\mathbf{0}\ |\ (\nu\,c_{y})\,W_{y}\{\mathsf{true}/y\}=D^{5}\]
We remark that \((\nu\,c_{y})\,W_{y}\{\mathsf{true}/y\}\) reduces to \((\nu\,c_{y})\,\overline{c_{y}!}\langle\mathsf{true}\rangle.W_{y}\{\mathsf{true}/y\}\) which is behaviorally equivalent to the inactive process.
Next, the application of the value is followed by the synchronization on propagator \(c_{1}^{V}\):
\[D^{5} \longrightarrow(\nu\,c_{7})\,(\nu\,s_{1})\,(\nu\,c_{1}^{V},c_{2}^{V})\,c _{1}^{V}\langle\rangle\ |\ (\nu\,c_{w})\,c_{1}^{V}?().\overline{s}_{1}?(w).(\overline{c_{2}^{V}}! \langle\,\mid W_{w}\,\rangle\ |\ c_{2}^{V}?()\mathbf{0}\] \[\qquad|\ s_{1}!\langle\mathsf{true}\rangle.\overline{c_{7}!} \langle\,\mid\,c_{7}?().\mathbf{0}\ |\ (\nu\,c_{y})\,W_{y}\{\mathsf{true}/y\}\] \[\longrightarrow(\nu\,c_{7})\,(\nu\,s_{1})\,(\nu\,c_{2}^{V})\,(\nu\,c _{w})\,\overline{s}_{1}?(w).(\overline{c_{2}^{V}}!\langle\rangle\ |\ W_{w}\,\rangle\ |\ c_{2}^{V}?()\mathbf{0}\] \[\qquad|\ s_{1}!\langle\mathsf{true}\rangle.\overline{c_{7}!} \langle\,\mid\,c_{7}?().\mathbf{0}\ |\ (\nu\,c_{y})\,W_{y}\{\mathsf{true}/y\}=D^{6}\]
Here, we can see that \(D^{6}\) can simulate \(P^{\prime}\), and its internal communication on the channel \(s\).
## 6 Extension with Labeled Choice
In this section we discuss how to extend our approach to include sessions with _selection_ and _branching_ - constructs which are used commonly in session types to express deterministic choices. Forgoing formal proofs, we illustrate by examples how to harness the expressive power of abstraction-passing to decompose these constructs at the process level. First, we demonstrate how to break down
selection and branching constructs in absence of recursion in Section 6.1. Then, in Section 6.2 we explore the interplay of recursion and labeled choice, as it requires special attention. Finally, in Section 6.3 we sketch how the operational correspondence proof can be adapted to account for branching and selection.
Let us briefly recall the labeled choice constructs in HO, following [17]. On the level of processes, selection and branching are modeled using labeled choice:
\[P,Q\ ::=\ \ldots\ \ |\ \ u\triangleleft l.P\ \ |\ \ u\triangleright\{l_{i}:P_{i}\}_{i \in I}\]
The process \(u\triangleleft l.P\) selects the label \(l\) on channel \(u\) and then proceeds as \(P\). The process \(uv\triangleright\{l_{i}:P_{i}\}_{i\in I}\) receives a label on the channel \(u\) and proceeds with the continuation branch \(P_{i}\) based on the received label. Selection and branching constructs can synchronize with each other, as represented in the operational semantics by the following reduction rule:
\[u\triangleleft l_{j}.Q\ |\ \overline{u}\triangleright\{l_{i}:P_{i}\}_{i\in I} \longrightarrow Q\ |\ P_{j}\ \ \ \ \ \ \ \ (j\in I)\ \ [\text{Sel}]\]
At the level of types, selection and branching are represented with the following types:
\[S\ ::=\ \ldots\ \ |\ \ \oplus\{l_{i}:S_{i}\}_{i\in I}\ \ |\ \&\{l_{i}:S_{i}\}_{i\in I}\]
The _selection type_\(\oplus\{l_{i}:S_{i}\}_{i\in I}\) and the _branching type_\(\&\{l_{i}:S_{i}\}_{i\in I}\) are used to type, respectively, the selection and branching process constructs. Note the implicit sequencing in the sessions involving selection and branching: the exchange of a label \(l_{i}\) precedes the execution of one of the stipulated protocol \(S_{i}\). The typing rules for type-checking branching and selection processes are given in Figure 10.
Given these process constructs and types, what are the minimal versions of the session types with labeled choice? We do not consider branching and selection as atomic actions as their purpose is to make a choice of a stipulated protocol. In other words, it is not meaningful to type a channel with branching type in which all protocols are end. Thus, we extend the minimal syntax types Definition 3.1 with branching and selection constructs as follows:
\[M\ ::=\ \ldots\ \ |\ \ \oplus\{l_{i}:M_{i}\}_{i\in I}\ \ |\ \&\{l_{i}:M_{i}\}_{i\in I}\]
That is, MSTs also include branching and selection types with MSTs nested in branches.
Next we explain our strategy for extending the breakdown function to account for selection and branching.
### Breaking Down Selection and Branching
Notice that in a branching process \(u\triangleright\{l_{i}:P_{i}\}_{i\in I}\) each subprocess \(P_{i}\) can have a different session with a different degree. Abstraction-passing allows to uniformly handle these kinds of processes. We extend the breakdown function in Definition 3.3 to selection and branching as follows:
\[\mathcal{G}(\&\{l_{i}:S_{i}\}_{i\in I}) =\&\{l_{i}:!(\mathcal{G}(S_{i})\!-\!\circ\!)\}_{i\in I}\] \[\mathcal{G}(\oplus\{l_{i}:S_{i}\}_{i\in I}) =\oplus\{l_{i}:?(\mathcal{G}(\overline{S_{i}})\!-\!\circ\!)\}_{i \in I}\]
This decomposition follows the intuition that branching and selection correspond to the input and output of labels, respectively. For example, in the case of branching, once a particular branch
Figure 10: Typing rules for selection and branching.
has been selected, we would like to input names on which to provide sessions from the branch \(\mathcal{G}(S_{i})\). In our higher-order setting, we do not input or output names directly. Instead, we send out an abstraction of the continuation process, which binds those names. It is then the job of the (complementary) selecting process to activate that abstraction with the names we want to select.
To make this more concrete, let us consider decomposition of branching and selection at the level of processes through the following extended example.
**Example 6.1**.: Consider a mathematical \(\mathcal{G}\) that offers clients two operations: addition and negation of integers. The server uses name \(u\) to implement the following session type:
\[S=\&\{\mathtt{add}:\underbrace{?(\mathtt{int})!?(\mathtt{int})!(\mathtt{int}); \mathtt{end}}_{S_{\mathtt{add}}}\,\ \mathtt{neg}:\underbrace{?(\mathtt{int})!(\mathtt{int}); \mathtt{end}}_{S_{\mathtt{neg}}}\}\]
The branches have session types with different lengths: one receives two integers and sends over their sum, the other has a single input of an integer followed by an output of its negation. Let us consider a possible implementation for the server \(Q\) and for a client \(R\) that selects the first branch to add integers \(16\) and \(26\):
\[Q \triangleq u\triangleright\{\mathtt{add}:Q_{\mathtt{add}},\ \mathtt{ neg}:Q_{\mathtt{neg}}\} R\triangleq\overline{u}\triangle\mathtt{add}.\overline{u}!\langle \mathbf{16}\rangle.\overline{u}!\langle\mathbf{26}\rangle.\overline{u}?(r)\] \[Q_{\mathtt{add}} \triangleq u?(a).u?(b).u!\langle a+b\rangle\] \[Q_{\mathtt{neg}} \triangleq u?(a).u!\langle-a\rangle\]
The composed process \(P\triangleq(\nu\,u)\,(Q\mid R)\) can reduce as follows:
\[P\ \longrightarrow\ (\nu\,u)\,(u?(a).u?(b).u!\langle a+b\rangle\mid \overline{u!}\langle\mathbf{16}\rangle.\overline{u!}\langle\mathbf{26}\rangle. \overline{u}?(r))\ \longrightarrow^{2}\ (\nu\,u)\,(u!\langle\mathbf{16}+\mathbf{26}\rangle\mid \overline{u}?(r))=P^{\prime}\]
Let us discuss the decomposition of \(P\). First, the decomposition of \(S\) is the minimal session type \(M\), defined as follows:
\[M=\mathcal{G}(S)=\&\{\mathtt{add}:!\langle\big{(}?(\mathtt{int}),?(\mathtt{int}),!(\mathtt{int}) \big{)}\!\rightarrow\!\circ\rangle,\] \[\mathtt{neg}:!\langle\big{(}?(\mathtt{int}),!(\mathtt{int})\big{)}\! \rightarrow\!\circ\rangle\}\]
Following Definition 3.9, we decompose \(P\) as follows:
\[\mathcal{D}(P)=(\nu\,c_{1}\ldots c_{7})\,\big{(}\overline{c_{1}}!\langle\ \rangle\mid(\nu\,u_{1})\,(c_{1}?().\overline{c_{2}}!\langle\rangle.\overline{c_ {3}}!\langle\ \rangle\mid\mathbb{\wp}_{\epsilon}^{2}\,(Q\sigma_{2})\ \mid\mathbb{\wp}_{\epsilon}^{3}\,(R\sigma_{2}))\big{)}\]
where \(\sigma_{2}=\{u_{1}\overline{u_{1}}\!/u\overline{u}\}\). The breakdown of the server process \(Q\), which implements the branching, is as follows:
\[\mathbb{\wp}_{\epsilon}^{2}\,(Q\sigma_{2})=c_{2}?().u_{1} \triangleright\{\mathtt{add}: u_{1}!\langle\underbrace{\lambda(y_{1},y_{2},y_{3}).\,(\nu\,c_{1} ^{V}\ldots c_{4}^{V})\,\overline{c_{1}^{V}}!()\ \mid\mathbb{\wp}_{\epsilon}^{1}\,(Q_{\mathtt{add}}\{y_{1} \!/u\})\sigma_{V}}_{V}\rangle,\] \[\mathtt{neg}: u_{1}!\langle\underbrace{\lambda(y_{1},y_{2}).\,(\nu\,c_{1} ^{W}\ldots c_{3}^{W})\,\overline{c_{1}^{W}}!()\ \mid\mathbb{\wp}_{\epsilon}^{1}\,(Q_{\mathtt{neg}}\{y_{1} \!/u\})\sigma_{W}}_{W}\rangle\}\]
where:
\[\mathbb{\wp}_{\epsilon}^{1}\,(Q_{\mathtt{add}}\{y_{1}\!/u\}) =c_{1}?().y_{1}?(a).\overline{c_{2}}!\langle a\rangle\mid c_{2}?(a). y_{2}?(b).\overline{c_{3}}!\langle a,b\rangle\mid c_{3}?(a,b).y_{3}!(a+b). \overline{c_{4}}!()\ \mid c_{4}?()\] \[\mathbb{\wp}_{\epsilon}^{1}\,(Q_{\mathtt{neg}}\{y_{1}\!/u\}) =c_{1}?().y_{1}?(a).\overline{c_{2}}!\langle a\rangle\mid c_{2}?(a). y_{2}!(-a).\overline{c_{3}}!()\ \mid c_{3}?()\]
with \(\sigma_{V}=\{c_{1}^{V},\ldots,c_{4}^{V}/c_{1},\ldots,c_{4}\}\) and \(\sigma_{W}=\{c_{1}^{W},c_{2}^{W},c_{3}^{W}/c_{1},c_{2},c_{3}\}\). In process \(\mathbb{\wp}_{\epsilon}^{2}\,(Q\sigma_{2})\), name \(u_{1}\) implements the minimal session type \(M\). Following the common trio structure, the first prefix awaits activation on \(c_{2}\). The next prefix mimics the branching action of \(Q\) on \(u_{1}\). Then, each branch consists of the output of an abstraction along \(u_{1}\). This output does not have a counterpart in \(Q\); it is meant to synchronize with process \(\mathbb{\wp}_{\epsilon}^{3}\,(R\sigma_{2})\), the breakdown of the corresponding selection process (see below).
The abstractions sent along \(u_{1}\) encapsulate the breakdown of subprocesses in the two branches (\(Q_{\mathsf{add}}\) and \(Q_{\mathsf{neg}}\)). An abstraction in the branch has the same structure as the breakdown of a value \(\lambda y:C^{\rightarrow}.\,P\) in Table 1: it is a composition of a control trio and the breakdown of a subprocess; the generated propagators are restricted. In the first branch the server needs three actions to perform the session, and in the second branch the server needs to perform two actions. Because of that the first abstraction binds three names \(y_{1},y_{2},y_{3}\), and the second abstraction binds two names \(y_{1},y_{2}\).
In the bodies of the abstractions we break down \(Q_{\mathsf{add}}\) and \(Q_{\mathsf{neg}}\), but not before adjusting the names on which the broken down processes provide the sessions. For this, we substitute \(u\) with \(y_{1}\) in both processes, ensuring that the broken down names are bound by the abstractions. By binding decomposed names in abstractions we account for different session types of the original name in branches, while preserving typability: this way the decomposition of different branches can use (i) the same names but typed with different minimal types and (ii) a different number of names, as it is the case in this example.
The decomposition of the client process \(R\), which implements the selection, is as follows:
\[\mathbb{B}_{\epsilon}^{3}(R\sigma_{2}) = (\nu\,u_{2},u_{3},u_{4})\,c_{3}?().\overline{u_{1}}\triangleleft \mathsf{add}.\overline{u_{1}}?(z).\overline{c_{4}}!(\rangle\rangle.z\,(u_{2}, u_{3},u_{4})\mid\mathbb{B}_{\epsilon}^{4}(\overline{u}_{2}!\langle\mathbf{16} \rangle.\overline{u}_{2}!\langle\mathbf{26}\rangle.\overline{u}_{2}?(r))\]
where:
\[\mathbb{B}_{\epsilon}^{4}(\overline{u}_{2}!\langle\mathbf{16}\rangle. \overline{u}_{2}!\langle\mathbf{26}\rangle.\overline{u}_{2}?(r)) = c_{4}?().\overline{u}_{2}!\langle\mathbf{16}\rangle. \overline{c_{5}}!(\rangle\mid c_{5}?().\overline{u}_{3}!\langle\mathbf{26} \rangle.\overline{c_{6}}!(\rangle\mid c_{6}?().\overline{u}_{4}?(r). \overline{c_{7}}!(\rangle\mid c_{7}?()\]
After receiving the context on \(c_{3}\) (empty in this case), the selection action on \(u_{1}\) is mimicked; then, an abstraction (an encapsulation of the selected branch) is received and applied to \((u_{2},u_{3},u_{4})\), which are locally bound. The intention is to use these names to connect the received abstraction and the continuation of a selection process: the subprocess encapsulated within the abstraction will use \((u_{2},u_{3},u_{4})\), while the dual names \((\overline{u}_{2},\overline{u}_{3},\overline{u}_{4})\) are present in the breakdown of the continuation.
For simplicity, we defined \(\mathbb{B}_{\epsilon}^{3}(R\sigma_{2})\) using a subprocess with four prefixes. Alternatively, we could have used a decomposition that relies on two trios, by introducing abstraction passing as in Section 5.
We will now examine the reductions of the decomposed process \(\mathcal{D}(P)\). First, \(c_{1}\), \(c_{2}\), and \(c_{3}\) will synchronize. We have \(\mathcal{D}(P)\longrightarrow^{4}D_{1}\), where
\[D_{1}=(\nu\,c_{4}\ldots c_{7})\,(\nu\,u_{1})\,\big{(}u_{1}! \langle V\rangle\big{)}\,\,\mathsf{neg}:\,u_{1}!\langle W\rangle\big{\}}\\ \mid(\nu\,u_{2},u_{3},u_{4})\,(\lambda(y_{1},y_{2},y_{3}). \overline{u}_{1}\triangleleft\mathsf{add}.\overline{u}_{1}?(z).\overline{c_{4 }}!(\rangle.z\,(y_{1},y_{2},y_{3}))\,(u_{2},u_{3},u_{4})\mid\\ \mathbb{B}_{\epsilon}^{4}(\overline{u}!(\mathbf{26}).\overline{u }?(r))\big{)}\]
In \(D_{1}\), \((u_{2},u_{3},u_{4})\) will be applied to the abstraction; after that, the process chooses the label \(\mathsf{add}\) on \(u_{1}\). Process \(D_{1}\) will reduce further as \(D_{1}\longrightarrow^{2}D_{2}\longrightarrow^{2}D_{3}\), where:
\[D_{2}=(\nu\,c_{4}\ldots c_{7})\,(\nu\,u_{1})\,\big{(}\ u_{1}! \langle V\rangle\mid(\nu\,u_{2},u_{3},u_{4})\,(\overline{u}_{1}?(z). \overline{c_{4}}!(\rangle.z\,(u_{2},u_{3},u_{4})\mid\mathbb{B}_{\epsilon}^{4 }(\overline{u}!(\mathbf{26}).\overline{u}?(r)))\big{)}\] \[D_{3}=(\nu\,c_{4}\ldots c_{7})\,(\nu\,u_{1},u_{2},u_{3},u_{4})\, \big{(}\overline{c_{4}}!(\rangle.V\,(u_{2},u_{3},u_{4})\mid\] \[c_{4}?().\overline{u}_{2}!(\mathbf{16}).\overline{c_{5}}!(\rangle \mid c_{5}?().\overline{u}_{3}!\langle\mathbf{26}\rangle.\overline{c_{6}}!( \rangle\mid c_{6}?().\overline{u}_{4}?(r).\overline{c_{7}}!(\rangle\mid c_{7}? ()\big{)}\]
Then \(D_{3}\) reduces as \(D_{3}\longrightarrow D_{4}\longrightarrow D_{5}\), where:
\[D_{4}=(\nu\,c_{5}\ldots c_{7})\,(\nu\,u_{2},u_{3},u_{4})\,\big{(}( \nu\,c_{1}^{V}\ldots c_{4}^{V})\,(\overline{c_{1}}!(\rangle\mid c_{1}^{V}?(). u_{2}?(a).\overline{c_{2}}!(a)\mid c_{2}^{V}?(a).u_{3}?(b).\overline{c_{3}}!(a,b) \mid\] \[c_{3}^{V}?(a,b).u_{4}!(a+b).\overline{c_{4}}!(\rangle\mid c_{4}^ {V}?())\mid\] \[\overline{u}_{2}!\langle\mathbf{16}\rangle.\overline{c_{5}}!(\rangle \mid c_{5}?().\overline{u}_{3}!(\mathbf{26}).\overline{c_{6}}!(\rangle\mid c_{6 }?().\overline{u}_{4}?(r).\overline{c_{7}}!(\rangle\mid c_{7}?()\big{)}\] \[D_{5}=(\nu\,c_{5}\ldots c_{7})\,(\nu\,u_{2},u_{3},u_{4})\,\big{(}( \nu\,c_{2}^{V}\ldots c_{4}^{V})\,(u_{2}?(a).\overline{c_{2}}!^{V}(a)\mid c_{2}^ {V}?(a).u_{3}?(b).\overline{c_{3}}!(a,b)\mid\] \[c_{3}^{V}?(a,b).u_{4}!(a+b).\overline{c_{4}}!(\rangle\mid c_{4}^ {V}?())\mid\] \[\overline{u}_{2}!\langle\mathbf{16}\rangle.\overline{c_{5}}!(\rangle \mid c_{5}?().\overline{u}_{3}!(\mathbf{26}).\overline{c_{6}}!(\rangle\mid c_{6 }?().\overline{u}_{4}?(r).\overline{c_{7}}!(\rangle\mid c_{7}?()\big{)}\]
Now, process \(D_{5}\) can mimic the original transmission of the integer \(16\) on channel \(u_{2}\) as follows:
\[D_{5}\longrightarrow(\nu\,c_{5}\ldots c_{7})\,(\nu\,u_{2},u_{3},u_{4 })\,\big{(}(\nu\,c_{2}^{V}\ldots c_{4}^{V})\,(\overline{c_{2}^{V}}!(1\!{\bf 6}) \mid c_{2}^{V}?(a).u_{3}?(b).\overline{c_{3}^{V}}!(a,b)\mid\] \[c_{3}^{V}?(a,b).u_{4}!(a+b).\overline{c_{4}^{V}}!(1\!\mid c_{4}^{ V}?(0)\mid\] \[\overline{c_{5}}!(\setminus)\mid c_{5}?().\overline{u_{3}}!(2\!{ \bf 6}).\overline{c_{6}}!(\setminus)\mid c_{6}?(0).\overline{u_{4}}?(r). \overline{c_{7}}!(\setminus)\mid c_{7}?(0)\big{)}=D_{6}\]
Finally, process \(D_{6}\) reduces to \(D_{7}\) in three steps, as follows:
\[D_{6}\longrightarrow^{3}(\nu\,c_{5}\ldots c_{7})\,(\nu\,u_{4})\,\big{(}(\nu\,c_ {4}^{V})\,(u_{4}!(1\!{\bf 6}+{\bf 26}).\overline{c_{4}^{V}}!(\setminus\mid c_{4}^{V}?(0) \mid\overline{u}_{4}?(r).\overline{c_{7}}!(\setminus)\mid c_{7}?(0)\big{)}=D_ {7}\]
Clearly, process \(D_{7}\) correctly simulates the synchronizations of the process \(P^{\prime}\). \(\triangleleft\)
### The Interplay of Selection/Branching and Recursion
Now, we discuss by example how recursive session types involving branching/selection are broken down. For simplicity, we consider recursive types without nested recursion and in which the recursive step is followed immediately by branching or selection, without any intermediate actions, i.e. types of the following form:
\[\mu\mbox{\tt\_}\&\{l_{i}:S_{i}\}_{i\in I}\qquad\mu\mbox{\tt\_}\,\oplus\,\{l_{ i}:S_{i}\}_{i\in I}\]
where none of \(S_{i}\) contain branching/selection or recursion.
In this case, the decomposition of branching recursive types should be defined differently than for tail-recursive types: a type such as \(\mu\mbox{\tt\_}\&\{l_{i}:S_{i}\}_{i\in I}\) does not necessarily describe a channel with an infinite behavior, because some of the branches \(S_{i}\) can result in termination. In such case, decomposing all actions in the type \(\&\{l_{i}:S_{i}\}_{i\in I}\) as their own recursive types using the \(\mathcal{R}(-)\) function would be incorrect.
Instead, we decompose the body of the recursive type with \(\mathcal{G}(-)\) itself:
\[\mathcal{G}(\mu\mbox{\tt\_}\&\{l_{i}:S_{i}\}_{i\in I}) =\mu\mbox{\tt\_}\&\{l_{i}:!\langle\mathcal{G}(S_{i})\!\!-\!\! \circ\rangle\}_{i\in I}\] \[\mathcal{G}(\mu\mbox{\tt\_}\,\oplus\,\{l_{i}:S_{i}\}_{i\in I}) =\mu\mbox{\tt\_}\,\oplus\,\{l_{i}:?(\mathcal{G}(\overline{S_{i}}) \!\!-\!\!\circ\rangle\}_{i\in I}\]
If some branch \(S_{i}\) contains the recursion variable \(\mathfrak{t}\), then it will appear in \(\mathcal{G}(S_{i})\), because \(\mathcal{G}(\mathfrak{t})=\mathfrak{t}\). That is, recursion variables will appear as part of the abstraction \(\mathcal{G}(\overline{S_{i}})\!\!-\!\!\circ\). That means that the decomposition of a tail-recursive type form can produce a minimal _non_-tail-recursive types.
Now, we illustrate this decomposition on the level of processes.
**Example 6.2**.: We consider a process \(P\) with a name \(r\) that is typed as follows:
\[S=\mu\mbox{\tt\_}\&\{l_{1}:?(\mbox{\tt str})!(\mbox{\tt\_})!(\mbox{\tt\_})!( \mbox{\tt\_})!,t,\ l_{2}:\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\}.\]
For simplicity, we give \(P\) in \(\mathsf{HO}\pi\) (which includes \(\mathsf{HO}\) with recursion as sub-calculus):
\[P =R\mid Q\] \[R =\mu X.r\triangleright\{l_{1}:r?(t).r!\langle\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_} \,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt \_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt \_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\, \mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{ \tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox{\tt\_}\,\mbox
The decomposition of \(S\), denoted \(M^{*}\), is the following minimal session type:
\[M^{*}=\mathcal{G}(S)=\mu\text{\rm t.\&}\{l_{1}:!((?(\text{\rm str}),\!(\text{\rm int }),\ \text{\rm t})\!-\!\circ\!\circ\!),\ l_{2}:\text{\rm end}\}\]
As in the previous example (Example 6.1), the continuation of a selected branch will be packed in an abstraction and sent over. This abstraction binds names on which the session actions should be performed. In addition, if a branch contains a recursive call, then the last argument of the abstraction will be a name on which the next instance of the recursion will be mimicked. We illustrate this mechanism by giving the decomposition of \([P]\) and inspecting its reductions.
\[\mathcal{D}([P]) =(\nu\,c_{1},\ldots,c_{12})\,\overline{c_{1}}!(\langle\ \rangle\ |\ c_{1}?(). \overline{c_{2}}!(\rangle.\overline{c_{5}}!(\rangle\mid\ |\ S_{\epsilon}^{2}([R])\mid\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
\(D_{2}\) can mimic a silent select action on \(r_{1}\); this is followed by a reception of value \(W\{\mathcal{V}_{\epsilon}(V)/y\}\) on name \(\overline{r}_{1}\), which is then applied to names \((r_{2},r_{3},r_{4})\). The resulting process is as follows:
\[D_{2}\longrightarrow^{*} (\nu\,c_{8},\ldots,c_{12})\,(\nu\,r_{2}:?(\text{str}),\ r_{3}:!( \text{int}),\ r_{4}:M^{*})\] \[(\nu\,c_{1}^{W}\ldots c_{5}^{W})\,\overline{c_{1}^{W}}!(\!)\mid c _{1}^{W}?(.r_{2}?(t).\overline{c_{2}^{W}}!(\!)\mid c_{2}^{W}?(t).r_{3}!(\text{ len}(t)).\overline{c_{3}^{W}}!(\!)\] \[\mid(\nu\,s_{1})\,(c_{3}^{W}?().\overline{c_{4}^{W}}!(\!)\!)(. \overline{c_{5}^{W}}!(\!)\mid c_{4}^{W}?().\overline{c_{5}}^{W}!(\!)\mid c_{4 }^{W}?().\overline{c_{1}}!(\!)\mid c_{5}^{W}?().s_{1}!(\!\backslash\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
abstraction-input action on the side of selection, if present. However, this abstraction-sending action does not correspond to any action of the source process.
Therefore, to show the operational correspondence between the source term and its decomposition, we need to restrict our attention to processes in which branching and selection types are both present in (matching) pairs. Specifically, we assume the following conditions on the source process \(P\):
* \(P\) is a well-typed, that is \(\Gamma;\Delta;\Lambda\vdash P\triangleright\diamond\) with \(\mathsf{balanced}(\Delta)\);
* for any name \(u\), \(u\in\mathtt{fn}(P)\) with \(u:S\) such that \(S\) involves selection or branching constructs if and only if \(\overline{u}\in\mathtt{fn}(P)\).
Intuitively, these two conditions ensure that every branching action in \(P\) has its complement (and vice-versa). Note that for closed typeable processes both the balancedness condition and the second condition on names are vacuously true.
With this condition in place, we need to enlarge the relation \(\,\mathcal{S}\,\) in order to account for silent actions that are introduced by the breakdown of selection and branching constructs. That is, when matching the original silent action involving selection/branching, the corresponding broken down process need to perform several silent actions, in order to be able to mimic the process continuation.
## 7 Related Work
We draw inspiration from insights developed by Parrow [21], who showed that every process in the untyped, summation-free \(\pi\)-calculus with replication is weakly bisimilar to its decomposition into trios (i.e., \(P\approx\mathcal{D}(P)\)). As already mentioned, we are concerned with a different technical setting: our decomposition treats processes from a calculus without name-passing but with higher-order concurrency (abstraction-passing), supports recursive types, and can accommodate labeled choices. Our goals are different than those of Parrow [21]: for us, trios processes are a relevant instrument for defining and justifying minimal session types, but they are not an end in themselves. Still, we retain the definitional style and terminology for trios from [21], which are elegant and clear.
Our main results connect the typability and the behaviour of a process with its decomposition, as witnessed by the static and dynamic correctness theorems. Static correctness was not considered by Parrow, as he worked in an untyped setting. As for dynamic correctness, a similar result was established in [21], linking the process and its decomposition through weak bisimilarity. In our setting we had to use a different, typed notion of bisimilarity. An obstacle here is that known notions of typed bisimilarity for session-typed processes, such as those given by Kouzapas et al. [18], only relate processes typed under the _same_ typing environments. To that extent, our notion of equivalence (MST bisimulations) is more flexible than prior related notions as it (i) relates processes typable under different environments (e.g., \(\Delta\) and \(\mathcal{G}(\Delta)\)) and (ii) admits that actions along a name \(s\) from \(P\) can be matched by \(\mathcal{D}(P)\) using actions along indexed names \(s_{k}\), for some \(k\) (and viceversa).
As mentioned in the introduction, our approach is broadly related to works that relate session types with other type systems for the \(\pi\)-calculus (cf. [16, 5, 6, 7, 9]). Hence, these works target the _relative expressiveness_ of session-typed process languages, by encoding processes between two different systems. By contrast, we relate a session types system with its subsystem of minimal types. Thus, by explaining session types in terms of themselves, our work emerges as the first study of _absolute expressiveness_ in the context of session types.
In this context, works by Kobayashi [16] and Dardha et al. [5, 6] are worth discussing. Kobayashi [16] encoded a finite session \(\pi\)-calculus into a \(\pi\)-calculus with linear types with usages (without sequencing); this encoding uses a continuation-passing style to codify a session name using multiple linear channels. Dardha et al. [5, 6] formalize and extend Kobayashi's approach. They use two separate encodings, one for processes and one for types. The encoding of processes uses a freshly generated linear name to mimic each session action; this fresh name becomes an additional argument in communications. The encoding of types codifies sequencing in session types by nesting payload types. In contrast, we "slice" the \(n\) actions occurring in a session \(s:S\) along indexed names
with minimal session types--\(n\) slices of \(S\). Hence, Dardha et al.'s could be described as codifying sequencing in a "dynamic style", via the freshly generated names, whereas we follow a "static style" using names that are indexed according to the corresponding session type.
Recently, Jacobs [15] developed a small programming calculus with a single fork-like construct and a linear type system, which can be used to encode session-typed communications. His system can be seen as a distillation of Wadler's GV [23] which is, in essence, a \(\lambda\)-calculus with session-based concurrency; in contrast, HO can be seen as a \(\pi\)-calculus in which abstractions can be exchanged. While similar in spirit, our work and the developments by Jacobs are technically distant; we observe that the operational correspondences developed in [15] are strictly simpler than our dynamic correspondence result (Theorem 4.1) although they are mechanized in the Coq proof assistant.
Finally, we elaborate further on our choice of HO as source language for minimal session types. HO is one of the sub-calculi of HO\(\pi\), a higher-order process calculus with recursion and both name- and abstraction-passing. The basic theory of HO\(\pi\) was studied by Kouzapas et al. [17, 18] as a hierarchy of session-typed calculi based on relative expressiveness. Our results enable us to place HO with minimal session types firmly within this hierarchy. Still, the definition of minimal session types does not rely on having HO as source language, as they can be defined on top of other process languages. In fact, in separate work we have defined minimal session types on top of the first-order sub-calculus of HO\(\pi\)[2]. This development attests that minimal session types admit meaningful formulations independently from the kind of communicated objects (abstractions or names).
## 8 Concluding Remarks
We have presented a minimal formulation of session types, one of the most studied classes of behavioral types for message-passing programs. This minimal formulation forgoes sequencing on the level of types. We formally connect standard and minimal session types (MSTs), through a _decomposition_ of session-typed processes, adopting the higher-order process calculus HO as target language. Following Parrow [21], we defined the decomposition of a process \(P\), denoted \(\mathcal{D}(P)\), as a collection of _trio processes_ (processes with at most three actions) that trigger each other mimicking the sequencing in the original process. We proved that typability of \(P\) using standard session types implies the typability of \(\mathcal{D}(P)\) with minimal session types; we also established that \(P\) and \(\mathcal{D}(P)\) are behaviourally equivalent through an _MST bisimulation_. Our results hold for all session types constructs, including labeled choices and recursive types.
From a foundational standpoint, our study of minimal session types is a conceptual contribution to the theory of behavioral types, in that we clarify the status of sequencing in theories of session types. As remarked in Section 1, there are many session types variants, and their expressivity often comes at the price of an involved underlying theory. Our work contributes in the opposite direction, as we identified a simple yet expressive fragment of an established session-typed framework [17, 18], which allows us to justify session types in terms of themselves. Understanding further the underlying theory of minimal session types (e.g., notions such as type-based compatibility) is an exciting direction for future work.
As mentioned above, one insight derived from our results is that sequentiality in session types is convenient but not indispensable. Convenience is an important factor in the design of type systems for message-passing programs, because types are abstract specifications of communication structures. By identifying sequencing as a source of redundancy, our minimal formulation of session types does not contradict or invalidate the prior work on standard session types and their extensions; rather, it contributes to our understanding of the sources of convenience of those advanced type systems.
In formulating minimal session types we have committed to a specific notion of minimality, tied to sequencing constructs in types--arguably the most distinctive feature in session types. There could be other notions of minimality, unrelated to sequencing but worth exploring nevertheless. Consider, for instance, the framework of _context-free_ session types [22], which extend standard session types by allowing sequencing of the form \(S;T\). This form of sequential composition is quite powerful, and yet it could be seen as achieving a form of minimality different from the one we studied here:
as illustrated in [22, Section 5], context-free session types allow to describe the communication of tree-structured data while minimizing the need for channel creation and avoiding channel passing.
Our work can be seen as a new twist on Parrow's decomposition results in the _untyped_ setting [21]. While Parrow's work indeed does not consider types, in fairness we must observe that when Parrow's work appeared (1996) the study of types (and typed behavioral equivalences) for the \(\pi\)-calculus was rather incipient (for instance, the widely known formulation of binary session types, given in [12], appeared in 1998). That said, we would like to stress that our results are not merely an extension of Parrow's work with session types, for types in our setting drastically narrow down the range of conceivable decompositions. Additionally, in this work we exploit features not supported in [21], most notably higher-order concurrency (cf. Section 5).
Finally, from a practical standpoint, we believe that our approach paves a new avenue to the integration of session types in programming languages whose type systems lack sequencing, such as Go. It is natural to envision program analysis tools which, given a message-passing program that should conform to protocols specified as session types, exploit our decomposition as an intermediate step in the verification of communication correctness. Remarkably, our decomposition lends itself naturally to an implementation--in fact, we generated our examples automatically using MISTY, an associated artifact written in Haskell [4].
AcknowledgmentsWe are grateful to Erik Voogd, who as a BSc student was one of the authors in the conference version of this paper [3].
|
2308.09030 | SQL Access Patterns for Optimistic Concurrency Control | Transaction processing is of growing importance for mobile and web
applications. Booking tickets, flight reservation, e-Banking, e-Payment, and
booking holiday arrangements are just a few examples. Due to temporarily
disconnected situations the synchronization and consistent transaction
processing are key issues. To avoid difficulties with blocked transactions or
communication loss several authors and technology providers have recommended to
use Optimistic Concurrency Control (OCC) to solve the problem. However most
vendors of Relational Database Management Systems (DBMS) implemented only
locking schemes for concurrency control which prohibit the immediate use of
OCC. We propose Row Version Verifying (RVV) discipline to avoid lost updates
and achieve a kind of OCC for those DBMS not providing an adequate non-blocking
concurrency control. Moreover, the different mechanisms are categorized as
access pattern in order to provide programmers with a general guideline for SQL
databases. The proposed SQL access patterns are relevant for all transactional
applications with unreliable communication and low conflicting situations. We
demonstrate the proposed solution using mainstream database systems like
Oracle, DB2, and SQLServer. | Fritz Laux, Martti Laiho | 2023-08-17T15:04:51Z | http://arxiv.org/abs/2308.09030v1 | # SQL Access Patterns for Optimistic Concurrency Control
###### Abstract
Transaction processing is of growing importance for mobile and web applications. Booking tickets, flight reservation, e-Banking, e-Payment, and booking holiday arrangements are just a few examples. Due to temporarily disconnected situations the synchronisation and consistent transaction processing are key issues. To avoid difficulties with blocked transactions or communication loss several authors and technology providers have recommended to use Optimistic Concurrency Control (OCC) to solve the problem. However most vendors of Relational Database Management Systems (DBMS) implemented only locking schemes for concurrency control which prohibit the immediate use of OCC. We propose Row Version Verifying (RVV) discipline to avoid lost updates and achieve a kind of OCC for those DBMS not providing an adequate non-blocking concurrency control. Moreover, the different mechanisms are categorized as access pattern in order to provide programmers with a general guideline for SQL databases. The proposed SQL access patterns are relevant for all transactional applications with unreliable communication and low conflicting situations. We demonstrate the proposed solution using mainstream database systems like Oracle, DB2, and SQLServer.
## I Introduction
Mobile applications enable users to execute business transactions while being on the move. It is essential that temporary disconnected situations do not compromise transaction properties or block database resources on the server. To prevent blocked resources researchers have intensively studied OCC [1, 2], but no commercial database product has implemented this mechanism, yet. With the popularity of multitier software architectures technology vendors like those for J2EE platforms, object relational mappers or Service Oriented Architecture (SOA) have proposed to use OCC to solve the problem.
But shifting the burden to the middleware is a tricky task. The designer and implementer of a transactional application have to leave the DBMS unaware of the user transaction to avoid the automatic locking of data for an unpredictable time. On the other hand, concurrent transactions of different applications may interfere without the possibility for any help by the DBMS. Therefore the applications and the DBMS need to co-operate somehow to ensure that at least the lost update problem will be avoided.
A typical fault in multi-user file-based systems without proper concurrency control is the lost update problem i.e. a record \(x\) updated by some process A will be overwritten by some other concurrent process B like in the following problematic canonical schedule [4, pp. 62-63]: \(r_{A}(x),r_{B}(x),w_{A}(x),w_{B}(x)\), where \(r_{T}(x)\) and \(w_{T}(x)\) denote read and write operations of transaction T on data item \(x\).
A properly used DBMS would not allow such a situation to happen because it would lock \(x\) for transaction A and prevent B from accessing \(x\) before A commits or aborts. But, if the database does not receive a termination request, e.g. because of a communication failure, the record \(x\) remains blocked.
We do not want to risk blocked data, therefore a kind of OCC should be applied. Even if the DBMS does not support OCC directly we will show that it could help the application to detect concurrency conflicts. For relational databases we will show how this can be achieved using a row version column for every table and specific access patterns.
### _Structure of the Paper_
After a motivation for our approach and the related work we present in Section II the lost update problem by example. Section III describes three SQL patterns that solve the problem and in Section IV we provide an implementation for a server side row version column to support OCC for mainstream SQL databases. In Section V we conclude our findings.
### _Motivation_
Kung and Robinson [1] distinguishes three phases of a transaction when OCC is used:
* read phase
* validation phase
* write phase
The first phase includes user input and thinking time. It may last for an unpredictable time span. The following phases are without any user interaction. Validation and write phases are therefore very short in the range of milliseconds. The last two phases are critical in the sense that exclusive access is required. Failing to do so could result in inconsistent data, e.g lost update. A Relational Database Management System
(RDBMS) could help to support each phase by choosing the proper transaction isolation level. The read phase should read only valid data (READ COMMITED) and transaction mode can be set to READ ONLY. Switching to a strong enough isolation level (REPEATABLE READ, SNAPSHOT, or SERIALIZABLE) during validation and write phases will yield the corresponding transaction properties against competing transactions (see Fig. 1).
The transaction isolation level may only be altered before or as the first statement of a transaction. This implies that our user transaction has to be split up into two database/SQL transactions. Each SQL transaction should be set to the isolation levels as recommended before. During the validation phase the application has to re-read the data and check for any changes by concurrent transactions. If any changes are detected, then the transaction has to abort else it may proceed with the write phase.
Applying OCC to the previous example the result would be the following history: \(r_{A}(x),r_{B}(x),val_{A},w_{A}(x),val_{B},a_{B}\), where \(val_{A}\) denotes the validation of transaction \(A\) and \(a_{B}\) denotes the abort command for transaction \(B\).
Or, if transaction A decides to abort then B will be successful with \(r_{A}(x),r_{B}(x),a_{A},val_{B},w_{B}(x)\). In either case only one of the competing transactions can be successful.
This example also shows that the non blocking concurrency control comes for the prize of transaction aborts.
Instead of optimistic concurrency control theories presented in database textbooks, we are interested in ways how to implement such a mechanism using mainstream DBMS systems and what application developers need to understand about reliable access of databases.
We therefore extend the read-write model as used by Herbrand's semantic (see [4, 14, 15]) to fit with the OCC mechanism. If the validation is not passed successfully then the write phase will be skipped, leading to an aborted transaction. Instead of having validation \(val_{T}(x)\) and write \(w_{T}(x)\) we introduce a conditional write operation \(w(x,k)\). This write operation on the data item \(x\) is only executed if the condition \(k\) evaluates to true. Checking \(k\) may require to read the actual database state. Reading, checking \(k\), and writing \(x\) do not allow any parallel operations as explained above.
In this paper we present access patterns that implement this \(w(x,k)\) operation. A simple example of implementation would use solely the SQL update command:
UPDATE \(table\) SET \(X\) = \(val\) WHERE \(k\) AND \(id(x)\)
where \(id(x)\) evaluates to true only for the row of data item \(x\).
### _Related Work_
Concurrency control is a cornerstone of transaction processing, it has been extensively studied for decades. Namely Gray and Reuter [3] studied locking schemes, whereas Kung and Robinson [1] developed optimistic methods for concurrency control. Unland [2] presents OCC algorithms without critical section. Using these algorithms would allow relaxed isolation levels but involve checking the read set against all concurrent transactions. Because the application is not aware of concurrent transactions its use can be ruled out in our case.
A higher concurrency for query intensive transactions provide Multiversion Concurrency Control (MVCC) as described by Stearns and Rosenkrantz [18] and Bernstein and Goldman [19]. If we check the MVCC method for its usability for web or mobile transaction processing it is even worse than locking in terms of resource consumption. While locking needs to record only the id of the item locked, the MVCC needs to store a version each time an item is updated that was read by an active transaction prior to the update. In case of disconnected situations this may lead to large number of versions for a single data item.
With the dissemination of middleware OCC has been recommended by IT-vendors ([20, 21, 22]) for transactional e-business and m-commerce applications but little concern have been spend on how this can be achieved using commercial SQL databases [7]. He8[20] simply uses Hibernate's optimistic-lock="version" option but does not mention the risk of legacy applications not under control of Hibernate which could still lead to lost updates. Nock [6] uses a timestamp column with Java timestamp resolution ignoring the fact that contemporary database products can produce more than a hundred times the same timestamp [7]. Akbar-Husain [21] believes that demarking the method that checks the version with the required transaction attribute will be sufficient to avoid lost updates. He fails to tell that only a strong enough isolation level will achieve the desired results.
## II Lost Update Problem in the Application Context
Let us consider first the following problematic scenario of SQL transactions of two concurrent processes A and B updating the balance of the same account in Table I.
The withdrawal of 200 made by the transaction of B will be overwritten by A, in other words the update made by B in step
5 will be lost in step 7 when the transaction of A overwrites the updated value by value 900 which is based on stale data i.e. outdated value of the balance from step 3. If the transactions of A and B serialized properly, the correct balance value after these transactions would be 700, but there is nothing that the DBMS could do to protect the update of step 5 since the guilty party to this lost update problem is the programmer of process A, who has ordered a wrong isolation level from the DBMS. READ COMMITTED, which for performance reasons is the default transaction isolation level used by most RDBMS systems, does not protect any data read by transaction of getting outdated right after reading the value. Locking Scheme Concurrency Control (LSCC) prevents conflicting access to data. Conflicts are defined in terms of isolation levels. The proper isolation level on LSCC systems to prevent a lost update should be REPEATABLE READ or SERIALIZABLE, which would protect the values read in the transaction from getting outdated during the transaction by holding shared locks on these rows up to the end of the transaction. The isolation service of the DBMS does guarantee that the transaction will either get the ordered isolation or, in case of serialization conflict, the transaction will be rejected by the DBMS. The means used for this service and the transactional outcome for the very same application code can be different when using different DBMS systems, and even in using different table structures. A LSCC may as well delay to grant a lock request until the possible conflict disappears. Usually a transaction rejected due to a serialization conflict should be retried by the application, but we will discuss this later.
The erroneous scenario above would also be the same if process A commits its transaction of steps 1 and 3 (let us call it transaction A1) in step 4, and continues (for example after some user interaction) with another transaction A2 of phases 7-8. In this case, no isolation level can help, but transaction A2 will make a blind write (based on stale data, insensitive of the current value) over the balance value updated by transaction B.
## III SQL Access Patterns for Avoiding Lost Updates
The blind write of the update transaction A2 of phases 7-8 (resulting in the lost update of transaction B) could have been avoided by any of the following practice. The access patterns apply to the validation and write phase (process A2) as shown in Figure 1. We present the patterns in the canonical form given by Coplien [23] that is shorter and more essential than the one used by Gamma et al [5]:
### _Access Pattern: Sensitive UPDATE_
_Problem_: How to prevent a lost update in case of concurrent updates without using explicit locks.
_Context_: Concurrent transaction processing in distributed systems has to deal with temporary disconnected situations and nevertheless ensure correct results.
_Forces_:
* Using locks to prevent other transactions from changing the value can block data items for unpredictable time in case of communication failure or in case of long user thinking time.
* Multiversion concurrency control (MVCC) or OCC do not block data access, but lead to abort conflicting transactions except for the first one that updates the data.
* OCC is not supported by commercial SQL databases, hence we cannot directly use DBMS support.
_Solution_: There is no risk of lost update if A2 in step 7 uses the form of the update which is sensitive to the current value, like B uses in step 5 as follows:
UPDATE Accounts
SET balance = balance - 100
WHERE acctId = :id;
_Consequences_: It should be noted that the update of the balance is based on a value that is not seen by the application and therefore the user will not be aware of the changed balance. So, this access pattern does not provide repeatable read isolation. If the user needs to know about the changed situation the access pattern "Re-SELECT... UPDATE" could be used (see below).
### _Access Pattern: Conditional UPDATE_
_Problem_: How to prevent a lost update and provide repeatable-read for a user transaction in case of concurrent updates without using locking.
_Context_: The "Sensitive UPDATE" pattern in concurrent read situations may result in non-repeatable phenomenon.
_Forces_: Same as for "Sensitive UPDATE" plus:
* The data value read and displayed to the user may not be the same on which the update is based (non-repeatable read phenomenon).
_Solution_: After transaction A1 first has read the original row version data in step 3, transaction A2 verifies in step 7, using an additional comparison expression in the WHERE clause of the UPDATE command, that the current row version in the database is still the same as it was when the process previously accessed the account row, for example,
UPDATE Accounts
SET balance = :newBalance
Fig. 1: Context of the OCC Access Patterns
WHERE acctld = :id AND
(rowVersion = :old_rowVersion);
The comparison expression can be a single comparison predicate like in the example above where rowVersion is a column (or a pseudo-column provided by the DBMS) reflecting any changes made in the contents of the row and :old_rowVersion is a host variable containing the value of the column when the process previously read the contents of the row. In the case that more than one column is involved in the comparison, the expression can be built of version comparisons of all columns used and based on the 3-value logic of SQL.
_Consequences_: Since this access pattern does not explicitly read data, there is no need to set isolation level. The result of the concurrency control services is the same for locking scheme concurrency control (LSCC) and multiversion concurrency control (MVCC) based DBMS. The result of the update depends on the row version verifying predicate, and the application code needs to evaluate the return code to find out the number of updated rows to verify the result.
### _Access Pattern: Re-SELECT.. UPDATE_
_Problem_: How to provide repeatable-read for a user transaction in case of concurrent updates without using locks. Signal the user if conflicting transactions have changed the read set.
_Context_: "Conditional UPDATE" pattern does not allow to inform the user of the changed read set before aborting the transaction.
_Forces_: Same as for "conditional UPDATE" plus:
* In the time span between the re-SELECT and the UPDATE statement the data read may be updated again by concurrent transactions. In the worst case, this can lead to an infinite loop.
* Executing the pattern in repeatable read isolation may force the transaction to abort if no locking is used.
_Solution_: This is a variant of the "conditional UPDATE" pattern in which transaction A2 first reads the current row version data from the database into some host variable current_rowVersion which allows the application to inform the user of the changed situation:
SELECT rowVersion
INTO :current_rowVersion
FROM Accounts
WHERE acctld = :id;
//... inform the user if desired
and then apply the conditional update:
if (current_rowVersion = old_rowVersion) then
UPDATE Accounts
SET balance = :newBalance
WHERE acctld = :id ;
To avoid repeatedly re-SELECT, it is necessary to make sure that no other transaction can change the row between the SELECT and the UPDATE. For this purpose, we need to apply a strong enough isolation level (REPEATABLE READ, SNAPSHOT, or SERIALIZABLE) or explicit row-level locking, such as Oracle's FOR UPDATE clause in the SELECT command.
_Consequences_: Since isolation level implementations of LSCC and MVCC based DBMS are different, the result of concurrency services can be different: In LSCC based systems the first writer of the row or reader using REPEATABLE READ or SERIALIZABLE isolation level will usually win, whereas in MVCC based systems the first writer wins the concurrency competition.
## IV RVV Discipline and Server Side Stamping
The last access pattern doesn't require any locking before transaction step 7 (start of A2). This update method is generally known as "Optimistic Locking" [6], but we prefer to call it Row Version Verification (RVV) Discipline. There are multiple options for row version verification, including comparison of original contents of all or some relevant subset of columns of the row, a checksum of these, a technical SQL column, or some technical pseudo-column maintained by the DBMS.
A general solution for row version management is to include a technical row version column \(rv\) and to use a row-level trigger to increase the value of column \(rv\) on any row automatically every time the row is updated. We call the use of trigger or use of technical pseudo-column as "server-side stamping" which no application can bypass, as opposite to client-side stamping using the SET clause within the UPDATE command - a discipline that all applications should follow in that case. Row-level triggers are affordable, but have performance cost of some percents in execution time on Oracle and DB2, whereas SQL Server does not even support row-level triggers.
Timestamps are typically mentioned in database literature as a means of differentiating any updates of a row. However, our tests [7] prove that, for example, on a 32bit Windows workstation using a single processor Oracle 11g can generate up to 115 updates having the very same timestamp. Almost the same problem applies to DATETIME of SQL Server 2005 and TIMESTAMP of DB2 LUW 9, with exception of the new ROW CHANGE TIMESTAMP option in DB2 9.5 which generates unique timestamp values for every update of the same row having technical TIMESTAMP column.
The native TIMESTAMP data type of SQL Server is not a timestamp but a technical column which can be used to monitor the order of all row updates inside a database. We prefer to use its synonym name ROWVERSION. This provides the most effective server-side stamping method in SQL Server; although, as a side-effect it generates an extra U-lock which will result in a deadlock in the example of Figure I.
In version 10 and later versions, Oracle provides a new pseudo-column ORA_ROWSCN for rows in every table created with the ROWDEPENDENCIES option [8]. This will show the transaction's System Change Number (SCN) of the
last committed transaction which has updated the row. This provides the most effective server-side stamping method for RVV in Oracle databases, although as a harmful side-effect, the row-locking turns its value to NULL.
In our "RVV Paper" [7], we have presented an SQL view as solutions for mapping these technical row version column contents into BIGINT data type for Row Version Verification (RVV) at the client-side.
## V Conclusion
The concurrency control by DBMS treats SQL transactions without their application context in line with Herbrand semantics, and this is the typical scope of database textbooks in teaching transaction programming. We see the need to expand this scope to the application level, to typical user transactions which are the context for SQL transactions. Even if the widely accepted Design Patterns of GoF [5] do not even mention database transactions, we can identify and build practical Data Access Patterns to be used for teaching Data Access Technologies.
Modern application architectures have introduced new practices and needs which have outdated some practices of earlier SQL programming like locking and holdable cursors. Commercial database management systems do not yet support OCC which is needed for mobile and web-applications. So, for example, we had to develop access patterns for optimistic locking services on the user level. We presented three of these patterns and showed how far current DBMS can support it.
|
2310.13222 | Equivariant Transformer is all you need | Machine learning, deep learning, has been accelerating computational physics,
which has been used to simulate systems on a lattice. Equivariance is essential
to simulate a physical system because it imposes a strong induction bias for
the probability distribution described by a machine learning model. This
reduces the risk of erroneous extrapolation that deviates from data symmetries
and physical laws. However, imposing symmetry on the model sometimes occur a
poor acceptance rate in self-learning Monte-Carlo (SLMC). On the other hand,
Attention used in Transformers like GPT realizes a large model capacity. We
introduce symmetry equivariant attention to SLMC. To evaluate our architecture,
we apply it to our proposed new architecture on a spin-fermion model on a
two-dimensional lattice. We find that it overcomes poor acceptance rates for
linear models and observe the scaling law of the acceptance rate as in the
large language models with Transformers. | Akio Tomiya, Yuki Nagai | 2023-10-20T01:57:03Z | http://arxiv.org/abs/2310.13222v1 | # Equivariant Transformer is all you need
###### Abstract:
Machine learning, deep learning, has been accelerating computational physics, which has been used to simulate systems on a lattice. Equivariance is essential to simulate a physical system because it imposes a strong induction bias for the probability distribution described by a machine learning model. This reduces the risk of erroneous extrapolation that deviates from data symmetries and physical laws. However, imposing symmetry on the model sometimes occur a poor acceptance rate in self-learning Monte-Carlo (SLMC). On the other hand, Attention used in Transformers like GPT realizes a large model capacity. We introduce symmetry equivariant attention to SLMC. To evaluate our architecture, we apply it to our proposed new architecture on a spin-fermion model on a two-dimensional lattice. We find that it overcomes poor acceptance rates for linear models and observe the scaling law of the acceptance rate as in the large language models with Transformers.
Introduction
Lattice QCD is essential for calculating quantum field expectations but struggles with the critical slowing down issue, reducing computational efficiency. Machine learning methods can efficiently handle this problem and work well with structured data like gauge configurations [1, 2].
Transformer is a neural network originally for dealing with natural languages, but it has been applied to various data [3, 4, 5, 6]. The most crucial feature of Transformer is it can deal with global correlations, such as modifiers in natural language, which sometimes act from distant places.
In computational physics with machine learning, in particular equivariant neural networks enhance numerical calculations by capturing input data symmetries. This improves generalization on unfamiliar data and may boost computational efficiency. Data symmetry ensures alignment with physical laws like momentum conservation. Convolutional layers, for instance, exhibit equivariant properties to spatial translation. This reduces the risk of erroneous extrapolation that deviates from data symmetries and physical laws [7].
In this work, we develop an equivariant Transformer for physical systems. In quantum field theory, local action/Hamiltonians are typically considered. However, when fermions, described by Grassmann numbers, are integrated ahead of numerical simulations, the resulting effective action/Hamiltonian becomes non-local regarding bosons. We develop a neural network architecture which is capable with global correlations from fermions and symmetric under sytem's symmetries.
In this proof-of-principle study, we employ the _double exchange model_ (DE) in two spacial dimensions. DE model is well established model in condensed matter physics and contains fermions and spatially fixed classical Heisenberg-spins. The model Hamiltonian is invariant under global O(3) spin rotation. This model is similar to Yukawa system with fermions and three component scalars on the lattice in particle physics. To see more detail of this study, refer [8].
## 2 Concepts in Machine learning
### Self-learning Monte-Carlo
We review concepts in machine learning to introduce our numerical calculation. The Self-learning Monte-Carlo (SLMC) is an exact Markov chain Monte-Carlo (MCMC) algorithm with an effective model [9]. In MCMC for a spin system, a spin configuration \(\mathbf{S}\) is distributed with a probability distribution \(W(\mathbf{S})\). Samples from desired distribution are obtained after many steps. The detailed balance condition is a sufficient condition for convergence of MCMC, which is \(W(\{\mathbf{S}\})T(\{\mathbf{S}^{\prime}\}|\{\mathbf{S}\})=W(\{\mathbf{S}^{\prime}\})T(\{\mathbf{S }\}|\{\mathbf{S}^{\prime}\})\), where \(T(\{\mathbf{S}^{\prime}\}|\{\mathbf{S}\})\) is the transition probability from a configuration \(\{\mathbf{S}\}\) to another configuration \(\{\mathbf{S}^{\prime}\}\). If a probabilistic process described \(T(\{\mathbf{S}^{\prime}\}|\{\mathbf{S}\})\) with this condition, the obtained configurations distributed according to \(W(\{\mathbf{S}\})\).
In general Metropolis-Hastings (MH) algorithm, the transition probability is factorised in two sub-steps, \(T(\{\mathbf{S}^{\prime}\}|\{\mathbf{S}\})=g(\{\mathbf{S}^{\prime}\}|\{\mathbf{S}\})A(\{\mathbf{S} ^{\prime}\},\{\mathbf{S}\})\), where the proposal distribution \(g(\{\mathbf{S}\}^{\prime}|\{\mathbf{S}\})\) is the conditional probability of proposing a configuration \(\{\mathbf{S}^{\prime}\}\) when a configuration \(\{\mathbf{S}\}\) is given, and the acceptance ratio \(A(\{\mathbf{S}^{\prime}\},\{\mathbf{S}\})\) is the probability to accept the proposed configuration \(A(\{\mathbf{S}^{\prime}\},\{\mathbf{S}^{\prime}\})\). The Markov chain that has the desired distribution \(W(\{\mathbf{S}\})\) is obtained when the
acceptance ratio is given as
\[A(\{\mathbf{S^{\prime}}\},\{\mathbf{S}\})=\min\left(1,\frac{W(\{\mathbf{S^{\prime}}\})\ g(\{\mathbf{S}\}|\{\mathbf{S^{ \prime}}\})}{W(\{\mathbf{S}\})\ g(\{\mathbf{S^{\prime}}\}|\{\mathbf{S}\})}\right). \tag{1}\]
This is the general argument about the MH algorithm. The Metropolis test is a reduced version, \(A(\{\mathbf{S^{\prime}}\},\{\mathbf{S}\})=\min\left(1,W(\{\mathbf{S^{\prime}}\})/W(\{\mathbf{S} \})\right)\) and this is obtained assuming the reversibility for \(g(\{\mathbf{S^{\prime}}\}|\{\mathbf{S}\})=g(\{\mathbf{S}\}|\{\mathbf{S^{\prime}}\})\).
#### 2.1.1 Inner Markov chain in SLMC
SLMC is a nested MCMC. The simplest update is the local update, where a single site in the configuration is randomly selected and its spin orientation is changed. We perform the Metropolis accept/reject procedure following to the single-site-update with \(W_{\text{eff}}(\{\mathbf{S}\})\), which is the Boltzmann weight with an effective model. Since the Metropolis test with the single cite update satisfies the detailed balance condition, it will converge into \(W_{\text{eff}}(\{\mathbf{S}\})\). In SLMC, the effective model \(W_{\text{eff}}(\{\mathbf{S}\})\) contains trainable parameters.
#### 2.1.2 Outer Markov chain in SLMC
In SLMC, the inner Markov chain using an effective model proposes processes for the outer chain. A correction step ensures the target system's distribution. In the general MH algorithm, the inner Markov updates represent the \(g(\cdot|\cdot^{\prime})\) process in the outer chain, with the acceptance ratio designed to offset \(W_{\text{eff}}\) and distribute as \(W(\cdot)\). The acceptance ratio in the SLMC is given as
\[A(\{\mathbf{S^{\prime}}\},\{\mathbf{S}\})=\min\left(1,\frac{W(\{\mathbf{S^{\prime}}\})}{ W(\{\mathbf{S}\})}\frac{W_{\text{eff}}(\{\mathbf{S}\})}{W_{\text{eff}}(\{\mathbf{S^{ \prime}}\})}\right), \tag{2}\]
where \(W(\{\mathbf{S}\})\) is the probability weight for the target system. We remark that, the second factor in the second column in \(\min(1,\cdot)\) is up-side-down of the weights for the inner chain. In summary, SLMC process can be express as Fig. 1.
### Equivariance
The concept of equivariance plays a pivotal role in machine learning and physics, offering solutions to issues like model overfitting and the preservation of physical laws. Its importance stems from its capacity to embed symmetries directly into neural networks, ensuring that the learned model respects certain symmetries of the data. In neural networks, equivariance is usually achieved through weight sharing, reducing the number of irrelevant parameters. This is critical as an excess of parameters often leads to model overfitting, undermining the model's ability to generalize to new data.
In physics, ensuring numerical calculations align with physical laws can be achieved via equivariant neural networks. While penalties in the loss function were previously used to impose
Figure 1: Schematic figure of SLMC. Blue thin arrows the Metropolis chain in the inner chain with \(W_{\text{eff}}\). Red bold arrows indicate the MH test in the outer chain with \(W_{\text{eff}}\) and \(W\) as Eq. (2).
physical laws [10], this method is not always reliable. A more dependable approach embeds these symmetries directly into the neural network architecture [7].
## 3 Lattice setup
In this study, we employ semi-classical _double exchange_ (DE) model in two dimension for a testbed [11, 12, 13]. It is a semi-classical system and has electrons and classical Heisenberg spins,
\[H=-t\sum_{\alpha,\,\langle i,j\rangle}(\hat{c}^{\dagger}_{i\alpha}\hat{c}_{j \,\alpha}+\text{h.c.})+\frac{J}{2}\sum_{i}\mathbf{S}_{i}\cdot\hat{\mathbf{\sigma}}_{i}, \tag{3}\]
where \(\mathbf{S}_{i}\) is the classical Heisenberg spins on the \(i\)-th site, namely it is normalized O(3) scalar field. \(\hat{c}^{\dagger}_{i\alpha}\) is the fermionic creation operator at the \(i\)-th site for fermion with spin \(\alpha\in\{\uparrow,\downarrow\}\). A symbol \(\langle i,j\rangle\) indicate the pairs of nearest neighbors. The interaction term with Pauli matrices are defined as \([\hat{\mathbf{\sigma}}_{i}]_{\gamma}\equiv\hat{c}^{\dagger}_{i\alpha}\sigma^{ \gamma}_{\alpha\beta}\hat{c}_{i\beta}\) (\(\gamma=x,y,z\)). \(J\) is the interaction strength between the classical spins and the electrons and we consider the hopping constant \(t\) as the unit of energy. We adopt the periodic boundary condition on \(N_{x}\times N_{y}\) site system. The total number of the sites is \(N\equiv N_{x}N_{y}\). The Hamiltonian has \(O(3)\) rotational symmetry in the spin sector and discrete translational invariance.We want to calculate statistical expectation value with \(\exp[-\beta H]\) for (3).
### Equivariant Transformer for physical system
Here we introduce an equivariant Transformer for a physical spin system. Input of our Transformer is a spin configuration \(\mathbf{S}\) and output is modified spin configuration. In our formalism, query, key, and value are \(N\times 3\) matrices and they are defined as
\[\mathbf{S}^{\text{Q}}\equiv\hat{W}^{\text{Q}}\mathbf{S},\,\mathbf{S}^{\text{K}}\equiv\hat {W}^{\text{K}}\mathbf{S},\,\mathbf{S}^{\text{V}}\equiv\hat{W}^{\text{V}}\mathbf{S}, \tag{4}\]
respectively. \(S^{\alpha}_{i\mu}\equiv\sum_{\langle i,j\rangle_{k}}W^{\alpha}_{k}S_{j\mu}\), \(W^{\alpha}_{k}\in\mathbb{R}\) (\(\alpha=\text{Q, K, V}\) and \(k=0,1,2,\cdots,\tilde{N}\)). A symbol \(\langle i,j\rangle_{k}\) picks up \(k\)-th nearest neighbors for sites \(i,j\). This procedure can be regarded as a block spin transformation with \(\tilde{N}+1\) free parameters. In this work, we take \(\tilde{N}=6\).
Consequently, we introduce the following (equivariant) self-attention block as,
\[\text{SelfAttention}^{\text{spin}}(\mathbf{S})=\tilde{M}\mathbf{S}^{\text{V}}, \tag{5}\]
where \(\tilde{M}\) is a \(N\times N\) matrix. This \(\tilde{M}\) is defined by \(\left[\tilde{M}\right]_{ij}=\sigma\left(\sum_{\mu=1}^{3}S^{\text{Q}}_{i\mu}S^ {\text{K}}_{j\mu}/\sqrt{3}\right)\), where \(i,j\) are indices for spatial position on the lattice. \(\sigma(\cdot)\) is a nonlinear activation function. Intuitively, the argument \(\sum_{\mu}S^{\text{Q}}_{i\mu}S^{\text{K}}_{j\mu}\) is a set of 2 point correlation functions of the blocked spin field from a point \(j\) to \(i\) and this is invariant under O(3) rotations since rotation matrices cancel out. In this study, we take an activation function \(\sigma(\cdot)\) as ReLU function.
We construct the effective spin field that with multiple attention layers. Our neural neural network architecture is defined with a residual connection,
\[\mathbf{S}^{(l)}\equiv\mathbf{N}\left(\mathbf{S}^{(l-1)}+\text{SelfAttention}^{\text{ spin}}_{\mathbf{\theta}^{(l)}}(\mathbf{S}^{(l-1)})\right), \tag{6}\]
and \(\mathbf{S}^{(0)}\equiv\mathbf{S}\) and \(\mathbf{S}^{\text{eff}}\equiv\mathbf{S}^{(L)}\). \(l\) is an index for layers and \(l=1,2,\cdots,L\). \(\mathbf{\theta}^{(l)}\) represents a set of network trainable parameters in \(l\)-th layer. \(\mathbf{N}(\mathbf{S})\) normalizes the spin vector on each lattice
site, \(\mathcal{N}(\mathbf{S}_{i})=\mathbf{S}_{i}/\|\mathbf{S}_{i}\|\). We call this network architecture the equivariant Transformer, which is schematically visualized in Fig. 2. We remark that if all weights \(W_{k}^{\alpha}\) are \(0\) in \(l\)-th block, the self-attention block (indicated by a purple block in Fig. 2) of \(l\) works as an identity operation and it does nothing since the second term in the argument in (6) is zero (See [8] for details).
The long-range correlation in the DE model with SLMC is partially considered in the linear effective model in the literature [12, 14]. In this work, we replace a bare spin operator \(\mathbf{S}_{i}\) by the effective spin operator from the Transformer as,
\[H_{\text{eff}}[\mathbf{S}]=-\sum_{(i,j)_{n}}J_{n}^{\text{eff}}\mathbf{S}_{i}^{\text{ eff}}\cdot\mathbf{S}_{j}^{\text{eff}}+E_{0}, \tag{7}\]
where \(\mathbf{S}_{i}^{\text{eff}}\) is an output the Transformer at site \(i\). A symbol \(\langle i,j\rangle_{n}\) represents the \(n\)-th nearest neighbor pair. The effective spin \(\mathbf{S}^{\text{eff}}\) is a function of \(\mathbf{S}\) and in total, \(H_{\text{eff}}\) is a function of \(\mathbf{S}\). This effective Hamiltonian contains a number of parameters in the Transformer. In SLMC, \(J_{n}^{\text{eff}}\), \(E_{0}\) and parameters in \(\mathbf{S}_{i}^{\text{eff}}\) is determined by using AdamW and to increase acceptance ratio.
In our calculation, we apply the MH test (2) using the effective model (7) and the Hamiltonian (3) in a form where the fermions is traced out by the exact diagonalization.
## 4 Results
Here we show our results. First, we show results for physical observables. Our results show that it is consistent with exact results. As depicted in Fig. 3, the Self-Learning Monte Carlo (SLMC) method with effective models accurately replicates results from the original theory, exhibiting anti-ferromagnetic order at lower temperatures.
The acceptance rate with the number of layers is shown in Fig. 4, left panel. Using effective models trained at \(T=0.05t\) on a \(6\times 6\) lattice, we set \(N_{\text{MC}}^{\text{original}}=3\times 10^{4}\) and \(N_{\text{MC}}^{\text{eff}}=100\). The linear model SLMC has an acceptance ratio of \(21\%\) due to the omission of long-range spin-spin interaction. As observed, the acceptance ratio improves with an increase in Attention layers.
Finally, we show results for the scaling law of the loss function in Fig. 4 (right panel). The value of loss function is estimated from acceptance [15]. It is known that, large language models with Transformer show a power-type scaling law; Model performance increases in depending on the size of the input data and the number of parameters in the model [16].
Figure 2: (_Left_) Effective spin construction using the Transformer with an Attention block. Yellow is defined by Eq. (6); purple is the attention block. (_Right_) Blue represents the attention block (see main text).
with equivariant Transformer shows the scaling law. There are no direct relation from our model to large language models and the origin of the scaling law has to be studied in another work.
## Acknowledgments
The work of A.T. was partially by JSPS KAKENHI Grant Numbers 20K14479, 22H05112, and 22H05111. Y.N. was partially supported by JSPS KAKENHI Grant Numbers 22K12052, 22K03539, 22H05111 and 22H05114. The calculations were partially performed using the supercomputing system HPE SGI8600 at the Japan Atomic Energy Agency. This work was partially supported by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Grant Number JPMXP1020230411, JPMXP1020230409).
|
2302.13033 | Speaker Recognition in Realistic Scenario Using Multimodal Data | In recent years, an association is established between faces and voices of
celebrities leveraging large scale audio-visual information from YouTube. The
availability of large scale audio-visual datasets is instrumental in developing
speaker recognition methods based on standard Convolutional Neural Networks.
Thus, the aim of this paper is to leverage large scale audio-visual information
to improve speaker recognition task. To achieve this task, we proposed a
two-branch network to learn joint representations of faces and voices in a
multimodal system. Afterwards, features are extracted from the two-branch
network to train a classifier for speaker recognition. We evaluated our
proposed framework on a large scale audio-visual dataset named VoxCeleb$1$. Our
results show that addition of facial information improved the performance of
speaker recognition. Moreover, our results indicate that there is an overlap
between face and voice. | Saqlain Hussain Shah, Muhammad Saad Saeed, Shah Nawaz, Muhammad Haroon Yousaf | 2023-02-25T09:11:09Z | http://arxiv.org/abs/2302.13033v1 | # Speaker Recognition in Realistic Scenario Using Multimodal Data
###### Abstract
In recent years, an association is established between faces and voices of celebrities leveraging large scale audio-visual information from YouTube. The availability of large scale audio-visual datasets is instrumental in developing speaker recognition methods based on standard Convolutional Neural Networks. Thus, the aim of this paper is to leverage large scale audio-visual information to improve speaker recognition task. To achieve this task, we proposed a two-branch network to learn joint representations of faces and voices in a multimodal system. Afterwards, features are extracted from the two-branch network to train a classifier for speaker recognition. We evaluated our proposed framework on a large scale audio-visual dataset named VoxCeleb1. Our results show that addition of facial information improved the performance of speaker recognition. Moreover, our results indicate that there is an overlap between face and voice.
Speaker identification, Multimodal, Face-voice association +
Footnote †: publicationid: pubid: 979-8-3503-2212-5/23/531.00 ©2023 IEEE
## I Introduction
Speaker recognition is a fundamental task of speech processing with applications in a variety of real world domains. However,speaker recognition task is challenging under real world scenarios due to intrinsic and extrinsic variations. Intrinsic variations are associated with the speaker attributes namely gender, age and manner of speaking while extrinsic variations include factors outside the speaker personality such as background noise, microphone noise etc. [1]. This makes speech signals prone to a large degree of variability. In recent years, Convolutional Neural Networks (CNNs) have opened new paths for speaker recognition task where speech signal is converted to spectrograms to be classified with these networks [2, 3]. Though, speaker recognition methods based on CNNs have surpassed the traditional methodologies [2]. However these methods suffered deterioration under real world scenarios. Recently, large scale datasets namely VoxCeleb1 and VoxCeleb2 are curated for speaker recognition task. These datasets are instrumental in developing CNN methods for speaker recognition task. For example, the work in [2, 3] modified standard CNN such as VGG-M [4] and ResNet [5] to perform speaker recognition task. Moreover, both VoxCeleb1 and VoxCeleb2 datasets contain visual information which is instrumental for developing various multimodal applications such as cross modal transfer between face and voice [6, 7, 8, 9, 10], emotion recognition [11], speech separation [12] and face generation [13]. These applications are instrumental in establishing a correlation between faces and voices of speakers. Moreover, it is a well studied fact that humans end up associating voices and faces of people due to the fact that neuro-cognitive pathways for voices and faces share the same structure [14]. Due to the availability of large scale audio-visual datasets such as VoxCeleb1 and association between faces and voices of speakers, a fundamental question arises: _can audio-visual information can be used to improve speaker recognition task?_ To investigate it, we proposed a two-branch network to establish association between faces and voices. The proposed two-branch consists of the following three components: 1) feature extraction of faces and voices with task-specific pre-trained subnetworks, 2) a series of fully connected layers for faces and voices to learn joint multimodal representations, and 3) loss formulations. Afterwards, we extracted the features of audio segments to train a classifier for speaker identification task. Our results indicate that the facial information along with speech segments is instrumental in improve speaker recognition task. Fig. 1 shows the training and testing strategy of the proposed framework.
We summarize our key contributions as follows: 1) We propose a two-branch network to learn multimodal discriminative joint representations of faces and voices of speakers. 2) We present a comparisons of speaker recognition task with only speech segments and multimodal information. 3) Our results indicate that multimodal information considerable improves speaker recognition task.
The paper is organized in the following sections. Section II provides detail overview of the the related work. Section III provides an overview of the proposed framework following by results and discussion in Section IV. Finally, Section V provides concluding remarks of our work.
## II Related Work
We summarize previous work relevant to the speaker recognition and face-voice association tasks.
### _Speaker Recognition_
Sandra et al. [15] laid the groundwork for speaker recognition systems attempting to find a similarity measure between two speech signals by using filter banks and digital spectrograms. We provided a brief overview of speaker recognition methods as clustered in two main categorizes: Traditional and
neural network based methods.
**Traditional methods.** There have been many advancements in the speaker recognition task due to the availability of data and computing resources. However, noisy environment presents a challenging scenario. For several years, the standard speaker recognition task relied on ways that are dependant on features that required manual intervention and domain knowledge. This includes features extracted from low dimensional short term representation of the speech signals such as MEL Frequency Cepstrum Coefficients [16]. The performance of these systems degrade in real world conditions [17, 18]. These systems are dependent on Human ability to extract useful features which is a limitation for the system. Joint Factor Analysis captures both speaker-specific and session-specific variability in speech signals by decomposing the speech signal into a set of latent factors [19]. Support Vector Machine (SVM) classifier has been very successful for robust recognition tasks. However, such methods are very slow, complex and prone to degradation when applied to various real world scenarios. Despite of these advancements, the performance of traditional approaches drop in the presence of noise. Moreover, the performance degraded as the size of data increases. In real world applications there is often no knowledge of environmental noise, transmission channel used and number of speakers in the background. In such cases the traditional methods may degrade in performance.
**Deep Learning Methods.** Over the last few years, advances in computing resources and neural networks has led to more efficient methods. With these advancement, CNN are extensively used in tasks such as speaker recognition. For example, the work in [2, 3] propose a CNN based method to transform speech segments into spectrograms for speaker recognition task. With this advancement, speaker recognition task is moved from manually extracted features to data driven methods. Specifically, the work in [2] train a modified VGG-M on spectrogram extracted directly from speech segment.
### _Face-voice Association_
Recently, an association between faces and voices of speakers are established by leveraging cross-modal verification and matching tasks [6, 7, 9, 10, 20, 21]. The work in [7] used a triplet network to learn joint representation for face-voice association task. Similarly, the work in [22] used a triplet network [23] to minimize the distance between faces and voices by extracting features from face subnetwork [24] and voice subnetwork [25]. Nawaz et. al [6] learns shared latent space by taking advantage from class centers with a single stream network which eliminate the need of pair or triplet samples. On similar grounds, Saeed et. al [20, 21] proposed a light-weight, plug-and-play mechanism that exploits the complementary cues from faces and voices to form enriched fused embeddings and clusters them based on their identity labels via orthogonality constraints.
In contrast to existing methods, our goal is to extract robust features from a multimodal system trained on faces and voices for speaker recognition task.
## III Overall Framework
### _Baseline_
We extracted \(1024\)-D features of VoxCeleb1 dataset with VGGVox subnetwork to establish a baseline. SVM classifier is trained on these features for speaker recognition task. Decision function shape is set to one vs one for multi class classification using SVM, kernel parameter was set to poly while degree of polynomial kernel function was set to 3. After training SVM on the features extracted using VGGVox, accuracy of identification is 91%.
### _Two Branch Network_
Our proposed method consists of training a multimodal system using a two-branch network with face and voice information. Afterwards, the multimodal is used to extract features to train a classifier for speaker recognition task. Face and audio features are extracted from VGGFace [26] and VGGVox [2] subnetworks respectively. Afterwards, face and voice features are input to two independent branches, with each modality specific branch respectively. Features from both subnetworks are fused after passing from fully connected and normalization layers. Fig.2 shows the proposed framework.
Fig. 1: The training and testing strategy for the proposed study. (Green) Shows the face tracks used for training the model. Both audio and visual modalities are used during training phase (Red) Only Audio Modality is available during phase. This protocol will help in knowing the impact of one modality, on performance of another modality.
### _Multimodal Fusion_
We extracted features from face and voice information. These features are then fused and passed to fully connected layer to learn joint representations from both face and voices signals. After fusion a softmax layer is used to learn for the output classes. Softmax function is used as the activation function to predict a multinomial probability distribution where probability is required for multi class classification problems. Features extracted from this two branch network are then used to train a classifier.
### _Loss Formulation_
We want the fused feature to capture the semantics of the speaker or identity. In other words, these features should be able to predict the identity labels with good accuracy. It is possible if the samples belonging to the same class are placed nearby whereas the ones with different classes are far away. A popular choice to achieve this is softmax cross entropy (CE) loss, which also allows stable and efficient training. Now, the loss with fused embeddings is computed as
\[\mathcal{L}_{CE}=\sum_{i}^{C}\mathbf{l}_{i}log(f(\mathbf{l}_{i}), \tag{1}\]
Categorical cross entropy is a very good measure of how distinguishable two discrete probability distributions are from each other [27]. Adam was used as optimizer with learning rate ranging from \(0.01\) to \(0.13\). Network was trained using batch size of \(512\), \(1024\), \(2048\) and \(4096\). Maximum results were achieved with \(0.04\) as learning rate and \(2048\) as batch size.
## IV Experiments and Results Discussions
### _Training Detail and Dataset_
**Dataset.** VoxCeleb1 is a large-scale dataset of audio-visual human speech videos extracted 'in the wild' from YouTube. These videos contain real world noise with background chatter, overlapping speech, laughter and recording equipment noise. Table I provides shows statistics of the dataset.
**Training.** Inspired by [20], we propose a two-branch network to analyze the effect of multimodal information on speaker recognition task. Face embeddings were extracted from pre-trained VGGFace [26] while audio embeddings are extracted from VGGVox [2]. Face and voice embeddings were passed as input to two-branch network with subnetworks containing multiple dense layers followed by dropout and normalization layers. Dropout of 10% and 20% was used during training. Normalized embeddings from two subnetworks are then fused and passed through dense and normalization layers to softmax layer containing \(1251\) hidden units for the classes in dataset. Training is performed with multiple margin values, dropout, batch size, loss functions and learning rates.
**Testing** Features are extracted from the two branch network to train and test support vector machine classifier. We extracted feature with \(1024\)-D size form the fusion layer of the model. Feature extraction is performed in two ways:
* _Aiding with face signals:_ During this phase face and speech signals were provided as input to trained two branch network and speech features were extracted from it.
* _Aiding without face signals:_ During this phase only speech signals were provided as input to trained two branch network and speech features were extracted from it. For face subnetwork input vector was set to zero.
**Speaker identification.** Features extracted are normalized and used to train and test a SVM classifier. Kernel parameter of support vector machine is set to _poly_ while decision function shape was set to _ovo_. Remaining parameters are set to default during training.
### _Results from Voice Only Features_
We extracted the feature from VGGVox subsnetwork and trained a classifier to establish a baseline, resulting in 91%
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **Train** & **Test** & **Total** \\ \hline \# of speakers & 1,251 & 1,251 & 1,251 \\ \hline \# of videos & 21,245 & 1,251 & 22,496 \\ \hline \# of utterances & 145,265 & 8,251 & 153,516 \\ \hline \end{tabular}
\end{table} TABLE I: VoxCeleb1 Identification Split
Fig. 2: (a) Independent modality-specific embedding networks are leverage for off-the-shelf feature extraction (Box) The proposed **Two-branch Model** with independent modality-specific FC layers. **Element-wise multiplication** is used for Fusion of two branches. (b) During testing phase only audio data is used. The visual data is set to 0. Features of audio samples from training and testing splits are extracted. Later on, an SVM is trained on these features to report % accuracy.
identification performance. Fig. (a)a shows confusion matrix of the baseline results. Moreover, stochastic neighbor embedding for a sample test set in Fig. (a)a shows that the network has distributed the features for multiple classes far apart which has reduced the accuracy of classifier on those features.
### _Results from Aided Facial Information_
Table II shows results of a classifer trained on features extracted from the two-branch network. Confusion matrix of two branch fused features can be seen in Fig.(b)b. Moreover, T-SNE plot for a sample test set in Fig.(b)b shows that the network has distributed the features more efficiently where same class features are close to each other that has resulted in better learning of SVM for speaker recognition task.
Experiments show that when speech signals are aided by faces during feature extraction, speaker recognition is improved significantly. Without facial information, the system is likely to be effected with noise. When we have face information aiding the voice information degraded in one mode can be recovered by the other.
## V Conclusion
In this work we proposed that presence of multimodal information improve the performance of speaker recognition task. We propose the two-branch network to extract features from
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Method** & **Loss** & **Top-1 \%** \\ \hline I-vectors + PLDA + SVM [2] & - & 60.8 \\ \hline CNN [2] & - & 80.5 \\ \hline VGGVox(Baseline) & - & 91.0 \\ \hline Network A (ndimms=128) [28] & Center+Softmax & 84.6 \\ \hline Network B (ndimms=128) [28] & Center+Softmax & 89.5 \\ \hline
**Ours** & **CE Loss** & **97.2** \\ \hline \end{tabular}
\end{table} TABLE II: Speaker identification performance on VoxCeleb1. (Higher is better)
Fig. 4: T-SNE plot of test data: (left) T-SNE plot of voice features extracted from VGGVox Network (right) T-SNE plot of voice features extracted from proposed two-branch network trained on multimodal data. Features of \(30\) random identities are selected.
Fig. 3: Confusion Matrix of test data: (left) Confusion Matrix of features extracted from VGGVox Network (right) Confusion Matrix of features extracted from proposed two-branch network trained on multimodal data. Confusion matrix shows results for \(20\) identities. (Best viewed in color and zoomed in)
both face and voice signals. SVM was used to classify speaker based on features from single domain and multi domain. We obtained promising results when we used both face and speech information as input to our model. The identification performance achieved using our approach is higher compared to VGGVox which only exploit single modality. Also the results using both speech and face signals while extracting features from our model are better as compared to inputting only speaker information which clearly indicates that face information can aid in speaker recognition. Also, this increase in speaker recognition performance with the aid of facial information gives us clue that though there is some association between face and voice of a person. Another very important contribution is that this work opens the research path for classification and retrieval tasks of other modalities.
**Acknowledgements.** Authors gratefully acknowledge the support of Swarm Robotics Lab, NCRA for providing the necessary equipment and resources for our experiments.
|
2308.10561 | Spatial Transform Decoupling for Oriented Object Detection | Vision Transformers (ViTs) have achieved remarkable success in computer
vision tasks. However, their potential in rotation-sensitive scenarios has not
been fully explored, and this limitation may be inherently attributed to the
lack of spatial invariance in the data-forwarding process. In this study, we
present a novel approach, termed Spatial Transform Decoupling (STD), providing
a simple-yet-effective solution for oriented object detection with ViTs. Built
upon stacked ViT blocks, STD utilizes separate network branches to predict the
position, size, and angle of bounding boxes, effectively harnessing the spatial
transform potential of ViTs in a divide-and-conquer fashion. Moreover, by
aggregating cascaded activation masks (CAMs) computed upon the regressed
parameters, STD gradually enhances features within regions of interest (RoIs),
which complements the self-attention mechanism. Without bells and whistles, STD
achieves state-of-the-art performance on the benchmark datasets including
DOTA-v1.0 (82.24% mAP) and HRSC2016 (98.55% mAP), which demonstrates the
effectiveness of the proposed method. Source code is available at
https://github.com/yuhongtian17/Spatial-Transform-Decoupling. | Hongtian Yu, Yunjie Tian, Qixiang Ye, Yunfan Liu | 2023-08-21T08:36:23Z | http://arxiv.org/abs/2308.10561v2 | # Spatial Transform Decoupling for Oriented Object Detection
###### Abstract
Vision Transformers (ViTs) have achieved remarkable success in computer vision tasks. However, their potential in rotation-sensitive scenarios has not been fully explored, and this limitation may be inherently attributed to the lack of spatial invariance in the data-forwarding process. In this study, we present a novel approach, termed Spatial Transform Decoupling (STD), providing a simple-yet-effective solution for oriented object detection with ViTs. Built upon stacked ViT blocks, STD utilizes separate network branches to predict the position, size, and angle of bounding boxes, effectively harnessing the spatial transform potential of ViTs in a divide-and-conquer fashion. Moreover, by aggregating cascaded activation masks (CAMs) computed upon the regressed parameters, STD gradually enhances features within regions of interest (RoIs), which complements the self-attention mechanism. Without bells and whistles, STD achieves state-of-the-art performance on the benchmark datasets including DOTA-v1.0 (82.24% mAP) and HRSC2016 (98.55% mAP), which demonstrates the effectiveness of the proposed method. Source code is available at [https://github.com/yuhongtian17/Spatial-Transform-Decoupling](https://github.com/yuhongtian17/Spatial-Transform-Decoupling).
1University of Chinese Academy of Sciences
2Institute of Automation, Chinese Academy of Sciences
[email protected], [email protected], [email protected], [email protected]
## Introduction
Recent years have witnessed substantial progress and notable breakthroughs in computer vision, which can be primarily attributed to the advent of Vision Transformer (ViT) models. Benefiting from the powerful self-attention mechanism, ViTs consistently achieve new state-of-the-art performance across vision tasks including classification [14, 15, 16, 17, 18], object detection [11, 19, 18], and semantic segmentation [12, 13]. Despite the progress made, the capability of ViTs in spatial transform invariance has not been fully explored and understood. In many scenarios, ViTs are treated as a universal approximator, expected to automatically handle various vision data irrespective of their orientations and appearances.
In this study, we aim to tap into the potential of ViTs in tackling the challenging spatial transform issue of vision tasks, \(e.g.\), detecting objects in remote sensing scenarios, where images are captured from a bird's-eye view and target objects may appear in arbitrary orientations. To determine an oriented bounding box, initial research efforts [14, 15] suggested a direct regression approach for spatial transform parameters, including the spatial coordinates (\(x\) and \(y\)), width and height (\(w\) and \(h\)), and the angle (\(\alpha\)). However, such a straightforward regression strategy often results in discontinuous boundaries due to the inconsistency in angle representation and periodicity, as well as the suboptimal design of loss functions [18, 19, 18].
Rather than solely concentrating on developing more sophisticated angle representations or refining training objectives, it is essential to tackle the foundational issue of effectively extracting rotation-related features. In particular, we enhance the conventional structure of the bounding box prediction head by allocating distinct feature maps to predict parameters associated with diverse semantic interpretations, such as the object's location, shape, and orientation. This
Figure 1: Conventional approaches (upper) estimate the position, size, and angle using a single RoI feature. In contrast, STD (lower) predicts and refines the parameters of bounding boxes in a divide-and-conquer (decoupled) manner.
approach fundamentally guides the feature extraction process in a controlled and effective manner. Furthermore, by estimating the parameters associated with a particular spatial transform at each stage, this step-wise strategy facilitates the progressive refinement of estimation results, which in turn can contribute to improving the overall accuracy of the model.
Building upon the insights and discussions presented earlier, we propose a Spatial Transform Decoupling (STD) approach, a straightforward yet effective solution for oriented object detection, which decouples the estimation of transformation parameters related to object positions, sizes, and angles, Fig. 1. Concretely, in STD, a multi-branch network design is utilized, where each individual branch is designated to predict parameters that correspond to distinct spatial transforms. From another perspective, STD supplements the self-attention mechanism by allocating distinct responsibilities to self-attention modules at different stages of parameter prediction, which effectively utilizes the spatial transform capabilities of ViTs in a divide-and-conquer fashion. Furthermore, STD integrates cascaded activation masks (CAMs) to enhance the features extracted by stacked Transformer blocks, effectively suppressing background information while highlighting foreground objects. By refining features within regions of interest (RoIs) using CAMs, the feature representation for oriented objects is both decoupled and progressively enhanced. As a simple-yet-effective design, STD can be integrated with various ViT-based detectors and achieve significant performance improvements over the state-of-the-art methods. For instance, STD achieves 82.24% mAP on DOTA-v1.0 and 98.55% mAP on HRSC2016, surpassing the accuracy of all existing detectors.
The contributions of this work are summarized as:
* The Spatial Transform Decoupling (STD) approach is introduced to address the challenge of oriented object detection by estimating parameters for spatial transforms through separate network branches. STD demonstrates remarkable generalizability and can seamlessly integrate with a variety of ViT detectors.
* Cascade activation masks (CAMs) are integrated into the self-attention module at each layer of ViT to progressively enhance the features. CAMs offer spatially dense guidance, directing the attention maps to focus more on foreground objects rather than the background.
* Extensive experimental results demonstrate that STD surpasses state-of-the-art methods by a significant margin across a variety of oriented object detection benchmarks.
## Related Work
### Oriented Object Detection
Existing methods have investigated oriented object detection from the perspectives of feature robustness, region proposal refinement, and target regression enhancement.
**Feature Invariance/Equivalence.** Invariance or equivalence is an essential problem when designing/learning visual feature representations. During the era of hand-crafted features, SIFT Lowe (1999) utilizes dominant orientation-based feature alignment to achieve invariance to rotation and robustness to moderate perspective transforms. With the rise of CNNs, STN Jaderberg et al. (2015) achieves rotation invariance by manipulating the feature maps according to the transformation matrix estimated using a sub-CNN. Group equivariant CNN Cohen and Welling (2016) proposes a natural generalization of CNNs, enabling them to group objects from the same categories regardless of orientations. ORN Zhou et al. (2017) introduces Active Rotating Filters (ARFs), which dynamically rotate during the convolution process and thereby produce feature maps with location and orientation explicitly encoded. ReDet Han et al. (2021) achieves rotation-equivariant convolution (e2cnn Weiler and Cesa (2019)) by incorporating a rotation-invariant backbone, which normalizes the spatial and orientational information of features.
**Region Proposal Refinement.** RoI Transformer Ding et al. (2019) enhances two-stage detectors by iteratively repeating the RPN-RoI head structure Ren et al. (2015); He et al. (2017). Oriented RCNN Xie et al. (2021) streamlines the process of oriented proposal generation and directly predicts oriented proposals based on the features extracted by the backbone and FPN (Feature Pyramid Network) Lin et al. (2017) module. Drawing inspiration from a similar concept, R\({}^{3}\)Det Yang et al. (2021) introduces a feature refinement stage to the orientation regression head.
**Target Regression Enhancement.** Gliding Vertex Xu et al. (2020) converts the task of rotated box prediction into regressing the offset for horizontal boxes along the four edges. CSL Yang and Yan (2020) addresses the potential abrupt change in loss computation by proposing a label-based solution for angle prediction. CFA Guo et al. (2021) and Oriented RepPoints Li et al. (2022) make improvements to the nine-point prediction methods Yang et al. (2019). GWD Yang et al. (2021), KLD Yang et al. (2021), and KFIoU Yang et al. (2022) use two-dimensional Gaussian distributions to solve the angle prediction problem.
Despite the progress of various approaches proposed, few of them explore the impact of decoupling spatial transform, \(e.g.\), position \((x,y)\), size \((w,h)\), and angle (\(\alpha\)), on the hierarchical feature representation.
### Vision Transformer
Drawing inspiration from the NLP field Vaswani et al. (2017); Devlin et al. (2018), ViTs divide the image into multiple patch tokens for feature extraction and processing Dosovitskiy et al. (2020); Liu et al. (2021); Zhang et al. (2022); Tian et al. (2023). It has attracted significant attention in recent years owing to its remarkable success in computer vision tasks. DETR Carion et al. (2020) is a representative work that extends ViTs towards object detection, establishing the fundamental paradigm for applying ViT to this task. MAE He et al. (2022) proposes a novel pre-training mode that deviates from the classic fully supervised pre-training era of CNNs He et al. (2019). Building upon MAE, ViTDet Li et al. (2022) and MIMDet Fang et al. (2019),
2022), _etc_, have made significant advancements in the development of ViT for object detection.
While Vision Transformers have demonstrated promising results in various visual tasks, they still encounter challenges in leveraging their advantages in handling object spatial transform, \(e.g.\), oriented object detection. Recently, RVSA (Zhang et al., 2022; Wang et al., 2022) made an initial attempt to improve the structure of ViT for oriented object detection tasks, which was achieved by updating Window Attention (Liu et al., 2021; Li et al., 2022; Fang et al., 2022) to Rotated Varied Size Attention. Nevertheless, these methods solely rely on the self-attention mechanism to handle various spatial transformations, without explicitly introducing dense guiding information.
## The Proposed Method
This section starts with an elucidation of the motivation behind Spatial Transform Decoupling (STD). Subsequently, a detailed explanation of the overall structure of STD is provided, offering an in-depth understanding of its architectural design and how it functions. Next, we delve into a detailed decoupling structure and introduce the cascaded activation masks (CAMs) for progressive feature refinement. Special emphasis is placed on their significant contribution to the overall performance enhancement of STD.
### Overview
The proposed STD can be readily seen as an extension of existing oriented object detectors, and an overview of the architecture is depicted in Figure 2. The primary innovation of STD resides within the detection head module, while for other components, such as the backbone, Region Proposal Network (RPN), and loss functions, we maintain consistency with mainstream detection frameworks (Ren et al., 2015; Xie et al., 2021). As a result, STD demonstrates significant generalizability, enabling its compatibility with a variety of detectors. Specifically, for the purpose of a clear explanation, we adopt STD within the Faster RCNN framework (Ren et al., 2015) as the default configuration. Throughout the experiments, we will also showcase the performance of STD in combination with other detectors, such as Oriented RCNN (Xie et al., 2021).
ViTs have demonstrated impressive performance across a broad spectrum of visual tasks. However, their utilization in the context of oriented object detection remains relatively unexplored. Nevertheless, existing pre-trained Transformer models are capable of extracting meaningful features, which contributes to establishing a strong foundation for achieving impressive performance in oriented object detection tasks. Therefore, we adopt a design inspired by the imTED (Zhang et al., 2022) detector and substitute the backbone as well as head modules of the two-stage detector with **Vision Transformer blocks** pre-trained using the MAE method.
Specifically, we employ the ViT-small model as the backbone instead of ResNet-50, and use a 4-layer Transformer block to replace the conventional detection head in Faster RCNN built with fully connected (FC) layers. Please note that the ViT-small backbone is obtained from the MAE pre-trained encoder, and the 4-layer Transformer block is derived from the pre-trained decoder, which forms the MAEB-BoxHead module. Once the regions of interest (RoIs) are obtained, the feature maps are uniformly divided into 7\(\times\)7 tokens, which are subsequently fed into the parameter re
Figure 2: The framework of the proposed Spatial Transform Decoupling (STD) method. The detailed structure of Transformer blocks integrated with activation masks (TBAM) is shown on the left.
gression head, as depicted in Figure 2. Experiments are conducted to validate the effectiveness of this framework in addressing the oriented object detection problem, and the results are presented in Table 1. In subsequent experiments, the pre-trained MAEBBoxHead is used as the baseline method by default.
Afterward, the proposed Spatial Transform Decoupling (STD) module is built upon the aforementioned backbone network. To enhance the performance of decoupling, we employ a hierarchical structure to predict the bounding box parameters in a layer-wise manner, and further enhance it by leveraging the guidance provided by the cascaded activation masks (CAMs). Detailed explanations of these contributions will be provided in the following two subsections.
### Decoupled Parameter Prediction
As highlighted in the Introduction Section, different parameters of an oriented bounding box are expected to possess distinct properties (e.g., rotation-variance or rotation-invariance), and therefore, they should be computed based on different feature maps. However, most conventional methods [14, 15] depend on a single feature map to predict all bounding box parameters, potentially resulting in the issue of coupled features. To solve this problem, we introduce a multi-branch network to achieve hierarchical and disentangled parameter prediction.
As shown in Figure 2, we compute different components of the oriented bounding box based on the feature map at various stages of the Transformer decoder in a cascaded manner. Specifically, \(\{x,y\}\), \(\alpha\), \(\{w,h\}\) and the class score are obtained based on the feature maps of the \(1st\), \(2nd\), \(3rd\), and \(4th\) layer of the Transformer block, respectively (the rationale behind this design will be detailed in ablation study). After obtaining the discrete output from each Transformer block, we first reshape them into 7\(\times\)7 feature maps and then apply convolutional layers to further enhance the features. Next, after globally averaging the resultant feature maps, FC layers are adopted to make the final predictions, which are then used to produce the bounding box and CAMs (details explained in the next subsection). Please note that the proposed mechanism is highly generalizable, as one can easily adjust the number of estimated parameters by simply adding or removing predicting branches.
### Cascaded Activation Masks
To further regulate the decoupling process and improve the accuracy of prediction results, we intend to provide dense guidance for bounding box prediction at each stage. To achieve this goal, cascaded activation masks (CAMs) are introduced to provide pixel-level supervision to enhance the features generated by the multiple branches.
An ideal activation mask with binary values should have the regions corresponding to foreground objects assigned with a value of 1, and all background locations set to 0. To align the activated regions with the foreground area as much as possible, we propose to generate activation masks by incorporating information from both the proposal and the predicted bounding box.
To be specific, the center point, size, and orientation of the estimated bounding box, \(i.e.\), \((x_{b},y_{b},w_{b},h_{b},\alpha_{b})\), could be expressed as
\[x_{b} =x_{p}+w_{p}\cdot dx, \tag{1}\] \[y_{b} =y_{p}+h_{p}\cdot dy,\] \[w_{b} =w_{p}\cdot e^{dw},\] \[h_{b} =h_{p}\cdot e^{dh},\] \[\alpha_{b} =d\alpha\]
where \((x_{p},y_{p})\) and \((w_{p},h_{p})\) respectively denote the center coordinates and shape of the proposal, and \((dx,dy,dw,dh,d\alpha)\) are the predicted values related to the oriented bounding box obtained from STD. Then, with the proposal placed in a rectangular coordinate system \((x,y)\) and its four vertices located at \((-1,-1)\), \((1,-1)\), \((1,1)\), and \((-1,1)\), the affine transformation against the bounding box \((x^{{}^{\prime}},y^{{}^{\prime}})\) could be formulated as (please refer to A.1. Detailed Derivation of the Spatial Transform):
\[\small\left(\begin{array}{c}x^{{}^{\prime}}\\ y^{{}^{\prime}}\end{array}\right)=\left(\begin{array}{cc}\cos d\alpha\cdot e ^{dw}&-\sin d\alpha\cdot e^{dh}\cdot\frac{h_{p}}{h_{p}}\\ \sin d\alpha\cdot e^{dw}&\frac{\pi_{w}}{h_{p}}&\cos d\alpha\cdot e^{dh}\end{array} \right)\left(\begin{array}{c}x\\ y^{{}^{\prime}}\end{array}\right)+\left(\begin{array}{c}2\cdot dx\\ 2\cdot dy\end{array}\right). \tag{2}\]
As illustrated in Figure 3, the activation mask could be produced by applying the affine transformation in Eq.(2) to a matrix \(\mathbf{AM}\) with all elements set to 1. After integrating \(\mathbf{AM}\) into the self-attention module in STD, the mapping function implemented by **Transformer Block with Activation Mask** (TBAM, see Figure 2) could be written as
\[\text{TBAM}(\mathbf{Q},\mathbf{K},\mathbf{V},\mathbf{AM})=\text{softmax}(\frac{\mathbf{Q}\mathbf{K}^ {T}}{\sqrt{d}})\mathbf{V}^{\prime} \tag{3}\]
where \(\mathbf{V}^{\prime}\) is obtained by performing an element-wise multiplication between \(\mathbf{V}\) and \(\mathbf{AM}\) (\(\mathbf{V}^{\prime}=\mathbf{V}\odot\mathbf{AM}\)). By multiplying with \(\mathbf{V}\), \(\mathbf{AM}\) could direct the model's attention by highlighting the foreground while suppressing the background. In the forward propagation process, the utilization
\begin{table}
\begin{tabular}{c|c|c|c} & 2FCBBoxHead & \begin{tabular}{c} MAEBBoxHead \\ (Not Pre-trained) \\ \end{tabular} &
\begin{tabular}{c} MAEBBoxHead \\ (Pre-trained) \\ \end{tabular} \\ \hline mAP & 69.67 & 69.16 & 71.07 \\ \end{tabular}
\end{table}
Table 1: Performance comparison of Faster RCNN with the same backbone (ViT-small) but different heads. The training is carried out on the DOTA-v1.0 dataset [20] for 12 epochs.
Figure 3: The translation between the predicted bounding box and the activation mask after affine transformation. The blue box represents the proposal region and the red box represents the activation mask.
of activation maps guides the decoupled predicted values in earlier stages to direct the self-attention mechanism of the subsequent Transformer blocks; while during the backward propagation process, the discrepancies in the decoupled predicted values from later stages are propagated through the activation maps, affecting the feature extraction process of the previously decoupled predicted values. This cascaded architecture enhances the interconnection between decoupled predicted values at various levels.
## Experiment
### Experimental Setting
#### Datasets
Experiments are conducted on two commonly-used datasets for oriented object detection, namely DOTA-v1.0 [11] and HRSC2016 [10]. **DOTA-v1.0** is a large-scale object detection dataset for optical remote sensing images, which comprises 2,806 images with diverse dimensions, spanning from 800 to 4,000 pixels in width and height. The dataset consists of a total of 188,282 individual instances distributed across 15 different classes, and it is partitioned into training, validation, and test sets containing 1,411, 458, and 937 images, respectively. **HRSC2016** is an optical remote sensing image dataset designed for ship detection. It comprises 1,680 images with diverse widths and heights, ranging from 339 pixels to 1333 pixels. The commonly used training, validation, and test sets consist of 436, 181, and 444 images, respectively.
#### Implementation Details
The experimental results are obtained on the MMRate platform [15]. We employ the checkpoints of ViT-small/-base [14] and HiViT-base [14], which are all pre-trained using the MAE [10] self-supervised strategy. We pre-train the ViT-small model and directly utilize the open-sourced checkpoints for the other models, wherein all the Transformer blocks of both the encoder and decoder are fully inherited.
For a fair comparison, we adopt a similar experimental configuration as used in the benchmark methods [23, 24, 25]. The performance evaluation on DOTA-v1.0 follows a _multi-scale_ setting, where the model is trained on the trainval-set and tested on the test-set. In contrast, for the ablation study on DOTA-v1.0, a _single-scale_ setting is adopted, where the model is trained on the train-set and tested on the val-set. All images would be cropped into patches of size 1024\(\times\)1024 with an overlap of 500/200 pixels in _multi-scale/single-scale_ setting. In the _multi-scale_ setting, images are resized by 0.5\(\times\), 1.0\(\times\), and 1.5\(\times\) before undergoing the cropping process, and no scale adjustment is adopted in the _single-scale_ setting. In the HRSC2016 dataset, images are resized in such a way that the larger dimension of width and height becomes 800, while maintaining their original aspect ratios.
During training, data augmentation techniques, including horizontal/vertical flipping and random rotation, are employed to increase the scale and diversity of training data. The model is trained for 12 epochs on DOTA-v1.0 and 36 epochs on HRSC2016. We adopt the AdamW optimizer [13] with an initial learning rate of \(1e^{-4}\)/\(2.5e^{-4}\) for DOTA-v1.0/HRSC2016, a weight decay of \(0.05\), and a layer decay of \(0.75\)/\(0.90\) for ViT/HiViT. All experiments are conducted on 8\(\times\)A100 GPUs with a batch size of \(8\).
### Ablation Study
#### Feasibility of Decoupled Parameter Prediction
Prior to assessing the feasibility of the decoupling approach, we first investigate the performance of bounding box prediction relying solely on a single feature map in various levels. As shown in Table 1(a), a consistent enhancement in performance is evident when employing feature maps from deeper layers for bounding box prediction. This observation suggests that while deep feature maps contribute to improved feature representations for object detection, shallower layers still contain valuable information for bounding box prediction, as their performance is only slightly lower.
We also compare the performance with the decoupled parameter estimation approach. As shown in Fig. 2, we used the feature maps from the first block to predict \(x\) and \(y\), the second block for \(\alpha\), the third block for \(w\) and \(h\), and the final stage for the class score \(cls\). Without the aid of CAMs, the performance of this decoupled configuration is slightly lower than predicting bounding boxes with the feature maps from the third/fourth block by a margin of 0.09%/0.28% mAP. These results provide evidence that suggests the feasibility and potential of designing a decoupled structure. As previously mentioned, we introduce CAMs to enhance the
Figure 4: Visualization of attention maps. Compare to the baseline Transformer, the attention maps in STD (bk1 to bk4) exhibit a stronger alignment with the semantic interpretation of the parameter estimated at the respective stage.
decoupling process, further reducing the performance gap between decoupled and non-decoupled approaches.
#### Rationality of Model Design
To showcase the rationale behind the detailed architecture of STD, we investigate the impact of both the order of parameter decoupling and the location of activation mask integration. As shown in Table (b)b, the order of decoupling has a significant influence on the performance of STD, and the optimal result is achieved under the configuration \(\{x,y\}\rightarrow\alpha\rightarrow\{w,h\}\). This phenomenon can be explained by the fact that alternative prediction orders fail to ensure that the RoI could consistently cover the entire foreground object.
#### Adaptability to Different Backbones
As discussed in the preceding section, as long as the RoI fully covers the foreground object, the activation masks can effectively activate the entire foreground region. Hence, our approach is expected to be adaptable to other RoI extraction methodologies. As indicated in Table 3, the decoupling module of STD also demonstrates strong performance when incorporated into the Oriented RCNN object detector [20], which showcases the remarkable generalizability of our method.
#### Visualization of Attention Maps
In Figure 4, we present visualizations of the attention maps from different decoder layers of STD and a baseline Transformer model (Rotated Faster RCNN+ViT-S). In comparison to the baseline Transformer, the attention maps generated by the STD model at each stage exhibit a closer alignment with the semantic meaning of the corresponding predicted parameter. Specifically, when obtaining the positional information \(x,y\), the attention tends to concentrate around the center of the object. Following this, the attention becomes more widespread, targeting one end and one edge of the object to capture information about its orientation \(\alpha\). Finally, the attention predominantly focuses on both ends of the object, aiming to capture details related to its scale. This phenomenon is likely a result of the decoupled bounding box prediction mechanism and the step-wise guidance provided by the activation masks, further confirming the effectiveness of the proposed architectural approach.
\begin{table}
\end{table}
Table 2: Results of diagnostic studies. (a) Comparison of detection accuracy achieved using feature maps from various levels of Transformer block (bk1 to bk4). \(\square\) denotes coupled bounding box prediction and \(\diamondsuit\) refers to class score estimation. \(\dagger\) indicates the performance of original MAEBBoxHead while \(\ddagger\) indicates our STD’s. (b) The influence of decoupling order on the overall performance of STD.
\begin{table}
\end{table}
Table 3: Comparison of object detection accuracy achieved by different RoI extraction networks.
Figure 5: Comparison of detection results. STD demonstrates superior performance in reducing false detections ((a), (b), and (c)), better discerning clustered objects ((c) and (e)), and improving the alignment with oriented objects ((c), (d), and (e)).
Qualitative ComparisonWe also present a qualitative comparison between the results of STD and the baseline Transformer in Figure 5. STD is capable of mitigating the occurrence of false negatives/positives (as depicted in Figure 5(a), (b)), while also achieving notably improved alignment with oriented foreground objects across different scales (as shown in Figure 5(c), (d), (e)). This observation highlights the capability of STD in effectively modeling the object orientations without compromising the precision in capturing spatial location and shape information.
### Performance Comparison
In this section, we present comprehensive experimental results obtained on the DOTA-v1.0 and HRSC2016 datasets. For additional results on the HRSID dataset [20] and the MS COCO dataset [21], please refer to A.3. Experiment Results on HRSID and A.4. Experiment Results on MS COCO.
Dota-v1.0We provide a comprehensive comparison of our method with state-of-the-art approaches on DOTA-v1.0, Table 4. We evaluate STD within Oriented RCNN frameworks (STD-O) and both ViT and HiViT models are used for evaluations. Remarkably, STD achieves new state-of-the-art performance in both frameworks. When coupled with ViT-B and HiViT-B backbones, STD achieves 81.66% and 82.24% mAP, respectively, surpassing the previous best results.
the decoupling process by incorporating cascaded activation masks, which introduce dense guidance into the self-attention mechanism. The extensive and convincing experiments have demonstrated the effectiveness of STD on multiple popular benchmarks. To the best of our knowledge, STD is a pioneering method that tackles oriented object detection in remote sensing with a structural perspective. Notably, the Transformer-based nature of STD enables seamless integration with various advanced pre-trained models, providing significant benefits to the research community.
|
2306.06877 | Boosting Breast Ultrasound Video Classification by the Guidance of
Keyframe Feature Centers | Breast ultrasound videos contain richer information than ultrasound images,
therefore it is more meaningful to develop video models for this diagnosis
task. However, the collection of ultrasound video datasets is much harder. In
this paper, we explore the feasibility of enhancing the performance of
ultrasound video classification using the static image dataset. To this end, we
propose KGA-Net and coherence loss. The KGA-Net adopts both video clips and
static images to train the network. The coherence loss uses the feature centers
generated by the static images to guide the frame attention in the video model.
Our KGA-Net boosts the performance on the public BUSV dataset by a large
margin. The visualization results of frame attention prove the explainability
of our method. The codes and model weights of our method will be made publicly
available. | AnLan Sun, Zhao Zhang, Meng Lei, Yuting Dai, Dong Wang, Liwei Wang | 2023-06-12T05:30:09Z | http://arxiv.org/abs/2306.06877v1 | # Boosting Breast Ultrasound Video Classification by the Guidance of Keyframe Feature Centers
###### Abstract
Breast ultrasound videos contain richer information than ultrasound images, therefore it is more meaningful to develop video models for this diagnosis task. However, the collection of ultrasound video datasets is much harder. In this paper, we explore the feasibility of enhancing the performance of ultrasound video classification using the static image dataset. To this end, we propose KGA-Net and coherence loss. The KGA-Net adopts both video clips and static images to train the network. The coherence loss uses the feature centers generated by the static images to guide the frame attention in the video model. Our KGA-Net boosts the performance on the public BUSV dataset by a large margin. The visualization results of frame attention prove the explainability of our method. _The codes and model weights of our method will be made publicly available._
Keywords:Breast ultrasound classification Ultrasound video Coherence loss.
## 1 Introduction
Breast cancer is a life-threatening disease that has surpassed lung cancer as leading cancer in some countries and regions [20]. Breast ultrasound is the primary screening method for diagnosing breast cancer, and accurately distinguishing between malignant and benign breast lesions is crucial. This task is also an essential component of computer-aided diagnosis. Since each frame in an ultrasound video can only capture a specific view of a lesion, it is essential to aggregate information from the entire video to perform accurate automatic lesion diagnosis. Therefore, in this study, we focus on the classification of breast ultrasound videos for detecting malignant and benign breast lesions.
Despite the fact that ultrasound videos contain more information than static images, most previous studies have focused on static image classification [11, 2, 27]. One major difficulty in using ultrasound videos for diagnosis lies in the collection of video data with pathology gold standard results. Firstly, during general ultrasound examinations, sonographers usually only record keyframe images and not entire videos. Secondly, for prospectively collected videos, additional effort must be made to track the corresponding pathological results. As a result, while there are many breast ultrasound image datasets [1, 28], video datasets are scarce. Currently, there is only one breast video dataset [15] available, which is relatively small, containing only 188 videos.
Given the difficulties in collecting ultrasound video data, we investigate the feasibility of enhancing the performance of ultrasound video classification using a static image dataset. To achieve this, we first analyze the relationship between ultrasound videos and images. The images in the ultrasound dataset are keyframes of a lesion that exhibit the clearest appearance and most typical symptoms, making them more discriminative for diagnosis. Although ultrasound videos provide more information, the abundance of frames may introduce redundancy or vagueness that could disrupt classification. From the aspect of feature distribution, as shown in Fig. 1, the feature points of static images are more concentrated, while the feature of video frames sometimes are away from the class centers. Frames far from the centers are harder to classify. Therefore, it is a promising approach to guide the video model to pay more attention to important frames close to the class center with the assistance of static keyframe images. Meanwhile, our approach aligns with the diagnosis of ultrasound physicians, automatically evaluates the importance of frames, and diagnoses based on the information of key frames. Additionally, our method provides interpretability through key frames.
Figure 1: Feature distribution of video frames from BUSV [15] and static images from BUSI [1]. We use a 2D ResNet trained on ultrasound images to get the features.
In this paper, we propose a novel Keyframe Guided Attention Network (KGA-Net) to boost ultrasound video classification. Our approach leverages both image (keyframes) and video datasets to train the network. To classify videos, we use frame attention to predict feature weights for all frames and aggregate them to make the final classification. The feature weights determine the contribution of each frame for the final diagnosis. During training, we construct category feature centers for malignant and benign examples respectively using center loss [26] on static image inputs and use the centers to guide the training of video frame attention. Specifically, we propose coherence loss, which promotes the frames close to the centers to have high attention weights and decreases the weights for frames far from the centers. Due to the feature centers being generated by the larger scale image dataset, it provides more accurate and discriminative feature centers which can guide the video frame attention to focus on important frames, and finally leads to better video classification.
Our experimental results on the public BUSV dataset [15] show that our KGA-Net significantly outperforms other video classification models by using an external ultrasound image dataset. Additionally, we visualized attention values guided by the coherence loss. The frames with clear diagnostic characteristics are given higher attention values. This phenomenon makes our method more explainable and provides a new perspective for selecting keyframes from video.
In conclusion, our contributions are as follows:
1. We analyze the relationship between ultrasound video data and image data, and propose the coherence loss to use image feature centers to guide the training of frame attention.
2. We propose KGA-Net, which adopts a static image dataset to boost the performance of ultrasound video classification. KGA-Net significantly outperforms other video baselines on the BUSV dataset.
3. The qualitative analysis of the frame attention verifies the explainability of our method and provides a new perspective for selecting keyframes.
## 2 Related Works
**Breast Ultrasound Classification.** Breast ultrasound (BUS) plays an important supporting role in the diagnosis of breast-related diseases. Recent research demonstrated the potential of deep learning for breast lesion classification tasks [18, 6, 23, 27, 19]. [18, 6] design ensemble methods to integrate the features of multiple models to obtain higher accuracy. [23, 27, 19] utilize multi-task learning to improve the model performance. However, all of them are based on image datasets, such as BUSI [1], while few works focus on the video modality. [14] designed a pre-training model based on contrastive learning for ultrasound video classification. [25, 13] develop a keyframe extraction model for ultrasound videos and utilized the extracted keyframes to perform various classification tasks. However, these methods rely on keyframe supervision, which limits their applicability. Fortunately, the recent publicly available dataset BUSV [15] has made the re
search on the task of BUS video-based classification possible. In this paper, we build our model based on this dataset.
**Video recognition based on neural networks.** Traditional methods are based on Two-stream networks [10, 24, 9]. Since I3D [3] was proposed, 3D CNNs have dominated video understanding for a long time. [22, 21] decompose 3D convolution in different ways to reduce computation complexity without losing performance. [8] designed two branches to focus on temporal information and spatial features, respectively. However, 3D CNNs have a limited receptive field, and thus struggle to capture long-range dependency. Vision Transformers [5, 16] have become popular due to their excellent capability of aggregating spatial-temporal information. In order to reduce computational complexity brought by global attention, MViT [7] used hierarchical structure by reducing spatial resolution and Video Swin [17] introduced 3D shifted window attention. Our proposed KGA-Net is a simple framework that aggregates multi-frame features based on the frame attention module.
## 3 Methodology
As shown in Fig. 2, our KGA-Net takes the video inputs and static image inputs simultaneously to train the network. The coherence loss is proposed to guide the frame attention by using the feature centers generated by the images. We will then elaborate on each component in the following sections.
Figure 2: Overview of our proposed keyframe-guided attention network.
### Video and Image Classification Network
**The video classification network** is illustrated in Fig. 2 (a). The model is composed of a 2D CNN backbone, a frame attention module, and a classification head. For an input video clip \(V\) composed of \(N\) frames, it is first processed by the backbone network and the feature vectors of the frames \(\{F_{i}\}_{i=1}^{N}\) are obtained. Then, the frame attention module predicts the attention weight for each frame using a FC and sigmoid layer, and then the features are aggregated by the weights to form an integrated feature vector. Formally,
\[w_{i}=\text{Sigmoid}(\text{FC}(F_{i})) \tag{1}\]
where \(w_{i}\) denotes the weight for the \(i_{\text{th}}\) frame and FC is the fully-connected layer. Then, the features are aggregated by \(F_{V}=\sum_{i=1}^{N}w_{i}\cdot F_{i}\). Finally, the classification head is applied to the final result of lesion classification. To train the model, the cross-entropy loss (CE Loss) is applied to the classification prediction of the video.
**The image classification network** is used to assist in training the video model. We use the same 2D CNN as the backbone network in the video classification network. The model weights are shared for the two backbones for better generalization. To promote the formation of feature centers, we apply the center loss [26] to the image model besides the cross-entropy loss. In addition, the frame-level cross-entropy loss is also applied to the video frames to facilitate training.
### Training with Coherence Loss
In this section, we introduce the coherence loss to guide the frame attention with the assistance of the category feature centers. We use the same method as center loss [26] to obtain the feature centers for the malignancy and benign lesions, which are denoted as \(\mathcal{C}^{\text{mal}}\) and \(\mathcal{C}^{\text{benign}}\), respectively.
The distances of frame features and the feature centers can measure the quality of the frames. The frame features close to the centers are more discriminative for the classification task. Therefore, we use these distances to guide the generation of frame attention. Specifically, we push the frames close to the centers to have higher attention weights and decrease the weights far from the centers. To do this, for each video frame with feature \(F_{i}\), we first calculate the feature distance from its corresponding class center. Formally,
\[d_{i}=\|F_{i}-\mathcal{C}^{Y}\|_{2}, \tag{2}\]
where \(Y\in\{\text{mal},\text{benign}\}\) is the label of the video \(V\) and \(d_{i}\) is the computed distance of frame \(i\).
Afterward, we apply coherence loss to the attention weights \(\mathbf{w}=[w_{1},w_{2},...,w_{N}]^{\intercal}\) to make them have a similar distribution with the feature distances \(\mathbf{d}=[d_{1},d_{2},...,d_{N}]^{\intercal}\). To supervise the distribution, the coherence loss is defined as the L2 loss of the gram matrix of these two vectors
\[\text{L}_{\text{Coh}}=\|\text{Gram}_{\mathbf{w}}-\text{Gram}_{\mathbf{d}}\|_{ 2}, \tag{3}\]
where \(\mathrm{Gram}_{\mathbf{w}}=\frac{(\mathbf{1}-\mathbf{w})\cdot(\mathbf{1}-\mathbf{w} )^{\intercal}}{\|\mathbf{1}-\mathbf{w}\|_{2}^{2}}\) is the gram matrix of normalized attention weights, and \(\mathrm{Gram}_{\mathbf{d}}=\frac{\mathbf{d}\cdot\mathbf{d}^{\intercal}}{\| \mathbf{d}\|_{2}^{2}}\) is the gram matrix of normalized feature distances. Note that lower distances correspond to stronger attention, hence we use the opposite of \(\mathbf{w}\) to get \(\mathrm{Gram}_{\mathbf{w}}\).
### Total Training Loss
To summarize, the total training loss of our KGA-Net
\[L_{\mathrm{total}}=L_{\mathrm{CE}}^{V}+L_{\mathrm{CE}}^{I}+L_{\mathrm{Center}}+ \lambda\cdot L_{\mathrm{Coh}}. \tag{4}\]
\(L_{\mathrm{CE}}^{V}\) and \(L_{\mathrm{CE}}^{I}\) denote the cross-entropy for video classification and image and frame classification. \(L_{\mathrm{Center}}\) means the center loss. \(\lambda\) is the weight for coherence loss. Empirically, we set \(\lambda=1\) in our experiments.
During inference, to perform classification on video data, the video classification network can be utilized individually for prediction.
## 4 Experiments
### Implementation Details
**Datasets.** We use the public BUSV dataset [15] for video classification and the BUSI dataset [1] as the image dataset. BUSV consists of 113 malignant videos and 75 benign videos. BUSI contains 445 images of benign lesions and 210 images of malignant lesions. For the BUSV dataset, we use the official data split in [15]. All images of the BUSI dataset are adopted to train our KGA-Net.
**Model Details.** ResNet-50 [12] pretrained on ImageNet [4] is used as backbone. We use SGD optimizer with an initial learning rate of 0.005, which is reduced by 10\(\times\) at the 4,000th and 6,000th iteration. The total learning iteration number is 8,000. The learning rate warmup is used in the first 1,000 iterations. For each batch, the video clips and static images are both sampled and sent to the network. We use a total batchsize of 16 and the sample probability of video clips and images is 1:1. We implement the model based on Pytorch and train it with NVIDIA Titan RTX GPU cards.
During inference, we use the video classification network individually. In order to satisfy the fixed video length requirement of MViT [7], we sample up to 128 frames of each video to form a video clip and predict its classification result using all the models in experiments.
### Comparison with Video Models
In this section, we compare our KGA-Net with other competitive video classification models. Comparing with ultrasound-video-based work presents difficulty. [14, 13] is not accompanied by open-source code and relies on private datasets, making comparisons exceedingly challenging. [25] relies on a private dataset
with keyframe annotations for supervised training. The released code does not include keyframe detection, which makes direct comparison impossible. Since the research on ultrasound video classification is uncomparable, we compare our method with other strong video baselines on natural images. The CNN-based models including I3D [3], SlowFast [8], R(2+1)D [22] and CSN [21] are involved. Meanwhile, the recently popular transformer-based model (MViT [7]) is also adopted. For a fair comparison, we use both the video and image data to train these models. The images are regarded as static videos to train the networks. During evaluation, we report the metrics on the test set of BUSV.
As shown in Table. 1, by leveraging the guidance of the image dataset, our KGA-Net significantly surpasses all other models on all of the metrics. The video classification model of our KGA-Net is composed of a standard 2D ResNet-50 and a light feature attention module, while the baseline models are with net structures carefully designed for video analysis. Therefore, the success of our KGA-Net lies in the correct usage of the image guidance. The feature centers formed by the image dataset with larger data size and clear appearance effectively improve the accuracy of frame attention hence boosting the video classification performance.
### Ablation Study
In this section, we ablate the contribution of each key design in our KGA-Net. We observe their importance by removing these key components from the whole network. The results are shown in Table 2. The results of KGA-Net are shown in the last row in Table 2, while the components are ablated in the first three rows. We use the same training schedule for all of the experiments.
**Image guidance** is the main purpose of our method. To portray the effect of using the image dataset, we train the KGA-Net using BUSV dataset alone in the first row of Table 2. Without the image dataset, we generate the feature centers from the video frames. As a result, the performance significantly drops due to the decrease in dataset scale. It also shows that the feature centers generated by the image dataset are more discriminative than that of the video dataset. It is not only because the lesion number of BUSI is larger than BUSV, but also because
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Model & AUC(\%) & ACC(\%) & Sensitivity(\%) & Specificity(\%) \\ \hline I3D [3] & 88.31 & 81.58 & 84.00 & 76.92 \\ SlowFast [8] & 82.54 & 79.49 & 76.92 & 84.62 \\ R(2+1)D [22] & 86.46 & 81.58 & 84.00 & 76.92 \\ CSN [21] & 83.38 & 81.58 & 84.00 & 76.92 \\ MViT [7] & 90.53 & 82.05 & 80.77 & 84.62 \\ KGA-Net (Our) & **94.67** & **89.74** & **88.46** & **92.31** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison with other video models.** Classification thresholds are determined by Youden index.
the images in BUSI are all the keyframes that contain typical characteristics of lesions.
**Frame attention and coherence loss** are two essential modules of our KGA-Net. We train a KGA-Net without the coherence loss in the third row of Table 2. In the second row, we further replace the feature attention module with feature averaging of video frames. It can be seen that both of these two modules contribute to the overall performance according to AUC and ACC. It is worth noting that these two models without coherence loss obtain very low sensitivity and high specificity, which means the model predictions are imbalanced and intend to make benign predictions. It is because that clear malignant appearances usually only exist in limited frames in a malignant video. Without our coherence loss or frame attention, it is difficult for the model to focus on typical frames that possess malignant features. This phenomenon certifies the effectiveness of our KGA-Net to prevent false negatives in diagnosis.
### Visual Analysis
In Fig. 3, we illustrate video frames with their corresponding frame attention weights predicted by KGA-Net. Overall speaking, the frames with high attention weights do have clear image appearances for diagnosis. For example, the first three frames in Fig. 3(b) clearly demonstrate the edge micro-lobulation and irregular shapes, which lead to malignant judgment. Furthermore, we plot the relationships between the predicted attention values and the feature distances to the centers. As shown in Fig. 3(e), these two variables are linearly related, which indicates that KGA-Net the attention weights are effectively guided by the feature distances.
The qualitative analysis proves the interpretability of our method, which will benefit clinical usage. Moreover, the attention weights reveal the importance of each frame for lesion diagnosis. Therefore, it can provide a new perspective for the keyframe extraction task of ultrasound videos.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Model & AUC(\%) & ACC(\%) & Sensitivity(\%) & Specificity(\%) \\ \hline w/o image guidance & 85.21 & 76.92 & 73.08 & 84.62 \\ w/o coherence loss \& attention & 88.17 & 74.36 & 61.54 & **100.0** \\ w/o coherence loss & 92.90 & 87.18 & 80.77 & **100.0** \\ \hline KGA-Net & **94.67** & **89.74** & **88.46** & 92.31 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation studies.** Model components are removed in the first three lines to analyze their contributions in KGA-Net. Classification thresholds are determined by Youden index.
## 5 Conclusion
We propose KGA-Net, a novel video classification model for breast ultrasound diagnosis. Our KGA-Net takes as input both the video data and image data to train the network. We propose the coherence loss to guide the training of the video model by the guidance of feature centers of the images. Our method significantly exceeds the performance of other competitive video baselines. The visualization of the attention weights validates the effectiveness and interpretability of our KGA-Net.
Figure 3: **Visual Analysis.** (a-d) Visualization of video frames and corresponding frame attention weights. (e) Relationship between attention weight and feature distance. |
2310.05440 | Modeling and Simulation of Chemo-Elasto-Plastically Coupled Battery
Active Particles | As an anode material for lithium-ion batteries, amorphous silicon offers a
significantly higher energy density than the graphite anodes currently used.
Alloying reactions of lithium and silicon, however, induce large deformation
and lead to volume changes up to 300%. We formulate a thermodynamically
consistent continuum model for the chemo-elasto-plastic diffusion-deformation
behavior of amorphous silicon and it's alloy with lithium based on finite
deformations. In this paper, two plasticity theories, i.e. a rate-independent
theory with linear isotropic hardening and a rate-dependent one, are formulated
to allow the evolution of plastic deformations and reduce occurring stresses.
Using modern numerical techniques, such as higher order finite element methods
as well as efficient space and time adaptive solution algorithms, the
diffusion-deformation behavior resulting from both theories is compared. In
order to further increase the computational efficiency, an automatic
differentiation scheme is used, allowing for a significant speed up in
assembling time as compared to an algorithmic linearization for the global
finite element Newton scheme. Both plastic approaches lead to a more
heterogeneous concentration distribution and to a change to tensile tangential
Cauchy stresses at the particle surface at the end of one charging cycle.
Different parameter studies show how an amplification of the plastic
deformation is affected. Interestingly, an elliptical particle shows only
plastic deformation at the smaller half axis. With the demonstrated efficiency
of the applied methods, results after five charging cycles are also discussed
and can provide indications for the performance of lithium-ion batteries in
long term use. | Raphael Schoof, Johannes Niermann, Alexander Dyck, Thomas Böhlke, Willy Dörfler | 2023-10-09T06:29:35Z | http://arxiv.org/abs/2310.05440v3 | # Efficient Modeling and Simulation of Chemo-Elasto-Plastically Coupled Battery Active Particles
###### Abstract
As an anode material for lithium-ion batteries, amorphous silicon offers a significantly higher energy density than the graphite anodes currently used. Alloying reactions of lithium and silicon, however, induce large deformation and lead to volume changes up to 300%. We formulate a thermodynamically consistent continuum model for the chemo-elasto-plastic diffusion-deformation based on finite deformations. In this paper, a plastic deformation approach with linear isotropic hardening and a viscoplastic deformation ansatz are investigated and compared to allow the evolution of plastic deformations and reduce occurring stresses. For both models, a return mapping can be derived to update the equivalent plastic strain for the next time step. Using a finite element method and an efficient space and time adaptive solution algorithm a large number of charging cycles can be examined. We derive a linearization for the global Newton scheme and compare it to an automatic differentiation technique regarding the numerical performance and physical results. Both plastic approaches lead to a stronger heterogeneous concentration distribution and to a change to tensile tangential Cauchy stresses at the particle surface at the end of one charging cycle. Different parameter studies show how an amplification of the plastic deformation is affected. Interestingly, an elliptical particle shows only plastic deformation at the smaller half axis. With the
demonstrated efficiency of the applied methods, results after five charging cycles are also discussed and can provide indications for the performance of lithium-ion batteries in long term use.
**Keywords:** lithium-ion battery, finite deformation, (visco-)plasticity, finite elements, numerical simulation, automatic differentiation
**MSC Classification:** 74C15, 74C20, 74S05, 65M22, 90C33
## 1 Introduction
Lithium (Li)-ion batteries gained an enormous amount of research interest in the past two decades [1], as a mean of storing electric energy and propelling electro-mobility [2, 3]. However, due to the complex electro-chemo-mechanically coupled processes occurring during charging and discharging of Li-ion batteries, ongoing research still aims at improving battery lifetime, reducing costs and increasing capacity by, e.g., varying the materials composing the battery [1, 3]. State of the art is the usage of graphite as anode material [1]. A promising candidate to be used as anode material in Li-ion batteries is amorphous silicon (aSi), due to its large capacity and capability to form an alloy with the diffusing Li-ions, increasing battery capacity [2]. A disadvantage is the large volume increase aSi particles undergo during alloying, which can reach up to \(300\,\mathrm{\char 37}\)[4]. Numerous simulative studies have shown that these large deformations are accompanied by plastic deformations of aSi which are inherently linked to battery lifetime and capacity, see e.g. [3, 5, 6, 7, 8]. With the goal of using aSi as anode material, it is therefore imperative to study plastic deformation mechanisms at the particle level of aSi anodes and their interplay with battery performance during charging and discharging using physical models and computational investigations.
To this end, geometrically and physically nonlinear chemo-mechanically coupled continuum theories have proven to be a valuable tool, see e.g. [6, 9, 10, 11]. For the mechanical part of the model, most works rely on a multiplicative split of the deformation gradient [12] into a chemical, an elastic and a plastic part using finite deformation. Discrepancies in modeling strategies occur in the nonlinear strain measure used, ranging from the Green-Lagrange strain tensor [10, 11] to the Hencky strain tensor [6]. In addition, several models consider plastic deformation to be rate-dependent [6, 7], while others rely on a rate-independent plasticity theory [9, 13]. Unfortunately, neither the atomic-level structural evolution, nor the mechanical behavior of the aSi during lithiation and delithiation cycles is well understood [14]. This also holds for the detailed mechanism of plastic deformation. However, several studies concluded that plasticity does occur during charging and discharging. In experimental studies, c.f. [15], a rate dependent plastic behavior is considered to explain the observed behavior. In contrast, a numerical study conducted on a molecular level in [16] seems to indicate rate independent plasticity. The chemical part of the models, describing diffusion of Li-ions during charging and discharging, is based on a diffusion equation relating changes in concentration to the gradient of the species' chemical potential and the species'
mobility [6]. Models differ in their approach to define the chemical contribution to the Helmholtz free energy, where approaches either rely on open-circuit voltage (OCV) curves [11] or assumptions for the entropy of mixing [6]. In addition, the mobility is defined to be either derived as the change of the chemical part of the chemical potential with respect to the concentration [6] or the entire chemical potential [17]. The coupling of deformations and diffusion arises due to the strains induced by Li-ions as well as the influence of mechanical stresses on the chemical potential.
Both finite difference [5, 17] and finite element [6, 10] schemes have been proposed to discretize the resulting equations, where the latter have been predominantly used lately, due to their superior applicability to complex geometries. Solving discretized non-linear coupled systems of equations is time consuming and expensive in terms of computational resources, due to small mesh sizes and small time step sizes required to resolve all mechanisms. Space and time adaptive solution algorithms, such as the one proposed in [10], allow drastic reduction in computational resources. In addition, parallelization schemes [18] reduce simulation times considerably. Introducing plastic deformation is another challenge, as the additional variables are either considered as degrees of freedom [17] or static condensation is used to arrive at a primal formulation [6, 19], where the variables are only computed at integration point level.
The goal of this work is to introduce a chemo-mechanically coupled model for large chemo-elasto-plastic deformation processes in aSi anode particles, that takes into account plastic deformation of the aSi particles, where we consider the initial yield stress to be a function of lithium concentration [16]. As no consensus exists in the experimental literature regarding the mechanisms of plastic deformation in aSi, we formulate both a rate-dependent viscoplasticity, as well as a rate-independent plasticity theory and discuss the implications on particle behavior [14, 15, 16]. We use static condensation to arrive at a primal formulation for the mechanical equations and consider plastic deformation at integration point level within each finite element [19]. We explicitly derive a projector onto the admissible stresses, relying on the classical return mapping method [20, Chapter 3] and [19], which is rather straightforward for the Hencky strains used in our theory. The diffusion of Li into and out of the aSi anode particle follows classical diffusion theory, where we rely on a measured OCV curve to model the chemical part of the free energy [18]. As a boundary condition, various charging rates (C-rates) are applied for the lithium flux. For solving the coupled system of equations we extend the solution scheme proposed in [10, 18, 21], relying on a spatial and temporal adaptive algorithm. We consider radial-symmetric and two dimensional computational domains and compare stress and plastic strain development as well as concentration distributions after a various number of half cycles for both rate-dependent and rate-independent plasticity models. In addition, we investigate the computational performance and numerical efficiency of our implementation scheme.
The remainder of this article is organized as follows: in Section 2 we introduce the theoretical basis for our work and derive the equations describing chemo-mechanically coupled diffusion processes in aSi anodes. Section 3 summarizes the numerical approach taken in this work to solve the derived system of equations. Subsequently, in Section 4, we present results for various investigated cases. We close with a conclusion and an outlook in Section 5.
## 2 Theory
In a first step we review and summarize our used constitutive theory adapted from [6, 10, 11, 17] to couple chemical, elastic and plastic material behavior. We base our model on a thermodynamically consistent theory for the chemo-mechanical coupling during lithiation and delithiation.
### Finite Deformation
Considering a mapping \(\mathbf{\Phi}\colon\mathbb{R}_{\geq 0}\times\Omega_{0}\rightarrow\Omega\), \(\mathbf{\Phi}\left(t,\boldsymbol{X}_{0}\right)\coloneqq\boldsymbol{x}\) from the Lagrangian domain \(\Omega_{0}\) to the Eulerian domain \(\Omega\), see for more information [12, Section 2], [22, Section 8], [23, Chapter VI] and [6, 10, 11, 17, 24], the deformation gradient \(\mathbf{F}=\partial\mathbf{\Phi}/\partial\boldsymbol{X}_{0}=\mathbf{Id}+ \boldsymbol{\nabla}_{0}\boldsymbol{u}\) with the identity \(\mathbf{Id}\) and displacement \(\boldsymbol{u}\) is multiplicatively decomposed into chemical, elastic and plastic parts
\[\mathbf{F}=\mathbf{F}_{\mathrm{ch}}\mathbf{F}_{\mathrm{el}}\mathbf{F}_{ \mathrm{pl}}=\mathbf{F}_{\mathrm{rev}}\mathbf{F}_{\mathrm{pl}} \tag{1}\]
with various expressions for the volume change defined as
\[J =\det(\mathbf{F})=J_{\mathrm{el}}J_{\mathrm{ch}}J_{\mathrm{pl}}=V _{\Omega}/V_{\Omega_{0}}>0, J_{\mathrm{ch}} =\det(\mathbf{F}_{\mathrm{ch}})>0, \tag{2}\] \[J_{\mathrm{el}} =\det(\mathbf{F}_{\mathrm{el}})>0, J_{\mathrm{pl}} =\det(\mathbf{F}_{\mathrm{pl}})\stackrel{{!}}{{=}}1, \tag{3}\]
respectively. The chemical and elastic deformations are reversible and summarized in \(\mathbf{F}_{\mathrm{rev}}\). The polar decomposition of the elastic deformation gradient tensor \(\mathbf{F}_{\mathrm{el}}\) is given by its rotational and stretch part [12, Chapter 2.6]:
\[\mathbf{F}_{\mathrm{el}}=\mathbf{R}_{\mathrm{el}}\mathbf{U}_{\mathrm{el}}, \quad\mathbf{R}_{\mathrm{el}}^{\mathsf{T}}\mathbf{R}_{\mathrm{el}}=\mathbf{Id}, \tag{4}\]
Figure 1: Schematic of the multiplicative decomposition of the total deformation gradient \(\mathbf{F}\) into its plastic, elastic and chemicals parts \(\mathbf{F}_{\mathrm{pl}}\), \(\mathbf{F}_{\mathrm{el}}\), \(\mathbf{F}_{\mathrm{ch}}\), respectively, according to Fig. 1 from [17].
with the right stretch tensor \(\mathbf{U}_{\mathrm{el}}\) being unique, positive definite and symmetric. With the symmetric elastic right Cauchy-Green tensor
\[\mathbf{C}_{\mathrm{el}}=\mathbf{F}_{\mathrm{el}}^{\mathsf{T}}\mathbf{F}_{ \mathrm{el}}=\mathbf{U}_{\mathrm{el}}^{2}, \tag{5}\]
the (Lagrangian) logarithmic Hencky strain can be defined as strain measure with a spectral decomposition
\[\mathbf{E}_{\mathrm{el}}=\ln\left(\mathbf{U}_{\mathrm{el}}\right)=\ln\left( \sqrt{\mathbf{F}_{\mathrm{el}}^{\mathsf{T}}\mathbf{F}_{\mathrm{el}}}\right)= \ln\left(\sqrt{\mathbf{C}_{\mathrm{el}}}\right)=\sum_{\alpha=1}^{3}\ln\bigl{(} \sqrt{\eta_{\mathrm{el},\alpha}}\bigr{)}\,\boldsymbol{r}_{\mathrm{el},\alpha} \otimes\boldsymbol{r}_{\mathrm{el},\alpha}, \tag{6}\]
where \(\sqrt{\eta_{\mathrm{el},\alpha}}\) and \(\boldsymbol{r}_{\mathrm{el},\alpha}\) are the eigenvalues and eigenvectors of \(\mathbf{U}_{\mathrm{el}}\), respectively. In literature, typically the Green-St-Venant (GSV) strain tensor, often called _the_ Lagrangian strain tensor [22, Section 8.1], is used
\[\mathbf{E}_{\mathrm{el},\mathrm{GSV}}=\frac{1}{2}\bigl{(}\mathbf{F}_{\mathrm{ el}}^{\mathsf{T}}\mathbf{F}_{\mathrm{el}}-\mathbf{Id}\bigr{)}. \tag{7}\]
We will later compare results obtained for both strain measures.
We consider an isotropic, volumetric swelling due to the Li concentration with \(\mathbf{F}_{\mathrm{ch}}\) being defined as
\[\mathbf{F}_{\mathrm{ch}}=\lambda_{\mathrm{ch}}\mathbf{Id}=\sqrt[3]{J_{\mathrm{ ch}}}\mathbf{Id}, \tag{8}\]
where \(\lambda_{\mathrm{ch}}=\sqrt[3]{1+v_{\mathrm{pmv}}c}\), \(v_{\mathrm{pmv}}\) is the constant partial molar volume of lithium inside the host material [10] and \(c\) is the concentration in the reference configuration. The plastic and elastic deformation gradients \(\mathbf{F}_{\mathrm{pl}}\) and \(\mathbf{F}_{\mathrm{el}}\) are further discussed in Subsection 2.5.
### Free Energy
To obtain a thermodynamically consistent material model, which guarantees a strictly positive entropy production, we introduce a Helmholtz free energy \(\psi\), being a function of the lithium concentration \(c\) and the displacement gradient \(\boldsymbol{\nabla}_{0}\boldsymbol{u}\), due to the coupling of chemical and mechanical effects [25, 26, 27, 17]. This form is additively split into a chemical part \(\psi_{\mathrm{ch}}\) and mechanical part \(\psi_{\mathrm{el}}\) according to
\[\psi(c,\boldsymbol{\nabla}_{0}\boldsymbol{u})=\psi_{\mathrm{ch}}(c)+\psi_{ \mathrm{el}}(c,\boldsymbol{\nabla}_{0}\boldsymbol{u}) \tag{9}\]
in the Lagrangian frame. Following [11, 17, 18], we define the chemical part by incorporating an experimentally obtained open-circuit voltage (OCV) curve \(U_{\mathrm{OCV}}(c)\)
\[\rho\psi_{\mathrm{ch}}(c)=-\int_{0}^{c/c_{\mathrm{max}}}\mathrm{ Fa}\,U_{\mathrm{OCV}}(z)\,\mathrm{d}z, \tag{10}\]
with the Faraday constant Fa and the maximal concentration \(c_{\mathrm{max}}\) of the host material. The mechanical part is given as a linear elastic approach via a St-Venant-Kirchhoff
model being quadratic in the elastic Hencky strain, compare [12, Section 6.5], [23, Chapter VI SS3] and [10, 11, 17]:
\[\rho\psi_{\mathrm{el}}(c,\mathbf{\nabla}_{0}\mathbf{u})=\frac{1}{2}\mathbf{E}_{\mathrm{ el}}\!:\!\mathds{C}\left[\mathbf{E}_{\mathrm{el}}\right]\qquad\text{with}\qquad \mathds{C}\left[\mathbf{E}_{\mathrm{el}}\right]=\lambda\operatorname{tr} \!\left(\mathbf{E}_{\mathrm{el}}\right)\!\mathbf{Id}+2G\mathbf{E}_{\mathrm{el}}, \tag{11}\]
with the first and second Lame constants \(\lambda=2G\nu/\left(1-2\nu\right)\) and \(G=E/\big{(}2\left(1+\nu\right)\big{)}\), depending on the elastic Young's modulus \(E\) and Poisson's ratio \(\nu\) of the host material.
### Chemistry
Inside the host material, we use a continuity equation to describe the change in lithium concentration via
\[\partial_{t}c=-\mathbf{\nabla}_{0}\!\cdot\!\mathbf{N}\qquad\text{in }(0,t_{\mathrm{end}}) \times\Omega_{0}, \tag{12}\]
where \(\mathbf{N}\coloneqq-m(c,\mathbf{\nabla}_{0}\mathbf{u})\mathbf{\nabla}_{0}\mu\) is the lithium flux,
\[m(c,\mathbf{\nabla}_{0}\mathbf{u})=-D\left(\partial_{c}\mu\right)^{-1} \tag{13}\]
is the mobility of Li in aSi and \(D\) the diffusion coefficient for lithium atoms inside the active material [11, 17]. The chemical potential \(\mu\) is given as the variational derivative of the Ginzburg-Landau free energy [28] using Equation (9)-(11)
\[\mu=-\mathrm{Fa}\,U_{\mathrm{OCV}}-\frac{v_{\mathrm{pmv}}}{3}\lambda_{\mathrm{ ch}}^{-3}\mathbf{Id}\!:\!\mathds{C}\left[\mathbf{E}_{\mathrm{el}}\right]=- \mathrm{Fa}\,U_{\mathrm{OCV}}-\frac{v_{\mathrm{pmv}}}{3}\lambda_{\mathrm{ch}}^ {-3}\operatorname{tr}\!\left(\mathds{C}\left[\mathbf{E}_{\mathrm{el}}\right]\right) \tag{14}\]
Following [10, 11] we apply a uniform and constant external flux \(N_{\mathrm{ext}}\) with either positive or negative sign for cycling the host particle in terms of the \(C\)-rate. The simulation time \(t\) and the state of charge (SOC) can be connected via
\[\mathrm{SOC}=\frac{1}{V_{\Omega_{0}}}\int_{\Omega_{0}}\frac{c}{c_{\mathrm{max }}}\,\mathrm{d}\mathbf{X}_{0}=\frac{c_{0}}{c_{\mathrm{max}}}+N_{\mathrm{ext}}\left[ \mathds{C}\right]\cdot t[\mathrm{h}], \tag{15}\]
with the volume \(V_{\Omega_{0}}\) of \(\Omega_{0}\) and a constant initial condition \(c_{0}\in(0,c_{\mathrm{max}})\).
### Mechanics
The deformation in the Lagrangian domain is considered by static balance of linear momentum [10, 11, 17]
\[\mathbf{0}=-\mathbf{\nabla}_{0}\!\cdot\!\mathbf{P}\qquad\text{in }(0,t_{\mathrm{end}}) \times\Omega_{0}, \tag{16}\]
with the first Piola-Kirchhoff stress tensor \(\mathbf{P}(c,\mathbf{\nabla}_{0}\mathbf{u})=\partial_{\mathbf{F}}\psi(c,\mathbf{\nabla}_ {0}\mathbf{u})=\mathbf{F}\!\left(\mathbf{F}_{\mathrm{rev}}^{\mathsf{T}}\mathbf{F}_ {\mathrm{rev}}\right)^{-1}\!\left(\mathbf{F}_{\mathrm{pl}}^{-1}\right)^{ \mathsf{T}}\!\mathbf{F}_{\mathrm{pl}}^{-1}\mathds{C}\left[\mathbf{E}_{\mathrm{ el}}\right]\) compare [12, Section 6.1]. The Cauchy stress \(\mathbf{\sigma}\) in the Eulerian frame is related via \(\mathbf{P}=\det\left(\mathbf{F}\right)\mathbf{\sigma}\mathbf{F}^{-\mathsf{T}}\)[12, Section 3.1]. Furthermore, we
introduce the Mandel stress \(\mathbf{M}=\mathbf{C}_{\mathrm{rev}}\mathbf{S}_{\mathrm{rev}}=J_{\mathrm{rev}} \mathbf{F}_{\mathrm{rev}}^{\mathsf{T}}\boldsymbol{\sigma}\mathbf{F}_{\mathrm{rev }}^{-\mathsf{T}}=J_{\mathrm{el}}J_{\mathrm{ch}}\mathbf{F}_{\mathrm{el}}^{ \mathsf{T}}\boldsymbol{\sigma}\mathbf{F}_{\mathrm{el}}^{-\mathsf{T}}\) with the second Piola-Kirchhoff stress tensor \(\mathbf{S}_{\mathrm{rev}}=J_{\mathrm{rev}}\mathbf{F}_{\mathrm{rev}}^{-1} \boldsymbol{\sigma}\mathbf{F}_{\mathrm{rev}}^{-\mathsf{T}}\), see for further information [17]. Based on the derivations presented in [6, 24, 29], for \(\mathbf{C}_{\mathrm{el}}\) and \(\hat{\mathbf{C}}_{\mathrm{el}}\) being coaxial and isotropic material behavior, a hyperelastic law relating the free energy density in the stress-free configuration and the Mandel stress \(\mathbf{M}\) is retrieved. In the case considered in this work \(\mathbf{M}\) is linear in \(\mathbf{E}_{\mathrm{el}}\) and given by
\[\mathbf{M}=\frac{\partial(\rho\psi)}{\partial\mathbf{E}_{\mathrm{el}}}= \mathds{C}\left[\mathbf{E}_{\mathrm{el}}\right]=\lambda\operatorname{tr} \bigl{(}\mathbf{E}_{\mathrm{el}}\bigr{)}\mathbf{Id}+2G\mathbf{E}_{\mathrm{el}}. \tag{17}\]
### Inelastic Constitutive Theory
Following [12, Chapter 2.7] and [6], the evolution equation for the plastic deformation gradient \(\mathbf{F}_{\mathrm{pl}}\) takes for fully isotropic materials (where a plastic spin is negligible) the form
\[\dot{\mathbf{F}}_{\mathrm{pl}}=\mathbf{L}_{\mathrm{pl}}\mathbf{F}_{\mathrm{pl} }=\mathbf{D}_{\mathrm{pl}}\mathbf{F}_{\mathrm{pl}}. \tag{18}\]
As mentioned in the introduction, we consider two inelastic models and compare their influence on battery performance. We start with a rate independent von Mises plasticity with isotropic hardening, which is formulated for the Mandel stress, see [20, Section 2.3] and [6, 17, 19, 30, 31]. The yield function reads
\[F_{\mathrm{Y}}(\mathbf{M},c,\varepsilon_{\mathrm{pl}}^{\mathrm{v}})=\left| \mathbf{M}^{\mathrm{dev}}\right|-\sigma_{\mathrm{F}}(c,\varepsilon_{\mathrm{ pl}}^{\mathrm{v}})\leq 0, \tag{19}\]
with the deviatoric stress tensor \(\mathbf{M}^{\mathrm{dev}}=\mathbf{M}-1/3\operatorname{tr}\bigl{(}\mathbf{M} \bigr{)}\mathbf{Id}\) and the yield stress \(\sigma_{\mathrm{F}}(c,\varepsilon_{\mathrm{pl}}^{\mathrm{v}})\coloneqq\sigma _{\mathrm{Y}}(c)+\gamma^{\mathrm{iso}}\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\). The yield stress consists of two parts: a concentration dependent part \(\sigma_{\mathrm{Y}}(c)\), which will describe a softening behavior, and a linear isotropic hardening part with a scalar parameter \(\gamma^{\mathrm{iso}}>0\). \(\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\) is the accumulated inelastic strain and will be introduced subsequently. Ideal plasticity is present for \(\gamma^{\mathrm{iso}}=0\). The softening behavior modeled by \(\sigma_{\mathrm{Y}}(c)\) is inspired by [6, 16] and is incorporated via
\[\sigma_{\mathrm{Y}}(c)\coloneqq\sigma_{\mathrm{Y,min}}c+(1-c)\sigma_{\mathrm{Y,max}}. \tag{20}\]
Plastic flow is only allowed if \(F_{\mathrm{Y}}(\mathbf{M},c,\varepsilon_{\mathrm{pl}}^{\mathrm{v}})=0\). To describe the plastic flow when this yield point is reached, we base on the maximum plastic dissipation principle [17, 20, 32]. With this postulate from plasticity theory we can define the associated flow rule constraining the plastic flow to the normal direction of the yield surface \(\partial_{\mathbf{M}}F_{\mathrm{Y}}(\mathbf{M},c,\varepsilon_{\mathrm{pl}}^{ \mathrm{v}})\):
\[\mathbf{D}_{\mathrm{pl}}=\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}\mathbf{N} _{\mathrm{pl}}=\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}\frac{\partial F_ {\mathrm{Y}}}{\partial\mathbf{M}}\left(\mathbf{M},c,\varepsilon_{\mathrm{pl}}^ {\mathrm{v}}\right)=\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}\frac{\mathbf{M }^{\mathrm{dev}}}{\left|\mathbf{M}^{\mathrm{dev}}\right|}. \tag{21}\]
Here, \(\mathbf{D}_{\mathrm{pl}}\) is the plastic strain rate measure [6, 33]. Now, we can define the scalar equivalent plastic strain
\[\varepsilon_{\mathrm{pl}}^{\mathrm{v}}=\int_{0}^{t_{\mathrm{end}}}\left\| \mathbf{D}_{\mathrm{pl}}\right\|\mathrm{d}t=\int_{0}^{t_{\mathrm{end}}} \dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}\,\mathrm{d}t, \tag{22}\]
which is used to describe an increase in yield stress \(\sigma_{\mathrm{F}}(c,\varepsilon_{\mathrm{pl}}^{\mathrm{v}})\). To be consistent with a one-dimensional tensile test, we scale our concentration dependent yield stress \(\sigma_{\mathrm{Y}}(c)\) with the factor \(\sqrt{2/3}\), compare [20, Chaper 2.3.1].
The second model studied is a viscoplastic material model which was proposed by Di Leo et al. [6], where a viscoplastic material behavior is considered without isotropic hardening, i.e. \(\gamma^{\mathrm{iso}}=0\). This results in a formulation for the equivalent plastic strain
\[\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}=\begin{cases}0,&\left\|\mathbf{M }^{\mathrm{dev}}\right\|\leq\sigma_{\mathrm{Y}}(c),\\ \dot{\varepsilon}_{0}\Bigg{(}\frac{\left\|\mathbf{M}^{\mathrm{dev}}\right\|- \sigma_{\mathrm{Y}}(c)}{\sigma_{\mathrm{Y^{*}}}}\Bigg{)}^{\beta},&\left\| \mathbf{M}^{\mathrm{dev}}\right\|>\sigma_{\mathrm{Y}}(c),\end{cases}\] (23a) where \[\sigma_{\mathrm{Y^{*}}},\dot{\varepsilon}_{0}\] and \[\beta\] are a positive-valued stress-dimensioned constant, a reference tensile plastic strain rate and a measure of the strain rate sensitivity of the material, respectively.
The classical loading and unloading conditions can be conveniently expressed via the Karush-Kuhn-Tucker (KKT) conditions [20, Section 1.2.1], [22, Section 3.2] and [17] for both inelastic theories by
\[F_{\mathrm{Y}}\leq 0,\quad\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}\geq 0, \quad F_{\mathrm{Y}}\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}=0. \tag{24}\]
Compared to classical notation of loading and unloading, the process is elastic if \(F_{\mathrm{Y}}<0\) requiring \(\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}}\equiv 0\) and no plastic deformation occurs. The consistency condition for the evolution of inelastic strains in the case of rate-independent plasticity reads
\[\text{when }F_{\mathrm{Y}}=0:\quad\dot{\varepsilon}_{\mathrm{pl}}^{\mathrm{v}} \geq 0,\quad\dot{F}_{\mathrm{Y}}\leq 0,\quad\dot{\varepsilon}_{\mathrm{pl}}^{ \mathrm{v}}\dot{F}_{\mathrm{Y}}=0, \tag{25}\]
so the plastic strain can increase during loading but not during unloading. All in all, the elastic deformation gradient tensor \(\mathbf{F}_{\mathrm{el}}\) can then be computed with the definition of the plastic deformation gradient tensor \(\mathbf{F}_{\mathrm{pl}}\) via \(\mathbf{F}_{\mathrm{el}}=\mathbf{F}_{\mathrm{ch}}^{-1}\mathbf{F}\mathbf{F}_{ \mathrm{pl}}^{-1}=\lambda_{\mathrm{ch}}^{-1}\mathbf{F}\mathbf{F}_{\mathrm{pl}}^ {-1}\).
## 3 Numerical Approach
In the following section we present the numerical treatment of our set of coupled partial differential equations of Section 2, i.e. the problem formulation, the normalization of the model parameters and the numerical solution procedure including the weak formulation, space and time discretization and our applied adaptive solution algorithm.
### Problem Formulation
Before we state our problem formulation we introduce a nondimensionalization of the model to improve numerical stability. Our cycle time \(t_{\text{cycle}}=1/\text{C}\)-rate depends on the C-rate, i.e. the hours for charging or discharging of the particle. Further, the particle radius \(L_{0}\) and the maximal concentration \(c_{\text{max}}\) in the Lagrangian frame are used as reference parameters. For the yield stress \(\sigma_{\text{Y}}(c)\) we use the same nondimensionalization as for the Young's modulus \(E\). The resulting dimensionless numbers \(\widetilde{E}\) and the _Fourier number_ Fo relate the mechanical energy scale to the chemical energy scale and the diffusion time scale to the process time scale, respectively. All dimensionless variables are listed in Table 1 and will be used for model equations from now on, neglecting the accentuation for better readability.
We state our general mathematical problem formulation [10, 11] by solving our set of equations for the concentration \(c\), the chemical potential \(\mu\) and the displacements \(\mathbf{u}\), whereas the quantities \(\mathbf{F}\), \(\mathbf{F}_{\text{el}}\), \(\mathbf{F}_{\text{pl}}\), \(\mathbf{E}_{\text{el}}\), \(\mathbf{P}\) and \(\mathbf{\sigma}\) are calculated in dependency of the solution variables.
The dimensionless initial boundary value problem with inequality boundary conditions is given as follows: let \(t_{\text{end}}>0\) be the final simulation time and \(\Omega_{0}\subset\mathbb{R}^{d}\) a representative bounded electrode particle in reference configuration with dimension \(d=1,2,3\). Find the normalized concentration \(c\colon[0,t_{\text{end}}]\times\overline{\Omega}_{0}\to[0,1]\), the chemical potential \(\mu\colon[0,t_{\text{end}}]\times\overline{\Omega}_{0}\to\mathbb{R}\) and the displacements \(\mathbf{u}\colon[0,t_{\text{end}}]\times\overline{\Omega}_{0}\to\mathbb{R}^{d}\) satisfying
\[\partial_{t}c =-\mathbf{\nabla}_{0}\!\cdot\!\mathbf{N}(c,\mathbf{\nabla}_{0}\mu,\mathbf{\nabla }_{0}\mathbf{u}) \text{in }(0,t_{\text{end}})\times\Omega_{0}, \tag{26a}\] \[\mu =\partial_{c}\psi(c,\mathbf{\nabla}_{0}\mathbf{u}) \text{in }(0,t_{\text{end}})\times\Omega_{0},\] (26b) \[\mathbf{0} =-\mathbf{\nabla}_{0}\!\cdot\!\mathbf{P}(c,\mathbf{\nabla}_{0}\mathbf{u}) \text{in }(0,t_{\text{end}})\times\Omega_{0},\] (26c) \[F_{\text{Y}} \leq 0,\quad\dot{\varepsilon}_{\text{pl}}^{\text{v}}\geq 0,\quad F_{ \text{Y}}\dot{\varepsilon}_{\text{pl}}^{\text{v}}=0 \text{in }(0,t_{\text{end}})\times\Omega_{0},\] (26d) \[\mathbf{N}\cdot\mathbf{n}_{0} =N_{\text{ext}} \text{on }(0,t_{\text{end}})\times\partial\Omega_{0},\] (26e) \[-\mathbf{P}\cdot\mathbf{n}_{0} =\mathbf{0} \text{on }(0,t_{\text{end}})\times\partial\Omega_{0},\] (26f) \[c(0,\cdot) =c_{0} \text{in }\Omega_{0},\] (26g) \[\mathbf{F}_{\text{pl}}(0,\cdot) =\mathbf{Id} \text{in }\Omega_{0},\] (26h) \[\varepsilon_{\text{pl}}^{\text{v}}(0,\cdot) =0 \text{in }\Omega_{0}, \tag{26i}\]
with a boundary-consistent initial concentration \(c_{0}\) and boundary conditions for the displacement excluding rigid body motions. Note that the original definition of the chemical deformation gradient \(\mathbf{F}_{\text{ch}}\) is done in three dimensions, but all variables and equations are also mathematically valid in dimensions \(d=1,2\). Then, the deviatoric part is computed with the factor \(1/d\).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\tilde{t}=t/t_{\text{cycle}}\) & \(\tilde{\mathbf{X}}_{0}=\mathbf{X}_{0}/L_{0}\) & \(\tilde{\mathbf{u}}=\mathbf{u}/L_{0}\) & \(\tilde{c}=c/c_{\text{max}}\) & \(\tilde{\mu}=\mu/R_{\text{gas}}T\) & \(\tilde{v}_{\text{pmv}}=v_{\text{pmv}}c_{\text{max}}\) \\ \(\tilde{U}_{\text{OCV}}=\text{Fa}\,U_{\text{OCV}}/R_{\text{gas}}T\) & \(\tilde{E}=E/R_{\text{gas}}Tc_{\text{max}}\) & \(\tilde{N}_{\text{ext}}=N_{\text{ext}}t_{\text{cycle}}/L_{0}c_{\text{max}}\) & \(\text{Fo}=Dt_{\text{cycle}}/L_{0}^{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dimensionless variables of the used model equations.
### Numerical Solution Procedure
This subsection describes the way to obtain a numerical solution and especially the handling of the KKT condition in Equation (26d): formulating a primal mixed variational inequality, using static condensation to obtain a primal formulation as well as space and time discretization, finally completed with an adaptive solution algorithm.
#### 3.2.1 Weak Formulation
In a first step towards the numerical solution we state the weak formulation of Equation (26) as primal mixed variational inequality like in [19]. However, it can also be derived from a minimization problem [20, Section 1.4.2] and [34, Section 7.3]. We introduce the \(L^{2}\)-inner product for two functions \(f\), \(g\in L^{2}\left(\Omega_{0}\right)\) as \(\left(f,g\right)=\int_{\Omega_{0}}fg\,\mathrm{d}\mathbf{X}_{0}\), for two vector fields \(\mathbf{v}\), \(\mathbf{w}\in L^{2}\left(\Omega_{0};\mathbb{R}^{d}\right)\) as \(\left(\mathbf{v},\mathbf{w}\right)=\int_{\Omega_{0}}\mathbf{v}\cdot\mathbf{w}\,\mathrm{d}\mathbf{X }_{0}\), and for two tensor fields \(\mathbf{S}\), \(\mathbf{T}\in L^{2}\left(\Omega_{0};\mathbb{R}^{d,d}\right)\) as \(\left(\mathbf{S},\mathbf{T}\right)=\int_{\Omega_{0}}\mathbf{S}\!:\!\mathbf{T} \,\mathrm{d}\mathbf{X}_{0}\) and boundary integrals with the respective boundary as subscript. Defining the function space \(\mathbf{V}^{*}\coloneqq H_{*}^{1}\big{(}\Omega_{0},\mathbb{R}^{d}\big{)}\) which includes displacement boundary constraints for the precise applications case from Section 4, we multiply with test functions, integrate over \(\Omega_{0}\) and integrate by parts. Following [10, 19, 32, 33], we finally have the primal mixed variational inequality weak formulation: find solutions \(\left\{c,\mu,\mathbf{u},\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\right\}\) with \(c,\mu\in V\coloneqq H^{1}\left(\Omega_{0},[0,1]\right)\), \(\partial_{t}c\in L^{2}(\Omega_{0},\mathbb{R})\), \(\mathbf{u}\in\mathbf{V}^{*}\) and \(\big{(}\mathbf{P}(c,\mathbf{\nabla}_{0}\mathbf{u})\times\varepsilon_{\mathrm{pl}}^{ \mathrm{v}}\big{)}\in\Big{\{}L^{2}\big{(}\Omega_{0},\mathbb{R}^{d,d}\big{)} \times L^{2}\big{(}\Omega_{0},\mathbb{R}_{\geq 0}\big{)}:F_{\mathrm{Y}}\leq 0 \Big{\}}\eqqcolon\mathbf{Y}\) such that
\[(\varphi,\partial_{t}c) =-\Big{(}m(c,\mathbf{\nabla}_{0}\mathbf{u})\mathbf{\nabla}_{0}\varphi,\mathbf{ \nabla}_{0}\mu\Big{)}-\left(\varphi,N_{\mathrm{ext}}\right)_{\partial\Omega_{ 0}}, \tag{27a}\] \[0 =-\left(\varphi,\mu\right)+\big{(}\varphi,\partial_{c}\psi_{ \mathrm{ch}}(c)+\partial_{c}\psi_{\mathrm{el}}(c,\mathbf{\nabla}_{0}\mathbf{u})\big{)},\] (27b) \[\mathbf{0} =\Big{(}\mathbf{\nabla}_{0}\mathbf{\xi},\mathbf{P}(c,\mathbf{\nabla}_{0}\mathbf{u })\Big{)},\] (27c) \[\mathbf{0} \leq\Big{(}\mathbf{D}_{\mathrm{pl}},\mathbf{P}-\mathbf{P}^{*} \Big{)}+\gamma^{\mathrm{iso}}\Big{(}\varepsilon_{\mathrm{pl}}^{\mathrm{v}}, \varepsilon_{\mathrm{pl}}^{\mathrm{v*}}-\varepsilon_{\mathrm{pl}}^{\mathrm{v}} \Big{)} \tag{27d}\]
for all test functions \(\varphi\in V\), \(\mathbf{\xi}\in\mathbf{V}^{*}\) and \(\big{(}\mathbf{P}^{*},\varepsilon_{\mathrm{pl}}^{\mathrm{v*}}\big{)}\in \mathbf{Y}\).
Equation (27) becomes a saddle point problem requiring special techniques for solving the related linear system [19, 23, 35]. However, we apply static condensation and use a primal formulation with a projector onto the set of admissible stresses [19, 36]. Following [19] we introduce the projector onto the admissible Mandel stress for both inelastic constitutive theories. For the rate independent model, it reads
\[\mathbf{M}=\mathbf{P}_{\Pi}(\mathbf{M}^{\mathrm{tri}},c,\varepsilon _{\mathrm{pl}}^{\mathrm{v}})\coloneqq\] \[\begin{cases}\mathbf{M}^{\mathrm{tri}},&\left\|\mathbf{M}^{ \mathrm{tri},\mathrm{dev}}\right\|\leq\sigma_{\mathrm{Y}}(c)+\gamma^{\mathrm{ iso}}\varepsilon_{\mathrm{pl}}^{\mathrm{v}},\end{cases} \tag{28a}\] \[\left\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\right\|\leq\sigma_{ \mathrm{Y}}(c)+\gamma^{\mathrm{iso}}\varepsilon_{\mathrm{pl}}^{\mathrm{v}}, \tag{28b}\]
with \(\varkappa=1-\frac{2G}{\sigma_{\mathrm{Y}}(c)}\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\) and \(\mathbf{M}^{\mathrm{tri}}\) follows from a purely elastic deformation, denoted as the trial part of \(\mathbf{M}\). For ideal plasticity, the projector follows with \(\gamma^{\mathrm{iso}}=0\). Following [6, 24] for the rate-dependent viscoplastic approach the projector is
\[\mathbf{M}=\mathbf{P}_{\Pi}\big{(}\mathbf{M}^{\mathrm{tri}},c, \varepsilon_{\mathrm{pl}}^{\mathrm{v}}\big{)}\coloneqq\] \[\begin{cases}\mathbf{M}^{\mathrm{tri}},&\big{\|}\mathbf{M}^{ \mathrm{tri},\mathrm{dev}}\big{\|}\leq\sigma_{\mathrm{Y}}(c),\\ \frac{\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\|-2G\triangle\varepsilon_{ \mathrm{pl}}^{\mathrm{v}}}{\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\|}\mathbf{ M}^{\mathrm{tri},\mathrm{dev}}+\frac{1}{3}\operatorname{tr}\!\left(\mathbf{M}^{ \mathrm{tri}}\right)\mathbf{Id},&\big{\|}\mathbf{M}^{\mathrm{tri},\mathrm{dev} }\big{\|}>\sigma_{\mathrm{Y}}(c).\end{cases} \tag{29b}\]
Finally, we can reformulate Equation (27) using the projector formulation for the specific plastic behavior and arrive at the primal formulation: find solutions \(\{c,\mu,\mathbf{u}\}\) with \(c,\mu\in V\coloneqq H^{1}\left(\Omega_{0},[0,1]\right)\), \(\partial_{t}c\in L^{2}(\Omega_{0},\mathbb{R})\) and \(\mathbf{u}\in\mathbf{V}^{*}\), such that
\[(\varphi,\partial_{t}c) =-\Big{(}m(c,\mathbf{\nabla}_{0}\mathbf{u})\mathbf{\nabla}_{0}\varphi,\mathbf{ \nabla}_{0}\mu\Big{)}-(\varphi,N_{\mathrm{ext}})_{\partial\Omega_{0}}\,, \tag{30a}\] \[0 =-\left(\varphi,\mu\right)+\big{(}\varphi,\partial_{c}\psi_{ \mathrm{ch}}(c)+\partial_{c}\psi_{\mathrm{el}}(c,\mathbf{\nabla}_{0}\mathbf{u})\big{)},\] (30b) \[\mathbf{0} =\Big{(}\mathbf{\nabla}_{0}\mathbf{\xi},\mathbf{P}\big{(}c,\mathbf{\nabla}_{ 0}\mathbf{u},\mathbf{P}_{\Pi}\big{)}\Big{)} \tag{30c}\]
holds for all test functions \(\varphi\in V\), \(\mathbf{\xi}\in\mathbf{V}^{*}\) and with \(\mathbf{P}(c,\mathbf{\nabla}_{0}\mathbf{u},\mathbf{P}_{\Pi})=\lambda_{\mathrm{ch}}^{-2 }\mathbf{FP}_{\Pi}\big{(}\mathds{C}\left[\mathbf{E}_{\mathrm{el}}^{\mathrm{ tri}}\right]\!,c,\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\big{)}\). Note that by using the projector \(\mathbf{P}_{\Pi}\) we have transformed the plasticity inequality into a (non-smooth) nonlinearity.
#### 3.2.2 Space Discretization
Next, we introduce the spatial discretization and therefore choose a polytop approximation \(\Omega_{h}\) as computational domain for particle geometry \(\Omega_{0}\). For the approximation of curved boundaries, an isoparametric Lagrangian finite element method is chosen [23, Chapter III SS2] on an admissible mesh \(\mathcal{T}_{n}\). For the spatial discrete solution we define the finite dimensional subspaces for the basis functions with bases
\[V_{h} =\mathrm{span}\{\varphi_{i}\,:\,i=1,\ldots,N\}\subset V, \tag{31a}\] \[\mathbf{V}_{h}^{*} =\mathrm{span}\{\mathbf{\xi}_{j}\,:\,j=1,\ldots,dN\}\subset\mathbf{V}^{*}. \tag{31b}\]
On these finite dimensional subspaces we solve for the variables \(c_{h}\colon[0,t_{\mathrm{end}}]\to\{V_{h}:c_{h}\in[0,1]\}\), \(\mu_{h}\colon[0,t_{\mathrm{end}}]\to V_{h}\) and \(\mathbf{u}_{h}\colon[0,t_{\mathrm{end}}]\to\mathbf{V}_{h}^{*}\) the spatial discrete version of Equation (30). Relating the discrete solution variables with the finite basis functions
\[c_{h}(t,\mathbf{X}_{0}) =\sum_{i=1}^{N}c_{i}(t)\varphi_{i}(\mathbf{X}_{0}),\qquad\mu(t,\mathbf{X}_ {0})=\sum_{j=1}^{N}\mu_{j}(t)\varphi_{j}(\mathbf{X}_{0}), \tag{32a}\] \[\mathbf{u}_{h}(t,\mathbf{X}_{0}) =\sum_{k=1}^{dN}u_{k}(t)\mathbf{\xi}_{k}(\mathbf{X}_{0}), \tag{32b}\]
we gather all time-dependent coefficients in the vector valued function
\[\mathbf{y}\colon[0,t_{\mathrm{end}}]\to\mathbb{R}^{(2+d)N},\quad t\mapsto\mathbf{y}(t)= \begin{pmatrix}\mathbf{c}_{h}(t)\\ \mathbf{\mu}_{h}(t)\\ \mathbf{u}_{h}(t)\end{pmatrix}. \tag{33}\]
This leads to our spatial discrete problem formulated as general nonlinear differential equation DAE: Find \(\mathbf{y}\colon[0,t_{\mathrm{end}}]\to\mathbb{R}^{(2+d)N}\) satisfying
\[\mathbf{M}\partial_{t}\mathbf{y}-\mathbf{f}(t,\mathbf{y})=\mathbf{0}\qquad\text{for }t\in(0,t_{ \mathrm{end}}],\qquad\mathbf{y}(0)=\mathbf{y}^{0}. \tag{34}\]
The system matrix \(\mathbf{M}\) is singular since it has only one nonzero-block entry given by \(\mathbf{M}_{h}=\left[(\varphi_{i},\varphi_{j})\right]_{i,j}\) representing the mass matrix of the finite element space \(V_{h}\). The vector \(\mathbf{f}\) consists of matrices and tensors, given as \(\mathbf{f}\colon[0,t_{\mathrm{end}}]\times\mathbb{R}^{(2+d)N}\to\mathbb{R}^{(2+d )N}\),
\[(t,\mathbf{y})\mapsto\mathbf{f}(t,\mathbf{y})\coloneqq\begin{pmatrix}-\mathbf{K}_{m}(c_{h },\mathbf{\nabla}_{0}\mathbf{u}_{h})\mathbf{\mu}_{h}-\mathbf{N}_{\mathrm{ext}}\\ -\mathbf{M}_{h}\mathbf{\mu}_{h}+\mathbf{\Psi}_{\mathrm{ch}}(c_{h})+\mathbf{\Psi} _{\mathrm{el}}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h})\\ \mathbf{P}_{h}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h},\mathbf{P}_{\Pi})\end{pmatrix} \tag{35}\]
with the same indices as above: the mass matrix \(\mathbf{M}_{h}\), the stiffness matrix \(\mathbf{K}_{m}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h})=\left[(m(c_{h},\mathbf{\nabla}_{0 }\mathbf{u}_{h})\mathbf{\nabla}_{0}\varphi_{i},\mathbf{\nabla}_{0}\varphi_{j})\right]_{i,j}\), the vectors for the nonlinearities \(\mathbf{\Psi}_{\mathrm{ch}}(c_{h})=\left[(\varphi_{i},\partial_{c}\psi_{ \mathrm{ch}}(c_{h}))\right]_{i}\) and \(\mathbf{\Psi}_{\mathrm{el}}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h})=\left[(\varphi_ {i},\partial_{c}\psi_{\mathrm{el}}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h}))\right]_{i}\), \(\mathbf{P}_{h}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h},\mathbf{P}_{\Pi})=\left[(\mathbf{ \nabla}_{0}\mathbf{\xi}_{k},\mathbf{P}(c_{h},\mathbf{\nabla}_{0}\mathbf{u}_{h},\mathbf{P} _{\Pi}))\right]_{k}\) as well as the boundary condition \(\mathbf{N}_{\mathrm{ext}}=\left[(\varphi_{i},N_{\mathrm{ext}})_{\Gamma_{\mathrm{ ext}}}\right]_{i}\), respectively.
#### 3.2.3 Time Discretization
Before we write the space and time discrete problem, we have to consider the time evolution of the plastic deformation gradient \(\mathbf{F}_{\mathrm{pl}}\) since we apply the concept of static condensation and thus need to derive a time integration scheme. Therefore, we update the time integration separately from the time advancing of our Equation (34). Applying an implicit exponential map to Equation (18) leads to
\[\mathbf{F}_{\mathrm{pl}}^{n+1}=\exp\left(\Delta t\mathbf{D}_{\mathrm{pl}}^{n+1 }\right)\mathbf{F}_{\mathrm{pl}}^{n} \tag{36}\]
from one time step \(t_{n}\) to the next \(t_{n+1}=t_{n}+\tau_{n}\) with time step size \(\tau_{n}>0\). For the rate-independent plasticity we use the well known return mapping algorithm [20, Chapter 3] and [31] in an explicit form:
\[\|\mathbf{M}^{n+1,\mathrm{dev}}\|-\sigma_{\mathrm{F}}(c,\varepsilon _{\mathrm{pl}}^{\mathrm{v},n+1}) =\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\|-2G\triangle \varepsilon_{\mathrm{pl}}^{\mathrm{v}}-\left(\sigma_{\mathrm{Y}}(c)+\gamma^{ \mathrm{iso}}\varepsilon_{\mathrm{pl}}^{\mathrm{v},n+1}\right)\overset{!}{=}0 \tag{37}\] \[\iff\qquad\varepsilon_{\mathrm{pl}}^{\mathrm{v},n+1} =\frac{\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\|+2G\varepsilon _{\mathrm{pl}}^{\mathrm{v},n}-\sigma_{\mathrm{Y}}(c)}{2G+\gamma^{\mathrm{iso}}}, \tag{38}\]
which is straightforward for isotropic linear hardening. With the solution for \(\varepsilon_{\mathrm{pl}}^{\mathrm{v},n+1}\) from Equation (38) and the initial conditions \(\mathbf{F}_{\mathrm{pl}}(0,\mathbf{\cdot})=\mathbf{Id}\) and \(\varepsilon_{\mathrm{pl}}^{\mathrm{v}}(0,\mathbf{\cdot})=0\), all
necessary quantities can be updated. For more details regarding the time integration scheme for \(\mathbf{F}_{\mathrm{pl}}\) in the rate-independent case, see [29, Appendix C.5].
For the viscoplastic case, however, no explicit form can be retrieved for the accumulated plastic strain, due to the nonlinearity. Therefore, we have to use a scalar Newton-Raphson method for the current time increment \(\tau_{n}\) for \(\triangle\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\) of the implicit Eulerian scheme: solve Equation (23b) for \(\left\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\right\|>\sigma_{\mathrm{Y}}(c)\) using the relation \(\left\|\mathbf{M}^{n+1,\mathrm{dev}}\right\|=\left\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}}\right\|-2G\triangle\varepsilon_{\mathrm{pl}}^{\mathrm{v}}\) from Equation (37) with the residual of Equation (23b)
\[r_{\varepsilon}=\dot{\varepsilon}_{0}\bigg{(}\frac{\left\|\mathbf{M}^{ \mathrm{tri},\mathrm{dev}}\right\|-2G\triangle\varepsilon_{\mathrm{pl}}^{ \mathrm{v}}-\sigma_{\mathrm{F}}(c)}{\sigma_{\mathrm{Y}^{*}}}\bigg{)}^{\beta}- \frac{\triangle\varepsilon_{\mathrm{pl}}^{\mathrm{v}}}{\tau_{n}}. \tag{39}\]
For the time evolution of the DAE Equation (34), we apply the family of numerical differentiation formulas (NDFs) in a variable-step, variable-order algorithm, the _Matlab_'s ode15s [37, 38, 39, 40], since our DAE has similar properties as stiff ordinary differential equations [10]. An error control handles the switch in time step sizes \(\tau_{n}\) and order. We arrive at the space and time discrete problem to go on from one time step \(t_{n}\) to the next \(t_{n+1}\): find the discrete solution \(\boldsymbol{y}^{n+1}\approx\boldsymbol{y}(t_{n+1})\) satisfying
\[\alpha_{k_{n}}\mathbf{M}\left(\boldsymbol{y}^{n+1}-\boldsymbol{\chi}^{n} \right)-\tau_{n}\boldsymbol{f}\left(t_{n+1},\boldsymbol{y}^{n+1}\right)= \boldsymbol{0} \tag{40}\]
with \(\boldsymbol{\chi}^{n}\) composed of solutions on former time steps \(\boldsymbol{y}^{n},\ldots,\boldsymbol{y}^{n-k}\) and a constant \(\alpha_{k_{n}}>0\) dependent on the chosen order \(k_{n}\) at time \(t_{n}\)[38, Section 2.3]. The vector \(\boldsymbol{f}\) depends explicitly on the time \(t\) due to the time-dependent Neumann boundary condition \(N_{\mathrm{ext}}\).
#### 3.2.4 Adaptive Solution Algorithm
Since our DAE Equation (34) is nonlinear, we apply the Newton-Raphson method and thus need to compute the Newton update of the Jacobian in each time step. For this, we must linearize our equations, especially the projectors of Equation (28) and Equation (29). For the linearization of the projector defined in Equation (28), we follow [29, Appendix C.6] and propose a linearization of \(\mathbf{P}_{\Pi}(\mathbf{M}^{\mathrm{tri}},c,\varepsilon_{\mathrm{pl}}^{ \mathrm{v}})=\mathbf{P}_{\Pi}(\mathds{C}\left[\cdot\right],c,\varepsilon_{ \mathrm{pl}}^{\mathrm{v}})\) around \(\mathbf{E}_{\mathrm{el}}^{\mathrm{tri}}\) as
\[\mathds{I}_{\Pi}\Big{(}\mathds{C}\left[\mathbf{E}_{\mathrm{el}}^ {\mathrm{tri},\mathrm{dev}}\left(\boldsymbol{u}^{n}\right)\right]\!,c, \varepsilon_{\mathrm{pl}}^{\mathrm{v}}\Big{)}\coloneqq\\ \begin{cases}\mathds{C}_{K}+\mathds{C}_{G},&\left\|\mathds{C} \left[\mathbf{E}_{\mathrm{el}}^{\mathrm{tri},\mathrm{dev}}\left(\boldsymbol{u }^{n}\right)\right]\right\|\leq\sigma_{\mathrm{Y}}(c)+\gamma^{\mathrm{iso}} \varepsilon_{\mathrm{pl}}^{\mathrm{v},n},\\ \left(1-\frac{\gamma^{\mathrm{iso}}}{2G+\gamma^{\mathrm{iso}}}\varkappa\right) \frac{\sigma_{\mathrm{Y}}(c)}{\left\|\mathbf{M}^{\mathrm{tri},\mathrm{dev}} \right\|}\\ \quad\left(\mathds{C}_{G}-2G\frac{\mathbf{M}^{\mathrm{tri},\mathrm{dev}} \otimes\mathbf{M}^{\mathrm{tri},\mathrm{dev}}}{|\mathbf{M}^{\mathrm{tri}, \mathrm{dev}}|^{2}}\right)\\ \quad+\frac{\gamma^{\mathrm{iso}}}{2G+\gamma^{\mathrm{iso}}}\mathds{C}_{G}+ \mathds{C}_{K},&\left\|\mathds{C}\left[\mathbf{E}_{\mathrm{el}}^{\mathrm{tri},\mathrm{dev}}\left(\boldsymbol{u}^{n}\right)\right]\right\|>\sigma_{\mathrm{Y }}(c)+\gamma^{\mathrm{iso}}\varepsilon_{\mathrm{pl}}^{\mathrm{v},n}\end{cases} \tag{41b}\]
with \(\mathds{C}_{G}=2G\left(\mathbb{I}-\frac{1}{3}\mathbf{Id}\otimes\mathbf{Id}\right)\), \(\mathds{C}_{K}=K\mathbf{Id}\otimes\mathbf{Id}\), the bulk modulus \(K\) and \(\varkappa\) as in Subsection 3.2.1. The derivation for the linearization of Equation (41) is given in Appendix B as well as the formulation of the linearization for the projector of Equation (29).
Another possibility is the automatic differentiation (AD) framework, provided by [41]. More information is given in Subsection 4.1.2. To compute the Newton update, we use a direct LU-decomposition. The iteration number can be decreased if an appropriate initialization is chosen. Therefore, the starting condition for the first time step is stated in Subsection 4.1 while a predictor scheme is used during the further time integration [38].
Finally, we follow Algorithm 1 in [10] for the space and time adaptive solution algorithm. Both a temporal error estimator [37, 38, 39, 40] and a spatial error estimator are considered for the respective adaptivity. To measure spatial regularity, a gradient recovery estimator is applied [42, Chapter 4]. For marking the cells of the discrete triangulation for coarsening and refinement, two parameters \(\theta_{\mathrm{c}}\) and \(\theta_{\mathrm{r}}\) are applied with a maximum strategy [43]. Altogether, we use a mixed error control with the parameters \(\mathrm{RelTol}_{t}\), \(\mathrm{AbsTol}_{t}\), \(\mathrm{RelTol}_{x}\) and \(\mathrm{AbsTol}_{x}\). For further details we refer to [10].
## 4 Numerical Studies
This section deals with the investigation of the presented model from Section 2 with the numerical tools of Section 3. For this purpose we introduce the simulation setup in Subsection 4.1 and discuss the numerical results in Subsection 4.2 with 1D and 2D simulations, which are a 3D spherical symmetric particle reduced to the 1D unit interval and in addition a 2D quarter ellipse reduced from a 3D elliptical nanowire, respectively. The latter one is chosen to reveal the influence of asymmetric half-axis length on plastic deformation.
### Simulation Setup
As mentioned in the introduction, amorphous silicon is worth investigating due to its larger energy density and is therefore chosen as host material. The used model parameters are listed in Table 2. Most parameters are taken from [17, 18], however, for the plastic deformation we pick and adapt the parameters from [6] such that the yield stress is in the range of [44]. In particular, the ratio between \(\sigma_{\mathrm{Y,max}}\) and \(\sigma_{\mathrm{Y,min}}\) is maintained. If not otherwise stated, we apply an external flux of \(N_{\mathrm{ext}}=1\) C for lithiation and \(N_{\mathrm{ext}}=-1\) C for delithiation. Following [17], we charge between \(U_{\mathrm{max}}=0.5\) V and \(U_{\mathrm{min}}=0.05\) V, corresponding to an initial concentration of \(c_{0}=0.02\) and a duration of \(0.9\) h for one half cycle which is one lithiation of the host particle. The OCV curve \(U_{\mathrm{OCV}}(c)\) for silicon is taken from [45] and defined as \(U_{\mathrm{OCV}}\colon(0,1)\to\mathbb{R}_{>0}\) with
\[U_{\mathrm{OCV}}(c)\coloneqq\frac{-0.2453\,c^{3}-0.005270\,c^{2}+0.2477\,c+0. 006457}{c+0.002493}. \tag{42}\]
The curve is depicted in Appendix F in Figure 12.
#### 4.1.1 Geometrical Setup
We proceed by presenting two computational domains and present the boundary conditions due to new artificial boundaries. We choose a representative 3D spherical particle and reduce the computational domain to the 1D unit interval \(\Omega_{0}=(0,1)\), with a new artificial boundary \(\Gamma_{0}\), compare Figure 2(a), in the particle center with a no flux condition and zero displacement:
\[\mathbf{N}\cdot\mathbf{n}_{0}=0,\qquad u=0\qquad\text{on }(0,t_{\text{end}})\times \Gamma_{0}. \tag{43}\]
To ensure the radial symmetry, we adapt the quadrature weight to \(\,\mathrm{d}\mathbf{X}_{0}=4\pi r^{2}\,\mathrm{d}r\) in the discrete finite element formulation. In the 1D domain it is consistent to assume that the fields vary solely along the radius \(r\). As stated above, the initial concentration is \(c_{0}=0.02\), which leads to a one-dimensional stress-free radial displacement \(u_{0}=r\,(\lambda_{\text{ch}}(c_{0})-1)\). It follows that the initial chemical potential is \(\mu=\partial_{c}\psi_{\text{ch}}(c_{0})\).
In Figure 2(b), the 2D simulation case is shown in terms of a quarter ellipse. We create this geometry by considering a 3D nanowire with no change in \(z\)-direction as well as symmetry around the \(x\)- and \(y\)-axes. Here, further artificial boundaries on \(\Gamma_{0,x}\)
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Description** & **Symbol** & **Value** & **Unit** & **Dimensionless** \\ \hline Universal gas constant & \(R_{\text{gas}}\) & 8.314 & \(\mathrm{J\,mol^{-1}\,K^{-1}}\) & 1 \\ Faraday constant & Fa & 96485 & \(\mathrm{J\,V^{-1}\,mol^{-1}}\) & 1 \\ Operation temperature & \(T\) & 298.15 & K & 1 \\ \hline \multicolumn{5}{c}{Silicon} \\ \hline Particle length scale & \(L_{0}\) & \(50\times 10^{-9}\) & m & 1 \\ Diffusion coefficient & \(D\) & \(1\times 10^{-17}\) & \(\mathrm{m^{2}\,s^{-1}}\) & 14.4 \\ OCV curve & \(U_{\text{OCV}}\) & Equation (42) & V & \(F/R_{\text{gas}}T\cdot\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq
and on \(\Gamma_{0,y}\) with no flux conditions and only radial displacement:
\[\mathbf{N}\cdot\mathbf{n}_{0}=0, u_{y}=0, \text{on }(0,t_{\text{end}})\times\Gamma_{0,x}, \tag{44a}\] \[\mathbf{N}\cdot\mathbf{n}_{0}=0, u_{x}=0, \text{on }(0,t_{\text{end}})\times\Gamma_{0,y}, \tag{44b}\]
have to be introduced. We make use of an isoparametric mapping for the representation of the curved boundary on \(\Gamma_{\text{ext}}\). Again, we choose a constant initial concentration \(c_{0}=0.02\) and a chemical potential \(\mu=\partial_{c}\psi_{\text{ch}}(c_{0})\). However, we use for the displacement the condition \(\mathbf{u}_{0}=\mathbf{0}\).
#### 4.1.2 Implementation Details
All numerical simulations are executed with an isoparametric fourth-order Lagrangian finite element method and all integrals are evaluated through a Gauss-Legendre quadrature formula with six quadrature points in space direction. Our code implementation is based on the finite element library deal.II[41] implemented in C++. Further, we use the interface to the Trilinos library [46, Version 12.8.1] and the UMFPACK package [47, Version 5.7.8] for the LU-decomposition for solving the linear equation systems. A desktop computer with 64 GB RAM, Intel i5-9500 CPU, GCC compiler version 10.5 and the operating system Ubuntu 20.04.6 LTS is used as working machine. Furthermore, OpenMP Version 4.5 is applied for shared memory parallelization for assembling the Newton matrix, residuals and spatial estimates and message passing interface (MPI) parallelization with four MPI-jobs for the 2D simulations with Open MPI 4.0.3. Unless otherwise stated, we choose for the space and time adaptive algorithm tolerances of \(\text{RelTol}_{t}=\text{RelTol}_{x}=1\times 10^{-5}\), \(\text{AbsTol}_{t}=\text{AbsTol}_{x}=1\times 10^{-8}\), an initial time step size \(\tau_{0}=1\times 10^{-6}\) and a maximal time step size \(\tau_{\text{max}}=1\times 10^{-2}\). For the marking parameters of local mesh coarsening and refinement, \(\theta_{\text{c}}=0.05\) and \(\theta_{\text{r}}=0.5\) are set. A minimal refinement level of five is applied for the 1D simulations, in order to achieve a parameterization as general as possible for all different 1D simulation. Due to limited regularity of the nonlinearity of the plastic deformation, we limit the maximal order of the adaptive temporal adaptivity by two.
Figure 2: Computational domains in 1D in (a) and 2D in (b) used for the numerical simulations.
### Numerical Results
In this section we consider the numerical results of the 1D spherical symmetric particle and the 2D quarter ellipse computational domain. We analyze the computed fields such as stresses and concentrations as well as the computational performance of our presented model and implementation scheme. Further, we compare the computational times for using the derived linearization of the projector formulation and an automatic differentiation (AD) technique.
#### 4.2.1 1D Spherical Symmetry
In a first step, we analyze the effect of plastic deformation on the chemo-physical behavior of the 1D domain depicted in Figure 2(a). Detailed studies for purely elastic behavior can be found in, e.g., [11, 18] and are included in this study for comparison.
**Physical results after one half cycle.** In Figure 3, we compare the numerical results for the concentration, the plastic strains as well as the tangential Cauchy stresses between the elastic, plastic and viscoplastic model after one lithiation of the host particle, that means one half cycle at SOC = 0.92 over the particle radius \(r\). The changes of the concentration profiles due to plastic deformation are displayed in Figure 3(a). It is clearly visible that the concentration gradient increases in both the plastic and viscoplastic case in the vicinity of the particle surface, whereas lower concentration values occur in the particle center compared to the elastic case. This is due to the limited maximal stress in the plastic models. The lower stresses inside the particle lead to a lower mobility in the particle interior, c.f. Equation (13). This gradient in the Li mobility leads to the observed pile up of Li atoms at the particle surface. Comparing the plastic and viscoplastic case, a smoother transition from elastic to plastic is visible and therefore a little shift of the concentration values to the particle surface, see the lower magnifying glass in Figure 3(a). This is commonly referred to as viscoplastic regularization [20, Sec. 1.7.1.4.]. The second magnifier shows an area, where the slope of the concentration profile changes close to the particle surface, revealing a second plastic deformation process during lithiation, which is also observable at the change in slope of the equivalent plastic strain, c.f. Figure 3(b). This becomes more apparent, when the tangential Cauchy stresses are investigated as a function of the SOC, c.f. Figure 3(d) and Figure 4(b).
Figure 3(b) shows the equivalent plastic strain \(\varepsilon_{\text{pl}}^{\text{v}}\), revealing plastic deformations near the particle surface. During lithiation, the particle deforms plastically twice. The first plastic deformation process occurs in the initial stages of charging at low SOC \(\leq\) 0.13, c.f. Figure 3(d), and leads to equivalent plastic strains of 3.4 %. Upon further lithiation, this process repeats at SOC \(\approx\) 0.8, c.f. Figure 3(d) and the magnitude of the equivalent plastic strain increases to 4 %. For the tangential Cauchy stress \(\sigma_{\phi}\), c.f. Figure 3(c) and (d), there is a change of the stress direction from compressive stress to tensile stress at the particle surface for the plastic cases compared to the elastic case. This change in sign for the tangential stresses occurs in the area, where the particle undergoes plastic deformation beforehand. The heterogeneous plastic deformation thus leads to an eigenstrain, that results in tensile stresses near the particle surface, which cannot be observed in the elastic case. This means that for an
almost fully lithiated particle, there is a significant shift in the stress development and the plastic deformation leads to tensile stresses close to the particle surface. This crucial change in the stress profile at a small spatial area of the particle is important to recognize for the battery life time. Figure 3(d) displays the tangential Cauchy stress at the particle surface versus the SOC over the complete half cycle. Directly after the start compressive Cauchy stresses occur where the elastic approach increases to larger values compared to the plastic approaches, which are limited due to the onset of plastic deformations. The viscoplastic model shows larger negative tangential stresses, i.e. an overstress above the yield stress, when compared to the rate independent models, also allowing larger elastic strains, c.f. Figure 4(a). After reaching the maximal values the stresses reduce in all cases. However, the plastic approaches predict tensile stresses around SOC = 0.6. At SOC \(\approx 0.8\), the particle deforms plastically a second time, reducing the maximal tensile stresses in tangential direction. Striking is the difference between the elastic case revealing only compressive tangential Cauchy stresses compared to the plastic approaches featuring also tensile tangential Cauchy stresses. The
**Fig. 3**: Numerical results for the elastic (Ela.), plastic (Pla.) and viscoplastic (Vis.) approaches of the 1D radial symmetric case at SOC = 0.92 over the particle radius \(r\): concentration \(c\) in (a), equivalent plastic strain \(\varepsilon_{\rm pl}^{\rm v}\) in (b) and tangential Cauchy stress \(\sigma_{\phi}\) in (c) as well as tangential Cauchy stress \(\sigma_{\phi}\) at the particle surface \(r=1.0\) over SOC in (d).
radial Cauchy stress is not plotted since we have stress-free boundary condition at the particle surface. These findings are qualitatively comparable to numerical results from [48], Fig. 4(c), and [5], Fig. 5(d), compare also the numerical results at SOC \(=0.1\) in Figure 9 of Appendix C. We want to point out that both the plastic and the viscoplastic model lead to almost equivalent numerical results at the end of the first half cycle. This is, however, not the case for results after multiple half cycles, which is outlined below. Before we proceed by investigating the influence of several material parameters and multiple half cycles, we also compare the Green-St-Venant strain tensor or Lagrangian strain with our used logarithmic strain tensor. Both approaches predict almost identical results, see Appendix D, as also observed in the numerical results in [49].
**Parameter studies.** In a next step we take a closer look at the stress-strain curves of the different mechanical approaches and analyze the influence of the maximal yield stress \(\sigma_{\mathrm{Y,max}}\) on the tangential Cauchy stress \(\sigma_{\phi}\) at the particle surface \(r=1.0\) in Figure 4. Furthermore, we compare the dependency on the C-rate and the particle size in Figure 5. The AD concept is used for all parameter studies, see the next subsection _numerical efficiency_ for more details. The comparison in Figure 4(a) shows the effects of the different mechanical approaches: elastic, plastic and viscoplastic deformation as well as ideal plastic deformation with \(\gamma^{\mathrm{iso}}=0\). For the ideal plastic case, we choose a uniform grid with ten refinements and a backward differentiation formula (BDF) time stepping of order two to increase numerical performance. Comparing the elastic and all plastic cases, the maximal compressive Cauchy stress of the elastic solution is not reached in the plastic cases, however, the stresses decrease rapidly after reaching the yield stress. Again, the viscoplastic overstress becomes apparent, c.f. Figure 3(d). Further, the influence of the concentration dependent yield stress is clearly visible, since with a constant yield stress, the curve would just move straight
**Fig. 4**: Influence of the different plasticity approaches in (a) and of the maximal yield stress of the viscoplasticity approach in (b) for the tangential Cauchy stress \(\sigma_{\phi}\) at the particle surface \(r=1.0\).
upwards (red dotted reference line) instead featuring a shift to the right. The tangential stress decreases rapidly after an initial plastic deformation, see also Figure 4(b), so that no further plastic deformation occurs. In contrast to the elastic model, tangential tensile stresses occur for large SOC values in all plasticity models, which lead to another onset of plastic deformation. For larger SOC values, only the plasticity approaches show tensile tangential Cauchy stresses with a second plastic deformation at the end of the first lithiation. As stated below, the viscoplastic model including the viscoplastic regularization leads to better numerical properties. In addition, the experiments of Pharr et al. [15] indeed indicate some sort of rate-dependent inelastic behavior, which, in the opinion of the authors, makes the usage of a viscoplastic model more plausible. Therefore, we continue our investigation with the viscoplastic model unless otherwise stated. Figure 4(b) compares the influence of a varying maximal yield stress \(\sigma_{\mathrm{Y,max}}\) on the tangential stress at the particle surface over the SOC. For smaller values of \(\sigma_{\mathrm{Y,max}}\), the particle starts yielding at smaller tangential stresses, leading to an overall decrease in the observed minimal tangential stresses \(\sigma_{\phi}\) at small SOC. In contrast, the earlier the plastic deformation occurs, the larger the tangential tensile stresses at higher SOC. In addition, the decrease in yield stress with increasing concentrations can be observed, as the tangential tensile stresses lead to further plastic deformation at lower stress levels, indicated with the cyan arrow in Figure 4(b). However, no plastic deformation is visible for a maximal yield stress of 1.0 GPa, which shows a purely elastic response.
Next, we analyze the dependency of the plastic deformation on different C-rates and particle sizes in Figure 5(a) and (b). Again, the tangential Cauchy stress \(\sigma_{\phi}\) is plotted over the SOC. For fast charging batteries, high C-rates are desirable for a comfortable user experience. However, Figure 5(a) shows that for higher C-rates higher tangential Cauchy stresses arise which lead especially for higher SOC values to a large area of plastic deformation. The smaller the C-rate, the lower the occurring stresses and the smaller the plastically deformed regions in the anode particle. For the parameter set considered in this study, decreasing the C-rate by 50 % leads to
Figure 5: Tangential Cauchy stress \(\sigma_{\phi}\) over SOC at the particle surface \(r=1.0\) for varying C-rate in (a) and particle size in (b).
purely elastic deformations. Similar results are observed, when an increasing particle diameter is considered, as in Figure 5(b): the larger the particles, the greater the stresses and the area of plastic deformation. This results from larger concentration gradients for larger particles since the lithium needs longer to diffuse to the particle center. The simulation for particle size \(L_{0}=200\) nm terminates at \(\text{SOC}\approx 0.55\), since at this simulation time a concentration of one is reached at the particle surface. For the largest particle radius, the heterogeneity in the concentration profile is even more pronounced. This also explains the apparent lower yield stress when the \(L_{0}=200\) nm and \(L_{0}=100\) nm curves are compared. In the larger particle, the concentration at the boundary is larger at smaller SOCs, due to the heterogeneity in the mobility, as outlined above. In conclusion, Figure 5 shows that small particles and low C-rates are preferred in order to avoid high stresses and irreversible plastic deformations.
**Numerical efficiency.** To show the capabilities of the adaptive solution algorithm of Subsection 3.2.4, we consider in Figure 6(a) the time step size \(\tau_{n}\) and the used order of the NDF multi-step procedure over two half cycles, i.e., one lithiation and one delithiation step with \(t_{\text{end}}=1.8\) h in total, and in Figure 6(b) the refinement level for the spatial refinement after one lithiation at \(\text{SOC}=0.92\), comparing the lithium concentration distribution over the particle radius \(r\). Here, we use a maximal order for the time adaptivity of three for Figure 6(a) and a minimal refinement level of three in Figure 6(b), respectively. During the first lithiation, there are three changes in the time step size and used time order: after starting with order one and switching to order two and three, the first plastic deformation arises at around \(t=0.04\) h at the first vertical gray reference line. Here, the time step sizes decrease and the used order goes down to two. After recovering to larger step sizes and orders again, the plastic deformation has ended at the second gray line at \(t=0.12\) h and the particle deforms elastically again. Then, a large time range with the maximal time step size \(\tau_{\text{max}}\) and maximal order is passed through. Shortly before the end of the first half cycle, the step sizes and order decrease again (third gray line). In this instant, plastic
**Fig. 6**: Advantages of the adaptive solution algorithm: time step size \(\tau_{n}\) over simulation time \(t\) for two half cycles (one lithiation and one delithiation) in (a) and concentration \(c\) as well as refinement level of the spatial discretization over the particle radius \(r\) at \(\text{SOC}=0.92\) in (b).
deformation occurs again, due to the tensile stresses at the particle surface. Changing the external lithium flux direction from charging to discharging is not trivial for the adaptive algorithm: the order decreases to one and the time step sizes drops over five orders of magnitude at the red reference line. After recovering from this event, one further plastification occurs shortly after the red reference line before the maximal time step size and the maximal order are reached again. At the time of \(t=1.7\) h, indicated with the fourth gray reference line, the next plastic deformation happens accompanied with a reduction of time step sizes and lower order. In total, there are two further plastifications during delithiation. In Figure 6(b), the focus is on the spatial adaptivity. We see the concentration distribution \(c\) over the particle radius \(r\) and the refinement level of the cells of our triangulation. It is clearly visible, that areas with larger concentration gradients have a higher refinement level due to the used gradient recovery error estimator.
So far, we have used the derived linearization of the projector formulation like in [29, Appendix C.6]. deal.II offers the possibility to use an AD framework via Sacado (a component of Trilinos) [46]. We use the tapeless dynamic forward-mode Sacado number type (one time differentiable). In Table 3 we consider the average number of Newton iterations per time step, the assembling time for the Newton matrix and the assembling time for the right hand side of the linear equation system for one half cycle. We compare the results for the derived linearization and the AD technique in the 1D and the 2D simulation setup. More numerical results for the latter case are given in Subsection 4.2.2. The number of time steps and the average number of Newton iterations per time step are similar for the 1D and 2D simulations, respectively. However, the results look different for the assembling time. The assembling of the Newton matrix and the right hand side has to be summed up for case of the linearization, whereas only the assembling of the right hand side is necessary for AD. There are significant differences: the linearization is twice as slow as AD for the 1D simulation and even ten times slower for the 2D simulation. This is a remarkable acceleration in the assembling times when solving for the Newton update.
**Physical results after nine half cycles.** Before proceeding to investigate the 2D computational domain, we use the capabilities of the spatially and temporally adaptive algorithm together with the AD technique, to study multiple charging and
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **Time steps** & **Average Newton** & **Matrix** & **rhs** \\ \hline Linearization 1D & 227 & 1.22 & 3.64 sec & 0.32 sec \\ AD 1D & 229 & 1.21 & — & 1.80 sec \\ Linearization 2D & 314 & 1.15 & 5050 sec & 5.56 sec \\ AD 2D & 338 & 1.18 & — & 418 sec \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of number of time steps, average number of Newton iterations per time step, assembling time for the Newton matrix and assembling time for the right hand side (rhs) of the linear equation system between the derived linearization and AD techniques for the 1D and the 2D simulation case.
discharging cycles. To be more precise, we consider five charging and four discharging cycles. Figure 7 shows that the viscoplastic case with the derived linearization or use of AD deliver identical results. The differences between the plastic and viscoplastic deformation are remarkable after nine half cycles, especially for the concentration in Figure 7(a). The plastic approach is almost identical to the elastic results, whereas the viscoplastic results are more similar to Figure 3. The similarity between the elastic and plastic results can be explained by the isotropic hardening, which leads to a large increase in the yield stress. Thus, after nine half cycles, occurring stresses inside the particle do not reach the yield stress, preventing the particle to deform plastically any further. This in turn leads to a more homogeneous distribution of stresses and thus no pile up of Li atoms close to the particle surface. The lower magnifier reveals also for the viscoplastic approach some difference for higher cycling numbers: a small wave occurs due to the change in the sign of the lithium flux for charging or discharging the active particle. This is important to notice because the small wave is already
Figure 7: Numerical results for the elastic (Ela.), plastic (Pla.), viscoplastic (Vis.) and viscoplastic with AD (AD) approaches of the 1D radial symmetric case at \(t=8.12\) h over the particle radius \(r\): concentration \(c\) in (a), equivalent plastic strain \(\varepsilon_{\rm pl}^{\rm v}\) in (b) and tangential Cauchy stress \(\sigma_{\phi}\) in (c) as well as tangential Cauchy stress \(\sigma_{\phi}\) over SOC in (d).
more pronounced for the tangential Cauchy stress in Figure 7(c) and is a candidate for possible further inhomogeneities. It should also be noted the higher magnitude in Figure 7(b) compared to Figure 3(b), indicating large extent of plastic deformation which could eventually lead to damage of the particle. Figure 7(d) shows the development of the tangential Cauchy stress at the particle surface over the SOC. For the elastic case there is no difference recognizable after the first half cycle and all further half cycles are identical. Also for the viscoplastic case, here computed with AD, only a small difference is visible at the lower left corner, see the magnifier in the bottom center. The second and all further lithiation cycles feature a higher compressive stress, even slightly higher than the elastic case, than in the first half cycle. This can be explained by the constant initial data. The magnifying glass at the right top shows the effect of the change from lithiation to delithiation with a short increase and followed decrease indicating an additional plastic deformation. A larger difference between the elastic and viscoplastic approach at lower SOC is observable at the end of the discharging cycle at the left top corner with the magnifier at the left center. The plastic ansatz with isotropic hardening, however, shows the biggest difference in each cycling. As already discussed, the tangential Cauchy stresses tend from cycle to cycle to the values of the elastic case indicated with the cyan arrows. It should be noted that the small wave of Figure 7(a) and (c) is not visible in this representation.
#### 4.2.2 2D Quarter Ellipse
We now evaluate numerical results for the 2D quarter ellipse simulation setup. Here, we adapt the initial time step size to \(\tau_{0}=1\times 10^{-8}\) and a minimal refinement level to three. Further, we set \(\theta_{\text{c}}=0.005\) and \(\tau_{\text{max}}=5\times 10^{-3}\). With the capabilities of the adaptive space and time solution algorithm as well as the presented AD technique, we consider now the effects of half-axes of different lengths on plastic deformation and concentration distribution.
**Physical results after one half cycle.** Figure 8 shows the numerical results at the end of one half cycle with \(t_{\text{end}}=0.9\) h and SOC \(=0.92\). The numbers of degrees of freedom (DoFs) are in a range of \([320,121472]\) and the total computation time is less than 15 minutes. In all subfigures, the adaptive mesh is visible in the background, indicating higher refinement levels at the external surface near the smaller half-axis. In Figure 8(a), the concentration profile is displayed in a range of \([0.91,0.94]\). It is noticeable that the gradient in the concentration is stronger at the particle surface on the short half-axis. Here, a larger area with lower concentration values in blue can be located with a small but steeply growing area near the particle surface. This property is comparable to the 1D simulation results with a higher increase near the particle surface in Figure 3(a) or in Figure 7(a).
Having a look on the scalar equivalent plastic strain \(\varepsilon_{\text{pl}}^{\text{v}}\) in Figure 8(b) the area of plastic deformation can be confirmed: plastic deformation occurs at the particle surface of the smaller half-axis. Large equivalent plastic strains of up to \(22\,\mathrm{\char 37}\) occur. Figure 8(c) shows the von Mises stress in the general plane state
\[\sigma_{\text{vM}}=\sqrt{\sigma_{11}^{2}+\sigma_{22}^{2}-\sigma_{11}\sigma_{2 2}+3\sigma_{12}^{2}}. \tag{45}\]
A significant inhomogeneity of the von Mises stress distribution in the particle is identifiable. Whereas the stress values at the larger half-axis are lower than 0.45 GPa, the von Mises stresses are more than 1.5 times higher at the smaller half-axis and feature two peaks near the particle surface: one due to tensile stresses because of the plastic deformation next to the surface and one due to compressive stresses a little further inside of the particle, both again like in the 1D case in Figure 3(c) or in the 1D case in Figure 3(d).
**Fig. 8**: 2D quarter ellipse after one lithiation at SOC = 0.92 for concentration \(c\) in (a), equivalent plastic strain \(\varepsilon_{\rm pl}^{\rm v}\) in (b) and von Mises stress \(\sigma_{\rm vM}\) in (c) using AD techniques.
in Figure 7(c). This inhomogeneity of the stress distribution is especially important for further investigations on particle damage or fracture.
However, the finding of the higher stresses and plastic deformation at the smaller half-axis are in contrast to the numerical outputs in Figure 5(b). There, larger particle size results in larger stresses, so it would be reasonable to feature higher stresses at the longer half-axis. Though, the longer half-axis is responsible for lower concentration values in the particle center. On the smaller half-axis, the diffusion would create higher concentrations in the particle center. Thus, the influence of the lower concentration values at the longer half-axis is responsible for higher concentration gradients at the smaller half-axis resulting in higher stress and plastic deformation. In this context it is important to recall that a constant external lithium flux is used at the particle surface, which favors this behavior.
## 5 Conclusion and Outlook
We conclude our work with a summary and an outlook.
### Conclusion
Within this work, a large deformation, chemo-elasto-plastic model to predict the diffusion-deformation behavior of aSi anode particles within Li-Ion batteries is presented. The used plasticity model is in close analogy to the model presented in [6]. As an extension, we specify our theory for both rate-dependent and rate-independent plastic deformations, see Subsection 2.5, and compare numerical results of both theories. The chemical model relies on [11], where an experimental OCV curve is used to model the chemical contribution to the Helmholtz free energy. The derived model is implemented in an efficient finite element scheme, first presented in [10], relying on space and time adaptive algorithms and parallelization [18].
For both plastic models, a return mapping algorithm [31] is used in the context of static condensation, allowing the evaluation of the plastic deformations at the finite element integration point level instead of treating them as additional degrees of freedom. We present in Subsection 3.2.1 a projection onto the set of admissible stresses inspired by [19], which can be given explicitly in the case of the rate-independent model with linear hardening, whereas in the viscoplastic case the projector is implicit, due to the non-linearity introduced by the flow rule. The linearization of these projectors, necessary for computing Newton updates in the nonlinear solution scheme, is approximated in Subsection 3.2.4 inspired by [6]. In addition we apply AD techniques to circumvent the necessity of tedious analytic computations and assembling operations and compare the numeric performance of both approaches. We incorporate our DAE system into an existing efficient space and time adaptive solver, presented in Section 3.
The numerical results in Subsection 4.2 for a given set of parameters show a heterogeneous concentration distribution within the particle, when plastic models are considered. We attribute this to lower stresses inside the particle, which in turn affect the mobility of Li atoms and lead to a pile up at the particle's surface. In our studies, plastic deformations are limited to the outer parts of the particle and result in an eigenstrain that leads to tensile stresses at the particle surface during charging, which
is not observed in the elastic model. Both the concentration pile up and the plastic deformation can be mitigated by decreasing the particle size and or decreasing the charging rate.
Comparing the plastic and viscoplastic model in Figure 7, differences become visible after multiple charging and discharging cycles. On the one hand, the plastic model hardens isotropic and after several cycles the occurring stresses do not reach the increased yield limit, preventing further plastification. On the other hand, the viscoplastic model does show an increased amount of plastic deformations, which results in a differing stress and concentration distribution. As a result of Section 4, the differences are more pronounced the more cycles are considered, which seems not to be investigated before and affects battery performance and lifetime. This could be achieved by the use of several modern numerical techniques.
Investigating the performance of the projector linearization in comparison with the AD scheme in Table 3, we conclude that the AD scheme is way more efficient, due to the fact that the assembling of the Newton matrix is done simultaneously with the residual. This leads to decreased computation times. In addition, the use of AD circumvents the tedious analytical derivation as well as the fault prone implementation of the linearization.
When studying a two-dimensional problem in Subsection 4.2.2, an asymmetry in the concentration and plastic strain distribution is observed, which we attribute to the ellipsoidal particle shape and the resulting asymmetric concentration distribution.
To conclude, our study indicates that small particle sizes with a spherical shape result in a smaller build up of stresses and is therefore desirable. Moreover, semi-axes with different lengths should be avoided to obtain homogeneously distributed mechanical stresses. Regarding material behavior, the considered isotropic hardening mechanism is favorable, as less plastic strains build up after multiple charging and discharging cycles. In addition, this would prevent the sharp concentration gradients observed in our study and thus less interference with battery performance is to be expected. A further combination with the viscoplastic approach could be a possible extension.
### Outlook
To further extend the theory originating from this work, numerous paths can be taken. We consider the following ones to be especially interesting:
* Previous research pointed towards the usage of nano-wires as electrode geometry. The reason lies in the observation that lower electrode sizes decrease occurring stresses and plastic deformations. This is in agreement with the results of decreasing stresses for smaller particle radii. Still, a study of various nano-wire shaped electrode geometries is a reasonable application for the derived model and the proposed implementation scheme.
* Fracture and SEI formation typically occur in aSi anode particles [17]. Both have been considered previously, see, e.g. [17, 18, 24], and should be included in future extensions of this work, to better capture the physical behavior of the anode particles.
* A paper published recently by several of the authors [11], introduced an efficient scheme to compute particle deformation behavior when contact between particles or walls is introduced. A study of the elasto-plastic material behavior derived in this work, in combination with the obstacle, could provide further insight into the mechanics of anode materials during charging and discharging.
* Finally, a more robust and scalable solver can be developed to provide additional speedup compared to the currently used LU-decomposition when using MPI parallelization.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work in this paper.
## Credit authorship contribution statement
**R. Schoof:** Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft **J. Niermann:** Data curation, Formal Analysis, Methodology, Software, Validation, Writing - review & editing **A. Dyck:** Conceptualization, Formal Analysis, Investigation, Methodology, Writing - original draft **T. Bohlke:** Funding acquisition, Project administration, Resources, Supervision, Writing - review & editing **W. Dorfler:** Funding acquisition, Project administration, Resources, Supervision, Writing - review & editing
## Acknowledgement
The authors thank G. F. Castelli for the software basis and L. von Kolzenberg and L. Kobbing for intensive and constructive discussions about modeling silicon particles. R.S. acknowledges financial support by the German Research Foundation (DFG) through the Research Training Group 2218 SiMET - Simulation of Mechano-Electro-Thermal processes in Lithium-ion Batteries, project number 281041241. T.B. acknowledges partial support by the German Research Foundation (DFG) within the Priority Programme SPP2013 'Targeted Use of Forming Induced Residual Stresses in Metal Components' (Bo 1466/14-2). The support by the German Research Foundation (DFG) is gratefully acknowledged.
## ORCID
R. Schoof: [https://orcid.org/0000-0001-6848-3844](https://orcid.org/0000-0001-6848-3844)
J. Niermann: [https://orcid.org/0009-0002-0422-0521](https://orcid.org/0009-0002-0422-0521)
A. Dyck: [https://orcid.org/0000-0001-7239-9653](https://orcid.org/0000-0001-7239-9653)
T. Bohlke: [https://orcid.org/0000-0001-6884-0530](https://orcid.org/0000-0001-6884-0530)
W. Dorfler: [https://orcid.org/0000-0003-1558-9236](https://orcid.org/0000-0003-1558-9236) |
2303.05578 | Log-Normal Waiting Time Widths Characterize Dynamics | Many astronomical phenomena, including Fast Radio Bursts and Soft Gamma
Repeaters, consist of brief, separated, seemingly aperiodic events. The
intervals between these events vary randomly, but there are epochs of greater
activity, with shorter mean intervals, and of lesser activity, with longer mean
intervals. This variability can be quantified by a single dimensionless
parameter, the width of a log-normal fit to the distribution of waiting times
between events. If the distribution of event strengths is a power law, as is
often the case, this parameter is independent of the detection threshold and is
a robust measure of the intrinsic variability of the waiting times and of the
underlying dynamics. | J. I. Katz | 2023-03-09T21:01:20Z | http://arxiv.org/abs/2303.05578v3 | # Log-Normal Waiting Time Widths Characterize Dynamics
###### Abstract
Many astronomical phenomena, including Fast Radio Bursts and Soft Gamma Repeaters, consist of brief distinct aperiodic events. The intervals between these events vary randomly, but there are periods of greater activity, with shorter mean intervals, and of lesser activity, with longer mean intervals. A single dimensionless parameter, the width of a log-normal function fitted to the distribution of waiting times between events, quantifies the variability of the activity. This parameter describes its dynamics in analogy to the critical exponents and universality classes of renormalization group theory. If the distribution of event strengths is a power law, the width of the log-normal fit is independent of the detection threshold and is a robust measure of the dynamics of the phenomenon.
keywords: radio continuum, transients: fast radio bursts, methods: statistical
## 1 Introduction
Many episodic natural phenomena originate in complex and imperfectly understood physical processes. Astronomical examples include Fast Radio Bursts (FRB) and Soft Gamma Repeaters (SGR). In some, such as FRB, the responsible physical processes and their environment are not known. In others we may know which physical processes are responsible (magnetic reconnection in SGR), but lack sufficient understanding to calculate them quantitatively.
The universality classes of critical phenomena (Pelissetto and Vicari, 2002) suggest a path to qualitative categorization. Phenomena of different microphysical origin are sorted into universality classes on the basis of their critical exponents. Detailed knowledge of their microphysics, such as intermolecular potentials in liquid-gas critical points or the Hamiltonian of a ferromagnet, is not required to identify these classes and to establish fundamentally common dynamics among phenomena involving different physical processes.
Burst activity may be described by the distribution of intervals (waiting times) between detected bursts. This reflects two distinct properties of the underlying dynamics: the correlation (or lack thereof) between consecutive bursts (short-term memory) and slower variations in the mean level of activity (long-term memory). The distributions of waiting times in repeating FRB (Aggarwal et al., 2022; Hewitt et al., 2022; Li et al., 2021; Niu et al., 2022; Zhang et al., 2022) and in SGR burst storms (Hurley et al., 1994; Younes et al., 2020) have been well fit by log-normal functions. The width of such a distribution may be a robust descriptor of the underlying dynamics, analogous to the critical exponents that characterize universality classes in renormalization group theory. This paper describes the application of log-normal fits to waiting time distributions, and suggests their dimensionless standard deviation \(\sigma\) as a robust and fundamental quantitative metric.
## 2 Bursting Phenomena
Bursting phenomena may be described by their distribution of strengths (energy, flux, fluence, electromagnetic field) and by their temporal distribution. Distributions of strengths are often power laws (Younes et al., 2020; Zhang et al., 2022). This is a consequence of the absence of a characteristic scale over a wide range of some parameter (Kolmogorov, 1941a,b; Katz, 1986) because any deviation from a power law would define a characteristic scale, the value of that parameter where the distribution deviates from a straight line on a log-log plot. In some FRB a break is observed in the distribution of energies (Zhang et al., 2022), defining a characteristic value, and some SGR have shown extreme outliers, also not described by a power law distributions (Katz, 2021); power laws are widespread but not universal.
Most episodically outbursting phenomena display periods of greater and lesser activity with shorter and longer mean waiting times, respectively. The width \(\sigma\) of the waiting time distribution thus quantifies the _variability_ of their activity. In one limit outbursts are regularly periodic and \(\sigma=0\), while in the other limit brief periods of frequent activity are separated by long periods of quateate and \(\sigma\gg 1\); shot noise, with random events occurring at a constant mean rate, is intermediate.
## 3 Log-Normal fits
Log-normal fitting functions are widely used because with only three parameters, a midpoint, a maximum and a width, they provide good fits to a broad range of single-peaked distributions, wide as well as narrow (Wikipedia, 2022). These fits
may be entirely phenomenological, without basis in a causal model. Most uses of log-normal fits describe the distribution of some parameter of the individual events, but here we are concerned with the waiting times between them.
The logarithm of the product of a series of multiplications of a random variable is the sum of the logarithms of the variables, and executes a random walk. If there are many factors in the product the central limit theorem applies to the sum of the logarithms, and the result approaches a log-normal distribution (Shockley, 1957; Montroll and Shlesinger, 1982). The rate of convergence depends on the distributions of these logarithms, and for pathological distributions the sum may not converge at all.
A log-normal distribution of a variable is therefore a natural consequence if it is the result of a series of independent stochastic multiplicative steps. The observed \(\sigma\) of a waiting time distribution would be \(\propto N^{-1/2}\), where \(N\) is the number of independent steps, in series, weighted by their frequency: if there is a necessary step that occurs at a low rate, the observed waiting time distribution would reflect the distribution of that infrequent step, while a step that occurs at a high rate would have little effect on \(\sigma\). A log-normal distribution will be a good fit if there are several required steps, each with a comparable rate, making the central limit theorem applicable to the sum of their logarithms.
Power-law distributions are not fit by log-normal functions because they do not have a peak. All power-law distributions of data must have at least one cutoff, either a threshold for detection or an intrinsic characteristic scale, in order that the total number of events and the total energy (or an analogous quantity) be finite. The distribution peaks at the cutoff. Abruptly cut-off power laws can be fit by log-normal functions, although not closely, but a more gradual cutoff and a semilog plot can hide many sins.
A general form of a log-normal distribution of waiting time \(\Delta t\) between events is
\[f(\Delta t)=A\exp{[-(\ln\Delta t-\ln\Delta t_{0})^{2}/2\sigma^{2}]}, \tag{1}\]
where \(\Delta t_{0}\) is the most frequent waiting time and \(\sigma\) is the dimensionless standard deviation of the distribution. If there are \(N\) values of \(\Delta t\) in the data the normalization factor \(A=N/(\sqrt{2\pi}\sigma)\).
For data with the empirical distribution \(f(\ln\Delta t)\), \(\ln\Delta t_{0}\) is taken to be the mean
\[\ln\Delta t_{0}=\frac{\int_{-\infty}^{\infty}\!\!f(\ln\Delta t)\ln\Delta t\,d \ln\Delta t}{\int_{-\infty}^{\infty}\!\!f(\ln\Delta t)\,d\ln\Delta t}. \tag{2}\]
The standard deviation
\[\sigma=\sqrt{\frac{1}{\pi}\frac{\int_{-\infty}^{\infty}\!\!f(\ln\Delta t)(\ln \Delta t-\ln\Delta t_{0})^{2}\,d\ln\Delta t}{\int_{-\infty}^{\infty}\!\!f( \ln\Delta t)\,d\ln\Delta t}}. \tag{3}\]
## 4 Simple models
### Exact Periodicity
One limiting case of a waiting time distribution is that of periodic pulses; all waiting times are the same. This is a excellent approximation for radio pulsars, whose period derivative \(10^{-21}\lesssim\dot{P}\lesssim 10^{-9}\). Then \(\sigma\sim T\dot{P}/P\), where \(T\) is the duration of observations (generally, very intermittent rather than continuous). For observed pulsars \(\sigma\) is in the range \(10^{-14}\)-\(0.03\), with the largest values for young, rapidly slowing, pulsars (like the Crab) observed for decades, and the smallest values for a recycled (low-field) pulsar briefly observed. For a steadily slowing but nearly strictly periodic phenomenon like pulsar emission, \(\sigma\) is not very meaningful because of its dependence on \(T\).
### Shot Noise
For shot noise (Poissonian statistics) with mean rate \(\nu\)
\[f(\Delta t)=\nu\Delta t\exp{(-\nu\Delta t)}=\exp{[\ln\nu\Delta t-\exp{(\ln\nu \Delta t)}]}. \tag{4}\]
Performing the integrals in Eq. 2
\[\ln{(\nu\Delta t)_{0}}=-0.577, \tag{5}\]
and Eq. 3
\[\sigma=0.723. \tag{6}\]
### Two Widely Separated \(\Delta t\)
Another limiting case is that of only two possible values of \(\Delta t\), \(\Delta t_{1}\) and \(\Delta t_{2}\):
\[f(\ln\Delta t)=a\delta(\ln\Delta t-\ln\Delta t_{1})+(1-a)\delta(\ln\Delta t- \ln\Delta t_{2}). \tag{7}\]
Then
\[\ln\Delta t_{0}=a\ln\Delta t_{1}+(1-a)\ln\Delta t_{2} \tag{8}\]
and
\[\sigma=\sqrt{\frac{1}{\pi}\left[a\left(\ln{(\Delta t_{1}/\Delta t_{0})} \right)^{2}+(1-a)\left(\ln{(\Delta t_{2}/\Delta t_{0})}\right)^{2}\right]}. \tag{9}\]
This is the limit of a distribution with two narrow but separated peaks.
Double-peaked distributions are frequently found for the waiting times between fast radio bursts (Katz, 2019; Li et al., 2021; Zhang et al., 2021; Aggarwal et al., 2022; Hewitt et al., 2022; Niu et al., 2022; Zhang et al., 2022). The peaks at shorter waiting times are likely attributable to substructure within individual bursts rather than to the intervals between distinct bursts, and are not considered further.
## 5 Some data
Table 1 shows the values of \(\sigma\) of waiting time distributions of shot noise and of several astronomical datasets. These include very active FRB, burst storms from SGR 1806\(-\)20 and SGR 1935+2154 (associated with a Galactic FRB) and microglhetes of the Vela pulsar.
## 6 Discussion
The virtue of \(\sigma\) as a measure of the dynamics underlying a bursting source is its independence of the observing sensitivity if the distribution of event strengths is a power law. This is simply demonstrated: If \(\sigma\) depended on a detection threshold \(\mathcal{T}\), then the function \(\sigma(\mathcal{T})\) would define the characteristic signal strength equal to \(\mathcal{T}\) where \(\sigma\) is some specific value, such as 0.723 (its value for shot noise). That would be inconsistent with a power law distribution of signal strength,
a straight line on a log-log plot of number of events _vs._ signal strength, with no characteristic value (Kolmogorov, 1941a,b; Katz, 1986). Most of the phenomena under consideration (astronomical fast radio bursts, soft gamma repeaters and pulsar glitches) do not have obvious characteristic strengths, at least over a range several orders of magnitude wide, but there have been exceptions (Zhang _et al._, 2022).
Very small (\(\ll 1\)) values of \(\sigma\) of a waiting time distribution indicate periodicity. Values \(<0.723\), the value for shot noise, indicate a memory effect, like that of a noisy relaxation oscillator, that does not produce periodicity but instead has a characteristic repetition time scale, with shorter or longer waiting times less likely than for shot noise. This is often described as quasi-periodicity, and is shown in Table 1 for the post-microglitch changes of the spin-down rate of the Vela pulsar. These \(\delta\nu\) have a preferred scale, although the timing of the microglhells is consistent with shot noise.
Values of \(\sigma>0.723\) indicate varying rates of activity (shot noise has the most random possible statistics if the mean or statistically expected rate of activity is unchanging). Most of the data shown in Table 1 have \(\sigma\) somewhat, but not greatly, larger than 0.723, indicating changing levels of activity. Much larger values of \(\sigma\), as found for SGR 1806\(-\)20, indicate periods of much greater ("burst storms") and of lesser activity; a toy model is discussed in Sec. 4.3. The fact that SGR and FRB show periods of greater and lesser activity has long been known; Zhou _et al._ (2022) show an extreme example for FRB 20201124A, that resumed activity after an extended quiet period (Niu, 2022).
Table 1 shows that for the several repeating FRB for which sufficient waiting time data are available, \(\sigma\) has approximately the same value, indicating common underlying dynamics. This is hardly surprising. Few other repeating FRB have sufficient data to determine \(\sigma\) (unfortunately, the Galactic FRB 200428, remarkably associated with a SGR, does not). In contrast, the two SGR waiting time distributions have widths whose difference far exceeds their uncertainties; perhaps SGR 1935+2154, the source of FRB 200428, is fundamentally different from other SGR, even the most intense of which has not been associated with a FRB (Tendulkar, Kaspi & Patel, 2016).
## Acknowledgment
I thank D. Eardley, J. Goodman, R. Grober, M. Maharbiz, W. Press, A. Rollett and J. Tonry for useful discussions.
## Data availability
This theoretical study did not generate any new data.
|
2305.00956 | Non-Binary LDPC Code Design for Energy-Time Entanglement Quantum Key
Distribution | In energy-time entanglement Quantum Key Distribution (QKD), two users extract
a shared secret key from the arrival times (discretized as symbols) of
entangled photon pairs. In prior work, Zhou et al. proposed a multi-level
coding (MLC) scheme that splits the observed symbols into bit layers and
utilizes binary Low-Density Parity-Check (LDPC) codes for reconciliation of the
symbols. While binary LDPC codes offer low latency for key generation,
splitting the symbols into bits results in a loss of key generation rate due to
error propagation. Additionally, existing LDPC codes do not fully utilize the
properties of the QKD channel to optimize the key rates. In this paper, we
mitigate the above issues by first generalizing the MLC scheme to a
non-binary(NB) MLC scheme that has layers with non-binary symbols and utilizes
NB-LDPC codes. We show the NB-MLC scheme offers flexibility in system design.
Additionally, we show that the NB-MLC scheme with a small symbol size per layer
offers the best trade-off between latency and key rate. We then propose a
framework to jointly optimize the rate and degree profile of the NB-LDPC codes
that is tailored towards the QKD channel resulting in higher key rates than
prior work. | Debarnab Mitra, Lev Tauz, Murat Can Sarihan, Chee Wei Wong, Lara Dolecek | 2023-05-01T17:39:02Z | http://arxiv.org/abs/2305.00956v1 | # Non-Binary LDPC Code Design for Energy-Time Entanglement Quantum Key Distribution
###### Abstract
In energy-time entanglement Quantum Key Distribution (QKD), two users extract a shared secret key from the arrival times (discretized as symbols) of entangled photon pairs. In prior work, Zhou _et al._ proposed a _multi-level coding_ (MLC) scheme that splits the observed symbols into bit layers and utilizes binary Low-Density Parity-Check (LDPC) codes for _reconciliation_ of the symbols. While binary LDPC codes offer low latency for key generation, splitting the symbols into bits results in a loss of key generation rate due to error propagation. Additionally, existing LDPC codes do not fully utilize the properties of the QKD channel to optimize the key rates. In this paper, we mitigate the above issues by first generalizing the MLC scheme to a non-binary(NB) MLC scheme that has layers with non-binary symbols and utilizes NB-LDPC codes. We show the NB-MLC scheme offers flexibility in system design. Additionally, we show that the NB-MLC scheme with a small symbol size per layer offers the best trade-off between latency and key rate. We then propose a framework to jointly optimize the rate and degree profile of the NB-LDPC codes that is tailored towards the QKD channel resulting in higher key rates than prior work.
## I Introduction
Quantum Key Distribution (QKD) provides a physically secure way to share a secret key between two users, Alice and Bob, over a quantum communication channel in the presence of an eavesdropper Eve [1, 2, 3]. Energy-time entanglement QKD (ET-QKD) protocols have been studied extensively in literature due to their ability to extract multiple bits per generated entangled photon pairs [3, 4]. At a high level, an ET-QKD protocol consists of the following steps [5]: i) In the first step, called _generation_, Alice and Bob generate _raw keys_ using a quantum channel that are represented as sequences of symbols. Due to imperfections in the quantum channel, the raw keys at Alice and Bob may disagree in some positions; ii) In the second step, called _information reconciliation_ (IR), Alice and Bob communicate over a public channel (accessible to Eve) to reconcile the raw keys; iii) In the third step, called _privacy amplification_ (PA), Alice and Bob amplify the privacy of the reconciled key by accounting for Eve's knowledge to generate the final shared secret key. Channel coding is utilized in the IR step to ensure that Alice and Bob arrive at an identical sequence of symbols. In this paper, similar to [5], we focus on the IR step of the protocol and assume that if we communicate \(m\) bits during the IR step, then PA results in a loss of \(m\) bits from the length of the reconciled key to get the shared secret key. The _key rate_ of the system is defined as the average length of the shared secret key obtained by Alice and Bob after PA.
A promising coding framework proposed to get high key rates is called the _multi-level coding_ (MLC) scheme [5] that has been considered for works such as [3, 6]. In the MLC scheme, the sequence of symbols after the generation step is converted into multiple bit layers and then each bit layer is sequentially reconciled using binary LDPC codes. Binary LDPC codes have low complexity and fast decoding algorithms and hence result in low latency and complexity for key generation. However, the MLC coding scheme suffers from error propagation where a decoding error in one of the bit layers results in decoding errors in subsequent bit layers leading to reduced key rates. Contrary to the MLC scheme with binary LDPC codes, non-binary (NB) LDPC codes that directly encode the generated symbols do not suffer from error propagation. Hence, NB-LDPC code can naturally lead to higher key rates. However, the symbols in the generation step can belong to a Galois field of size as large as \(2^{10}\) and it is known that iterative decoding of NB-LDPC codes has a very high complexity (log-linear in the field size [7]) leading to high latency for the key generation. Hence, baseline NB-LDPC codes with large field sizes are not favorable in QKD applications requiring low latency, such as in [8, 9].
In addition to the above latency vs. key rate trade-off, the LDPC codes used previously in the IR step of ET-QKD protocols have not fully utilized the properties of the ET-QKD channel. For example, [5] used a standard LDPC ensemble without optimization. Similarly, spatially-coupled (SC) LDPC codes, irregular repeat accumulate (IRA) codes, SC-IRA codes, and multi-edge-type (MET) codes have been discussed for the continuous-variable (CV) QKD [10, 11]. However, these works focus on channel models such as binary input additive white Gaussian noise (BIAWGN) that do not match the ET-QKD channel [12].
A unique property of the ET-QKD problem considered in this paper is that the key rate of the system is closely dependent on both the rate of the code and the frame error rate (FER) performance. Fig. 2 shows the FER and key rates obtained by a random LDPC code for different values of rate. From this graph, we see that increasing the code rate can improve the key rate even at the cost of higher FER, a phenomenon we see in both binary and non-binary LDPC codes. Additionally, the maximum in the key rate occurs for a relatively large value of FER (\(\sim 5\)%). While the conventional code design approach is to minimize the FER to a very small value for a given rate, in this case, the goal is to jointly optimize both the rate and the FER to achieve the largest key rate.
The degree distribution of an LDPC code is known to affect its FER performance. Degree distribution optimization techniques for LDPC codes based on code thresholds (e.g., [13]) optimize the degree distribution for a fixed rate and hence are not directly applicable to the current ET-QKD problem
that needs a joint rate and FER optimization. Additionally, the optimized degree distributions are designed for non-QKD channels (e.g., BIAWGN in [13]) and they do not result in large key rates as we demonstrate in Section V.
In the paper, we mitigate the above issues of latency vs. key rate trade-off and code design considering the properties of the ET-QKD channel using a two-pronged approach. Firstly, we generalize the MLC scheme of [5] to a non-binary MLC scheme by splitting the symbols after the generation step into multiple layers with non-binary symbols belonging to a smaller Galois field. The NB-MLC scheme offers a natural trade-off between latency and key rate depending on the size of the symbols in a layer, allowing flexibility in system design. Additionally, we demonstrate that the NB-MLC scheme with a small symbol size per layer results in higher key rates compared to a fully binary scheme [5] as well as using a fully non-binary scheme without layering. Secondly, we provide a joint rate and degree distribution optimization (JRDO) framework based on differential evolution [14] for the construction of the NB-LDPC codes in each layer of the NB-MLC scheme. The JRDO framework uses the QKD channel information and we demonstrate that it results in a higher key rate compared to the LDPC codes used in the MLC scheme [5] and that obtained by utilizing degree distributions optimized for conventional channels such as the BIAWGN channel [13]. The rest of this paper is organized as follows. In section II, we provide the preliminaries and the system model. In section III, we describe the NB-MLC scheme. In section IV, we provide the JRDO framework. Finally, we provide simulation results in section V and conclude the paper in section VI.
## II Preliminaries and System Model
#### Ii-1 ET-QKD system model
As shown in Fig. 1, in ET-QKD [3], energy-time entangled photon pairs are generated by a third party in the generation step. Alice and Bob then receive one photon each out of the pair who then record the arrival times of the received photons. The raw key information is derived from the arrival times. In this method, Alice and Bob both synchronize their timelines and then discretize their time into frames where each frame is further divided into \(2^{q}\) bins of equal size, where \(q\) is a positive integer. Alice and Bob retain only time frames in which they both detect a single photon arrival and discard all other frames. The photon arrival time in a non-discarded frame is then converted (discretized) into a symbol in \(\mathbb{GF}(2^{q})\) based on the bin number the received photon occupies within each frame. The discretized sequences received by Alice and Bob are then divided into blocks each having \(N\) symbols where \(N\) is the code length. Let \(\mathbf{X}=\{X_{1},\ldots,X_{N}\}\), \(X_{i}\in\mathbb{GF}(2^{q})\) and \(\mathbf{Y}=\{Y_{1},\ldots,Y_{N}\}\), \(Y_{i}\in\mathbb{GF}(2^{q})\) be the sequences of length \(N\) recorded by Alice and Bob, respectively. Due to imperfections in the generation step (e.g., timing jitters, transmission loss [5]) \(\mathbf{Y}\) is a noisy version of \(\mathbf{X}\). We assume the sequences \(\mathbf{X}\) and \(\mathbf{Y}\) are memoryless and each \(Y_{i}\) is the output of the ET-QKD channel characterized by transition law \(P_{Y|X}\) and input \(X_{i}\).
A simple IR protocol based on NB-LDPC codes in \(\mathbb{GF}(2^{q})\) proceeds as follows. Alice sends Bob \(\mathbf{S}=\mathbf{H}\mathbf{X}\) over the public channel (which is accessible to Eve) where \(\mathbf{H}\in\mathbb{GF}(2^{q})^{M\times N}\) is the parity check matrix of an NB-LDPC code. Bob decodes \(\mathbf{X}\) using the received \(\mathbf{S}\) and side information \(\mathbf{Y}\). LDPC decoding using side information is encountered in the Slepian-Wolf (SW) problem [16]. SW LDPC decoding is very similar to the sum-product decoder used in conventional decoding of LDPC channel codes with small differences in the way the log-likelihood messages are initialized and the CN to VN messages. We refer the reader to [16] for details about SW LDPC decoding. The goal of the NB-LDPC code is to make the decoding output equal to \(\mathbf{X}\) with high probability while ensuring that the information leaked to Eve is minimized. Finally, the sequence \(\mathbf{X}\) is the reconciled key that is passed to the PA step. The key rate \(r\) (in bits per photon) of the above scheme (similar to [5]) is given as follows:
\[r=q(1-E)\frac{N-M}{N}, \tag{1}\]
where \(E\) is the FER incorporated in the decoding of \(\mathbf{X}\). Note that we subtract \(M\) in Eqn. (1) since \(M\) symbols are sent over the public channel and hence will be lost due to PA.
#### Ii-2 ET-QKD channel
In this paper, we use empirical data from a practical ET-QKD system testbed [4] to estimate the channel transition law \(P_{Y|X}\) directly. For interested readers, authors in [12] have demonstrated a modeling that provides a good approximation of the ET-QKD channel. Succinctly, the ET-QKD channel is a mixture of _local_ and a _global_ channel with Gaussian and uniform distributions. The uniform distribution causes a low SNR in our system resulting in a high operating FER (\(\sim 1-10\%\)). Note that the ET-QKD channel is different from conventional channels such as AWGN, BSC, etc. As such, codes that have been optimized for these channels are not necessarily the best ones for the ET-QKD channel as we demonstrate in Section V.
#### Ii-3 NB-LDPC codes
A NB-LDPC code over \(\mathbb{GF}(2^{q})\) is defined by a sparse parity check matrix \(\mathbf{H}\in\mathbb{GF}(2^{q})^{M\times N}\). The matrix \(\mathbf{H}\) has a Tanner graph representation comprising
Fig. 1: QKD system model. The arrival times of photons are discretized using pulse position modulation. Each frame has \(2^{q}\) bins and the spacing between frames in called binwidth.
Fig. 2: KeyMatrix-EER vs. coding rate. Left-right-LDPC code in \(\mathbb{GF}(2^{6})\); Right panel: Binary LDPC code. Maximum in the key rate occurs at FER around 0.05 in both figures.
of \(M\) check nodes (CNs) and \(N\) variable nodes (VNs) corresponding to rows and columns of \(\mathbf{H}\). A CN is connected to a VN by an edge if the corresponding entry in \(\mathbf{H}\) is non-zero where the edge is additionally labeled by the non-zero entry. The interconnection between VNs and CNs of a code is represented by degree distributions \(L(x)=\sum_{d}L_{d}x^{d}\) and \(P(x)=\sum_{d}P_{d}x^{d}\), where \(L_{d}\) and \(P_{d}\) represent the fraction of nodes connected respectively to VNs and CNs of degree \(d\). The coding rate \(R\) of the code is given by \(R=1-\frac{L^{\prime}(1)}{P^{\prime}(1)}\). The FER performance of the code depends on the degree distributions \(L(x)\) and \(P(x)\). In this paper, we optimize the rate \(R\) and VN degree distribution \(L(x)\). For given \(R\) and \(L(x)\), we find a two-element distribution \(P(x)\) that results in rate \(R\). Now for the degree distributions \(L(x)\) and \(P(x)\), parity check matrix \(\mathbf{H}\) is randomly sampled among the ensemble of LDPC codes that match these degree distributions [15] and label each edge uniformly at random with a non-zero element of \(\mathbb{GF}(2^{q})\). In the next section, we propose the NB-MLC scheme for IR.
## III Non-Binary Multi-Level Coding
The NB-MLC scheme offers a tradeoff between key rate \(r\) and latency/complexity of key generation through an integer parameter \(a\), \(1\leq a\leq q\). Let \(b\) and \(r\) be integers such that \(q=ab+r\), where \(b=\lfloor\frac{q}{a}\rfloor\) and \(r\) is the remainder when \(q\) is divided by \(a\). Each symbol \(X\) in \(\mathbf{X}\) received by Alice is an element of \(\mathbb{GF}(2^{q})\). We split \(X\) into \(b+1\) symbols \((X_{1},X_{2},\ldots X_{b+1})\), where \(X_{i}\in\mathbb{GF}(2^{a}),1\leq i\leq b\) and \(X_{b+1}\in\mathbb{GF}(2^{r})\) using an injective mapping \(u:\mathbb{GF}(2^{q})\rightarrow\mathbb{GF}(2^{a})^{b}\times\mathbb{GF}(2^{r})\). Using the above conversion, we split the sequence \(\mathbf{X}\) into \(b+1\) layers \((\mathbf{X}_{1},\mathbf{X}_{2},\ldots,\mathbf{X}_{b+1})\), where \(\mathbf{X}_{i}\in\mathbb{GF}(2^{a})^{N},1\leq i\leq b\) and \(\mathbf{X}_{b+1}\in\mathbb{GF}(2^{r})^{N}\). Let \(\alpha_{i}\) denote the bit size of the symbols in the \(i\)th layer. We have \(\alpha_{i}=a,1\leq i\leq b\) and \(\alpha_{b+1}=r\). For each layer \(i\), we use a NB-LDPC code \(\mathbf{H}_{i}\) where \(\mathbf{H}_{i}\in\mathbb{GF}(2^{\alpha_{i}})^{m_{i}\times N}\) for \(1\leq i\leq b+1\). Now, Alice generates a message \(\mathbf{S}=\{\mathbf{S}_{1},\mathbf{S}_{2},\ldots,\mathbf{S}_{b+1}\}\) by setting \(\mathbf{S}_{i}=\mathbf{H}_{i}\mathbf{X}_{i}\), \(1\leq i\leq b+1\) and sends it to Bob over the public channel. Using \(\mathbf{S}\) and \(\mathbf{Y}\), Bob decodes every layer \(\mathbf{X}_{i},1\leq i\leq b+1\) and hence \(\mathbf{X}\) which is the reconciled key.
Let \(\widetilde{\mathbf{X}}_{1}^{i-1}:=\{\widetilde{\mathbf{X}}_{1},\widetilde{ \mathbf{X}}_{2},\ldots,\widetilde{\mathbf{X}}_{i-1}\}\) be the decoding result of layers \(1,2,\ldots i-1\). Similar to [5], Bob decodes layer \(\mathbf{X}_{i}\) with received message \(\mathbf{S}_{i}\), and side information \(\mathbf{Y}\) and \(\widetilde{\mathbf{X}}_{1}^{i-1}\) using SW decoding for NB-LDPC codes [16]. The equivalent channel for the \(i\)th layer takes input \(\mathbf{X}_{i}\) and outputs \(\{\mathbf{Y},\mathbf{X}_{1}^{i-1}\}\) with transition law \(\gamma^{i}:=P(Y=y,X_{1}^{i-1}=x_{1}^{i-1}|X_{i}=x_{i})\). We derive the transition law \(\gamma^{i}\) empirically from our QKD testbed and use it in Section IV for code optimization.
The size of massage \(\mathbf{S}_{i}\) sent by Alice for the reconciliation of the \(i\)th layer is \(m_{i}\). Let \(E_{i}\) be the FER for the \(i\)th layer. The key rate of the NB-MLC scheme is obtained by adding the key rates of each layer (similar to [5]) and is given by
\[r=\sum_{i=1}^{b+1}\alpha_{i}(1-E_{i})\frac{N-m_{i}}{N}. \tag{2}\]
The key rate depends on the coding rates \(R_{i}=\frac{N-m_{i}}{N}\) of \(\mathbf{H}_{i}\) used in layer \(i\). The parameter \(a\) in the NB-MLC scheme affects the key rate as well as the latency and hardware complexity (which are, respectively, the sum of the decoding latencies and the sum of the complexities of all the layers). Note that \(a=1\) gives us the binary MLC scheme of [5] and \(a=q\) provides a completely non-binary scheme with only one layer. As \(a\) is increased from \(1\) to \(q\), the complexity monotonically increases. However, as we demonstrate in Section V, the key rates are not monotonic in \(a\).
Finally, the performance of the system in terms of the key rate and latency also depends on the mapping \(u(X)\) used to split the symbols \(X\in\mathbb{GF}(2^{q})\) into symbols of different layers. For convenience, we use the following mapping. We first convert \(X\) into its binary representation \(X_{b}\). We then split the bits in \(X_{b}\) into \(b+1\) groups with the \(i\)th group having \(\alpha_{i}\) bits. We then treat the bits in each group \(i\) as a binary representation and convert them back to a symbol in \(\mathbb{GF}(2^{\alpha_{i}})\). A study on the effects of different mappings on the key rate and latency is beyond the scope of this paper and is part of future work. In the next section, we provide the JRDO framework based on differential evolution to jointly optimize the rate and degree distribution of the NB-LDPC codes.
## IV Joint Rate and
### Degree Distribution Optimization
In this section, we provide the framework to design parity check matrices \(\mathbf{H}_{i}\), \(1\leq i\leq b+1\) for use in the \(i\)th layer of the NB-MLC scheme with channel transition probability \(\gamma^{i}:=P(Y=y,X_{1}^{i-1}=x_{1}^{i-1}|X_{i}=x_{i})\). The construction method is the same for all layers, hence we drop index \(i\). In particular, we design the VN degree distribution \(L(x)\) and rate \(R\) for \(\mathbf{H}\) (see Section II-3 for how a non-binary \(\mathbf{H}\) is generated from \(L(x)\) and \(R\)). The channel transition probability is \(\gamma\).
Our framework utilizes differential evolution (DE) [14] to find \(L(x)\) and \(R\). DE is a popular and effective population-based evolutionary algorithm that can be used for a maximization (or minimization) of any function \(f()\). The algorithm iteratively improves a candidate solution (that maximizes \(f()\)) using an evolutionary process and can explore large design spaces with low complexity. DE has been extensively used in coding theory literature to design good irregular LDPC codes for the erasure channel [17], AWGN channel [13], Rayleigh fading channel [18], etc. The goal in these works is to design degree distributions that have low FER. This goal is achieved using DE where the function \(f()\) is generally set as some low complexity predictor of the FER performance of the code such as the threshold obtained by density evolution [13]. However, as discussed in section I, the goal for us in this paper is to maximize the key rate and not merely to minimize the FER. Additionally, the techniques for optimizing the degree distributions using code thresholds work for a fixed code rate and we have not found any previous work that jointly optimizes the code rate along with maximizing the threshold.
In this paper, following the expression for key rate in Eqn. (2), we jointly optimize the degree distribution \(L(x)\) and the coding rate \(R\) using DE by setting \(f(L(x),R)=(1-E)R\). Here, \(E\) is the expected FER of a code ensemble with degree distribution \(L(x)\) and rate \(R\) on a channel with transition law \(\gamma\). Note that to be able to optimize the above function using DE feasibly, the cost of computing the function must
be low (since the DE algorithm evaluates the function \(f()\) a certain fixed number of times at every iteration). However, as discussed in Section II-2, since the FER of the code is high (\(\sim 1-10\%\)), the FER \(E\) can be easily computed using Monte-Carlo (MC) simulations with a small number of MC experiments (e.g., 200-300). The overall JRDO algorithm is provided in Algorithm 1 where the procedures DiffMutation() and CrossOver() have regular meanings as per [14]. The contribution in Algorithm 1 is the use of the objective function \(f(L(x),R)=(1-E)R\) and its feasible evaluation using MC simulations owing to the high FER property of the ET-QKD channel thus making the joint optimization possible.
```
1:Initialize population \(\Pi=\{(L_{1},R_{1}),\ldots,(L_{N_{pop},R_{N_{pop}}})\}\)
2:for max number of iterations do
3:for\(j=1:N_{pop}\)do
4:\((L_{j}^{m},R_{j}^{m})=\)DiffMutation\((j,\Pi)\)
5:\((L_{j}^{c},R_{j}^{c})=\)CrossOver\(\big{(}(L_{j}^{m},R_{j}^{m}),(L_{j},R_{j})\big{)}\)
6: Evaluate \(f(L_{j}^{c},R_{j}^{c})\) using Monte-Carlo simulations
7:for\(j=1:N_{pop}\)do
8:if\(f(L_{j}^{c},R_{j}^{c})>f(L_{j},R_{j})\)then
9: Update population: \((L_{j},R_{j})\leftarrow(L_{j}^{c},R_{j}^{c})\)
10:Output:\((L,R)\) from \(\Pi\) with largest \(f(L,R)\)
```
**Algorithm 1** JRDO: Joint Rate and deg. Dist. optimization
## V Simulation Results
In this section, we demonstrate the performance of the NB-MLC scheme and the performance of the codes designed using the JRDO algorithm. We compare the performance with the MLC scheme of [5] as well as with codes used in [5] and codes designed for the BIAWGN channel [13]. For the JRDO framework, we optimize degree distribution \(L(x)=\sum_{d=2}^{5}L_{d}\) where we set \(L_{1}\) to be zero and a maximum VN degree of 5. For non-JRDO codes, we find and use the rate that results in the largest key rate (for that particular layer of the NB-MLC scheme) by exhaustively iterating over the message length \(m_{i}\). The latency of the NB-MLC scheme is calculated as the sum of the decoding latencies of all the layers in the NB-MLC scheme. We use a code length \(N=2000\) and FFT-based sum-product LDPC decoding (SW version [16]) in our simulations.
In Fig. 3, we study the effect of the MLC bit size \(a\) on
the key rate and latency of the QKD system. Recall that the QKD system has \(2^{q}\) bins per frame implying a total bit size of \(q\). From Fig. 3 left panel, we can see that for all values of \(q\), the key rate is non-monotonic in \(a\) and has a maximum when \(a\) is strictly between \(1\) and \(q\). The reason the key rate is non-monotonic in \(a\) is the following. Increasing the value of \(a\) makes the NB-MLC scheme use NB-LDPC codes from a larger Galois field which are stronger resulting in greater FER performance and hence better key rates per layer. However, due to layering, correct decoding in the earlier layers still contributes to the overall key rate even if decoding failures exist in the later layers. This additive effect (due to Eqn. (2)) improves the overall key rate with more layers (small \(a\)). Due to the above two effects, the overall key rate is non-monotonic. In Fig. 3 middle panel, we plot the latency of the NB-MLC scheme as a function of \(a\) for different values of \(q\). From the plot, we see that the latency becomes significantly large as \(a\) becomes large (close to \(q\)). This trend is because a larger \(a\) implies an NB-LDPC code from a larger Galois field and hence a larger decoding latency. Note that as \(a\) increases, the increase in decoding complexity is sometimes offset by the decrease in the number of decoding layers in the NB-MLC scheme, and thus latency is non-monotonic in \(a\) (also evident in Fig. 3 right panel). In Fig. 3 right panel, we plot the key rate vs. latency achieved due to different values of \(a\) in the NB-MLC scheme. From the figure, it is clear that the best trade-off is obtained for a small value of \(a\) (3 or 4). Increasing \(a\) further results in higher latency at no increase in key rates.
In Fig. 4 left panel, we compare the key rates across different values of binwidths in a QKD system with \(q=6\). We compare the key rates for \(a=1,3\), and \(6\). Similar to Fig. 3, we can again see that across all binwidths, \(a=3\) has a higher key rate compared to \(a=1\) (MLC scheme of [5] with binary LDPC codes) and \(a=6\) (a scheme with complete NB-LDPC codes and no layering). Overall, the NB-MLC scheme with a small value of \(a>1\) results in the best system performance.
In Fig. 4 middle and right panels, we compare the key rates obtained by different code constructions. The diamond marked curves correspond to LDPC codes used in [5]. As per [5], these LDPC codes are randomly constructed such that each VN has a constant degree of 3. Note that there is no limitation on
Fig. 3: Key rate and latency for different \(q\) as the NB-MLC bit size \(a\) is varied. The ET-QKD system has a binwidth of 300ps. Left panel: Key rate vs. \(a\); Middle panel: Latency vs. \(a\); Right panel: Key rate vs. latency where each point on a curve for a particular \(q\) represents a different value of \(a\) (the values of \(a\) are marked on the curves). All curves use LDPC codes mentioned in Section II-3 with \(L(x)=x^{3}\).
the CN degree distribution in [5]. However, the LDPC codes considered in this paper (see Section II-3) have a two-element CN degree distribution. The triangle marked curves correspond to the degree distribution provided in [13, Table I] with a maximum VN degree 5. Note that this degree distribution is optimized for the BIAWGN channel. The circle marked curves correspond to LDPC codes with regular VN degree distribution \(L(x)=3\) (similar to [5]) but with a two-element CN degree distribution. Finally, the square marked curves correspond to degree distributions obtained using the JRDO algorithm. From the figures, we make the following observations. The key rates for the diamond marked curves are worse compared to the circle marked curves. This trend suggests that it is better to use a two-element CN degree distribution (as done in our paper). The key rates for the triangle marked curves (BIAWGN optimized degree distribution) are worse compared to using regular LDPC codes with VN degree 3 (circle marked curves). This trend demonstrates that codes optimized for non-QKD channels do not perform well when used for the QKD channel. Finally, in Fig. 4 middle and right panels, we see that JRDO-LDPC codes (square marked curves) result in the largest key rates which is because the JRDO-LDPC codes are optimized for the QKD channel. In Fig. 4 right panel, we additionally plot the key rates achieved using the techniques of [5] i.e., MLC scheme (\(a=1\)) and VN degree 3 regular LDPC codes and no limitation on CN degree distribution (plus marked curve). We see that our techniques (square marked curve) provide around 40% improvement in key rates compared to [5].
## VI Conclusion
In this paper, we considered the problem of information reconciliation in ET-QKD and proposed a generalization of the multi-level coding (MLC) scheme of [5] called NB-MLC that uses NB-LDPC codes. We showed that the NB-MLC scheme offers flexibility in system design in terms of key rate and latency, and the NB-MLC scheme with a small bit size per layer results in the best trade-off between key rate and latency. Finally, we proposed a framework based on different evolution called JRDO that jointly optimizes the rate and degree distribution for the LDPC codes used in the NB-MLC scheme. JRDO-LDPC codes are optimized for the ET-QKD channel and result in a significant improvement in the key rates compared to LDPC codes used in prior work. Ongoing research is focused on optimizing the edge weight distributions (along with JRDO) to further improve the key rates.
## Acknowledgement
The authors acknowledge the NSF grant QuIC-TAQS no. 2137984 and NSF grant EFRI-ACQUIRE no. 1741707.
|
2303.16262 | Programming hydrogel adhesion with engineered polymer network topology | Hydrogel adhesion that can be easily modulated in magnitude, space, and time
is desirable in many emerging applications ranging from tissue engineering, and
soft robotics, to wearable devices. In synthetic materials, these complex
adhesion behaviors are often achieved individually with mechanisms and
apparatus that are difficult to integrate. Here, we report a universal strategy
to embody multifaceted adhesion programmability in synthetic hydrogels. By
designing the surface network topology of a hydrogel, supramolecular linkages
that result in contrasting adhesion behaviors are formed on the hydrogel
interface. The incorporation of different topological linkages leads to
dynamically tunable adhesion with high-resolution spatial programmability
without alteration of bulk mechanics and chemistry. Further, the association of
linkages enables stable and tunable adhesion kinetics that can be tailored to
suit different applications. We rationalize the physics of chain slippage,
rupture, and diffusion that underpins emergent programmable behaviors. We then
incorporate the strategy into the designs of various devices such as smart
wound patches, fluidic channels, drug-eluting devices, and reconfigurable soft
robotics. Our study presents a simple and robust platform in which adhesion
controllability in multiple aspects can be easily integrated into a single
design of a hydrogel network. | Zhen Yang, Guangyu Bao, Shuaibing Jiang, Xingwei Yang, Ran Huo, Xiang Ni, Luc Mongeau, Rong Long, Jianyu Li | 2023-03-28T19:15:46Z | http://arxiv.org/abs/2303.16262v2 | # Programming hydrogel adhesion with engineered polymer network topology
###### Abstract
Hydrogel adhesion that can be easily modulated in magnitude, space, and time is desirable in many emerging applications ranging from tissue engineering, and soft robotics, to wearable devices. In synthetic materials, these complex adhesion behaviors are often achieved individually with mechanisms and apparatus that are difficult to integrate. Here, we report a universal strategy to embody multifaceted adhesion programmability in synthetic hydrogels. By designing the surface network topology of a hydrogel, supramolecular linkages that result in contrasting adhesion behaviors are formed on the hydrogel interface. The incorporation of different topological linkages leads to dynamically tunable adhesion with high-resolution spatial programmability without alteration of bulk mechanics and chemistry. Further, the association of linkages enables stable and tunable adhesion kinetics that can be tailored to suit different applications. We rationalize the physics of chain slippage, rupture, and diffusion that underpins emergent programmable behaviors. We then incorporate the strategy into the designs of various devices such as smart wound patches, fluidic channels, drug-eluting devices, and reconfigurable soft robotics. Our study presents a simple and robust platform in which adhesion controllability in multiple aspects can be easily integrated into a single design of a hydrogel network.
**Keywords:** controlled adhesion, hydrogel adhesives, tough hydrogels, polymer entanglement
## Introduction
The ability to program hydrogel adhesion has significant implications in engineering, biology, and medicine. The variables of hydrogel adhesion include adhesion energy, spatial distribution, and kinetics. Among them, controlling adhesion energy is needed for bonding reinforcement after placement[1] or for easy detachment without damaging the adherend surface[2, 3]. Controlling the spatial distribution enables independent modulation of adhesion at different locations and can be useful for wound dressings that require strong attachment to healthy tissues while preventing stickiness to fragile and delicate wound beds. While most existing research efforts focused on the adhesion magnitude at the equilibrium stage, controlling adhesion kinetics, i.e., to modulate transient adhesion temporally, allows one to tune the operating time window for adhesive placement and is equally important but less explored. Programming the multifaceted adhesion with high-level control could enable and improve various applications ranging from tissue repair to soft robotics, yet remains extremely challenging.
In nature, marine animals such as _flatworms_ control adhesion to substrates using sophisticated adhesion organs that contain two glands which respectively release adhesive and de-adhesive agents[4]. Such programmable adhesion is difficult to achieve for synthetic adhesives because they require the addition of complex chemistry and apparatus that are potentially difficult to integrate. For instance, adhesion based on covalent bonds is generally strong but difficult to modulate [5] unless introducing specific chemistries [6]. Physical interactions offer more flexibility to modulate adhesion energy, but require specific material properties (e.g., viscoelasticity) or additional apparatus (light, ultrasound, etc.) [7, 8, 9]. In terms of adhesion kinetics, the rate of covalent bonding is fundamentally dependent on the specific chemical reactions involved. Although using different bonds with varying reaction kinetics can in principle enable tunable adhesion kinetics, incorporation of multiple reactions into one system is challenging. Physical interactions often form instantaneously and hence do not offer sufficient tunability in the kinetics for applications that might desire rather slow kinetics[10]. Achieving spatial control of adhesion requires forming (or suppressing) the interactions at selective locations through sophisticated surface patterning, while the outcomes could be compromised by uncontrolled diffusion of chemical reagents. A universal design strategy that inherently allows for robust and multifaceted adhesion programming on diverse surfaces is still missing.
Here we report that engineering surface network topology provides a facile, predictive, and robust methodology to program multifaceted hydrogel adhesion. This approach leverages the principles of polymer entanglement that are applicable to a wide range of materials. The underlying strategy is through two adhesion units constructed by interfacial polymer entanglements of different topologies, referred to as the slip and stitch linkages (Fig 1a). The slip linkage has the topology of a long polymer chain entangled with another crosslinked network (chain to network). Such a linkage can dissociate in a rate-dependent manner so that the adhesion energy can be varied over many folds without background dissipation by varying loading rate (Fig 1c). The association kinetics of the slip linkage dominates over other operating condition-dependent sub-kinetics and is controllable through tuning its governing length scale, enabling a stable kinetic time that can be tuned in a wide range from \(\sim\)50s to \(\sim\)1000s (Fig 1d). In contrast, the stitch linkage has the topology of two crosslinked polymer networks entangled together (network to network), as found in the hydrogel topological adhesion and offers less tunability in terms of adhesion energy and kinetics[7]. Through a simple fabrication technique, we pattern the slip and stitch linkages spatially at the same interface (Fig 1b and 1e). Their contrasting adhesion behaviors enable pre-defined and spatially varying adhesion in a scalable manner. As such, we can embody adhesion programmability in multiple aspects in a hydrogel adhesive through a single design of the network structure, which we refer to as the topologically engineered adhesives (TEA) in this paper. The robust, facile, and predictive strategy for unprecedented control over hydrogel adhesion opens up numerous opportunities in engineering and medicine. We demonstrate the applications of our strategy in a wide range of devices, including wound patches, drug-eluting depots, fluidic channels, and soft actuators.
### Design of the interfacial topological linkages
To robustly program the adhesion between hydrogel adhesives and targeted surfaces, we create a diffusive interface by placing a third species of diffusive polymer, called bridging polymer, to the interface[7, 11]. Formation of the chain-to-network topology of the slip linkage demands the following conditions: (1) the hydrogel network needs to contain dangling chains and (2) a thermodynamic driving force is needed to facilitate the diffusion of bridging polymers into the gel network. Meanwhile, the diffusion needs to be halted once the linkage forms to prevent the over-diffusion of bridging polymers into the bulk gel, which may reduce the number of linkages at the interface.
To meet the first condition: we choose polyacrylamide (PAAm) as a model hydrogel network and polymerize it on mold with low surface tension such as Poly(methyl methacrylate) (PMMA).
Fig. 1: **Engineered network topology and linkages for multifaceted programming of hydrogel adhesion.** (a) Schematics of the stitch linkage (Top) and slip linkage (Bottom) formed between a bridging polymer and networks without and with surface dangling chains. The thickness of the dangling chain layer and the penetration depth of the bridging polymer are denoted as \(h_{\text{dc}}\) and \(h_{\text{pen}}\), respectively. (b) Hydrophilic and hydrophobic molds are used to form a regular network (Top) and a network carrying surface dangling chains (Bottom), respectively. (c) Rate dependence and magnitude of the adhesion energy depend on the interfacial linkage types: stitch linkages with \(h_{\text{pen}}/h_{\text{dc}}\rightarrow\infty\), slip linkages with \(h_{\text{pen}}/h_{\text{dc}}\ll 1\), and their hybrid with \(h_{\text{pen}}/h_{\text{dc}}\approx 1\). (d) The slip linkage offers programmable adhesion kinetics through tuning \(h_{\text{dc}}\) (Top), which is also insensitive to processing conditions such as the thickness of bridging polymer solution \(h_{\text{sol}}\) (Bottom). (e) Spatially controllable adhesion obtained from patterning the topological linkages at the interface.
The hydrophobicity and other associated effects inhibit the free-radical polymerization of the gel in the vicinity of the mold [12, 13, 14]. This results in a surface layer of branched dangling chains with thickness \(h_{\mathrm{dc}}\approx 10\sim 100\mu\)m, "protruding" from the crosslinked bulk network. In contrast, gels polymerized on molds with high surface tension such as glass are not subject to the hydrophobic mold effect, and hence contain crosslinked networks instead of branched dangling chains on their surfaces. The gels with and without engineered surface dangling chains are hereafter referred to as the TEA and regular gels, respectively. To meet the second criterion, stimuli-responsive polymers such as chitosan or gelatin were chosen as bridging polymers. The polarity of the hydrogel network and chitosan chains and the entropy of mixing promote the diffusion of chitosan chains into the hydrogel; meanwhile, the chitosan chains can be triggered to crosslink into a bridging network through a reaction-diffusion process in responding to pH changes, leading to penetration depths \(h_{\mathrm{pen}}\) on the order of tens of microns[11]. The network formed in-situ provides a more efficient way to engage the dangling chains as opposed to a preformed network in which the dangling chains have to diffuse slowly through reptation to form entanglement. Other strategies to form the chain-to-network topology of slip linkage at soft material interfaces can be found in Supplementary note 3.
Additionally, to encode adhesion kinetics and ensure the repeatable formation of slip linkages, the dominating kinetic mechanisms for the association of slip linkage should intrinsically rely on the gel network rather than being sensitive to external processing conditions that are difficult to control such as the thickness of cast solution \(h_{\mathrm{sol}}\). The formation of the slip linkage is associated with the diffusion of the bridging polymers (kinetic time \(t_{\mathrm{d}}\)) and their gelation process (kinetic time \(t_{\mathrm{gel}}\)), which depend on different governing length scales. Specifically, \(t_{\mathrm{d}}\) and \(t_{\mathrm{gel}}\) are associated with the diffusion of bridging polymers and gelling triggers over the thicknesses of the dangling chain layer \(h_{\mathrm{dc}}\) and that of the cast solution \(h_{\mathrm{sol}}\), respectively. Since \(h_{\mathrm{dc}}\) is a well-defined material property while \(h_{\mathrm{sol}}\) is sensitive to various processing conditions[7, 15], a well-defined and controllable adhesion kinetics ensues if \(t_{\mathrm{slip}}\geq t_{\mathrm{gel}}\). A simple scaling analysis allows us to determine a rough criterion to fulfill this requirement: \(h_{\mathrm{dc}}^{2}D_{\mathrm{eff,gel}}/h_{\mathrm{sol}}^{2}D_{\mathrm{eff}}\geq 1\), where \(D_{\mathrm{eff,gel}}\) and \(D_{\mathrm{eff}}\) are the effective diffusion coefficients of gelling triggers and bridging polymers, respectively. Taking \(D_{\mathrm{eff}}\approx 5\cdot 10^{-12}\)m\({}^{2}\)s\({}^{-1}\)(Supplementary note 3 ) and \(D_{\mathrm{eff,gel}}\approx 10^{-11}\)m\({}^{2}\)s\({}^{-1}\)[15] leads to \(h_{\mathrm{dc}}/h_{\mathrm{sol}}\gtrapprox 1\). \(h_{\mathrm{sol}}\) at a hydrogel interface is often in the range of 10\(\sim\)100 \(\mu\)m [7, 15], which means that \(h_{\mathrm{dc}}\) needs to be in the comparable range and is readily satisfied by our fabrication techniques.
We hypothesize that the condition \(h_{\mathrm{dc}}/h_{\mathrm{sol}}\gtrapprox 1\) can lead to well-defined association kinetics of the slip linkage that encodes the overall adhesion kinetics. Once the slip linkage is formed, the long PAAm chains entangle with the crosslinked bridging network. This means that the linkage can dissociate via chain slippage, which is expected to be a thermally activated process that results in rate-sensitive adhesion as generally seen at the cell and elastomeric interfaces, and other bonds[16, 17, 18, 19]. Unlike slip linkage, stitch linkages form when the bridging polymer diffuses into a regular gel and crosslinks into the bridging network, and their failures must involve the breaking of one of the networks, thereby leading to strong and rate-insensitive adhesion[7].
### Structural characterization
Based on the above principles, we fabricate a model TEA using single-network TEA gel made of PAAm and use chitosan as the bridging polymer. To probe the engagement length between the dangling chains and the bridging polymer chains, we used confocal microscopy to visualize how fluorescently labelled chitosan chains penetrate the TEA gel at equilibrium. The fluorescence intensities exponentially decrease from the outermost surface to the bulk of the TEA gels with different crosslinker-to-monomer ratios \(C\) (colored dash lines, Fig 2a). For different \(C\), we measured similar distances where the intensities meet the lower plateau (black dash line, Fig 2a), defining the penetration depth of the bridging polymer \(h_{\mathrm{pen}}\approx 70\mu m\). This value depends on the reaction-diffusion process and thus may vary with the type of bridging polymers. For instance, \(h_{\mathrm{pen}}\) for gelatin is expected to be temperature-dependent.
As directly imaging the dangling chains is challenging, we made a first-order estimation of the thickness \(h_{\mathrm{dc}}\) of the dangling chain layer from the experimentally measured elastic moduli. The TEA gel has a total thickness of \(h\) and is idealized with a tri-layer model (Fig 2b): a layer of a regular network is sandwiched by two layers of branched dangling chains. The elastic modulus of the sandwiched regular network \(E_{\mathrm{reg}}\) can be measured from a regular hydrogel formed at the same conditions except using a hydrophilic mold, given their observed structural similarity[13, 14, 20]. The elastic modulus of the dangling chain layer is assumed to be negligible since it cannot carry any transverse loads. As such, we can estimate \(h_{\mathrm{dc}}\) from the ratio of measured elastic moduli of the TEA and regular gels \(E_{\mathrm{tea}}/E_{\mathrm{reg}}\) in uniaxial tensile tests (Eqn. 1 and Fig S1). The estimations of \(h_{\mathrm{dc}}\) show a decreasing trend with the increasing value of \(C\) (Fig 2c). The trend \(h_{\mathrm{dc}}\sim C^{-1}\) may be attributed to the competition between bulk elasticity of the gel network and interface tension during gelation on hydrophobic mold[12] (Supplementary Note 1), demonstrating a controlled method for fabricating the dangling chain layer of different sizes.
With the measured length scales, we calculate their ratio \(h_{\mathrm{pen}}/h_{\mathrm{dc}}\) to quantify the extent to which the bridging polymers engage the dangling chains, which is expected to govern the formation of different topological linkages at the TEA gel interface (Fig S2d). When \(h_{\mathrm{pen}}/h_{\mathrm{dc}}\ll 1\), the bridging network only engages a part of the dangling chain layer, so that the interface only comprises slip linkage. If \(h_{\mathrm{pen}}/h_{\mathrm{dc}}\approx 1\), a complete engagement ensues which indicates that part of the bridging polymers may diffuse across the dangling chain layer to stitch the underlying network of the TEA gel. In this case, the linkage is expected to behave as the combination of the slip and stitch linkage and is referred to as the hybrid linkage (Fig 1c). Lastly, a regular hydrogel interface that only comprises stitch linkage corresponds to \(h_{\mathrm{pen}}/h_{\mathrm{dc}}\rightarrow\infty\) since \(h_{\mathrm{dc}}\to 0\). Fig 2c shows \(h_{\mathrm{pen}}/h_{\mathrm{dc}}\approx 0.2\) when \(C=0.024\%\) and increases to unity as \(C\) increases to \(0.06\%\) for the TEA gel interface. By tuning \(C\), we can vary the degree of engagement and consequently the formation of different linkages, which will be shown later to modulate the resulting adhesion energy.
## Interfacial topological linkages to program rate-dependent adhesion energy
To test our hypothesis, we first focus on two extremities: the interfaces containing either slip or stitch linkages. To form slip linkage-mediated adhesion, we adhere two TEA gels using chitosan as the bridging polymer with \(h_{\mathrm{pen}}/h_{\mathrm{dc}}\approx 0.2\) (\(C=0.024\%\)), followed by a T-peeling specimen to measure the adhesion energy \(G\) as a function of crack speed \(V_{\mathrm{crack}}\) (Methods and Fig S2a). Fig 2d shows that the slip linkage-mediated adhesion \(G^{1/2}\) varies logarithmically with \(V_{\mathrm{crack}}\), the crack speed. We observed a factor of 25 in the change of \(G\) as \(V_{\mathrm{crack}}\) varies by two decades. Together plotted in Fig 2d is the stitch linkage-mediated adhesion formed between two regular hydrogels for the same \(C\) and chitosan concentration \(c_{\mathrm{chi}}\), showing higher magnitude but much weaker rate-dependence. The contrast between slip and stitch adhesion is the most pronounced at low \(V_{\mathrm{crack}}\) but diminishes at high \(V_{\mathrm{crack}}\). We also observed adhesive failure and mixed adhesive-cohesive failure at the slip and stitch linkage-mediated interfaces, repspectively. Our experiments further confirmed the similar bulk mechanics between the TEA and the regular gels: they both show minimal hysteresis in cyclic loadings and weak rate dependences, indicating near-perfect elasticity (Fig S1(a)-(d)). The data suggest that different interfacial network topologies regulate hydrogel adhesion independent of the bulk properties.
These results motivate us to further analyze the data with a kinetic model proposed by Chaudhary[18]. The model considers the breaking of linkages as thermally activated processes[16, 17, 18, 19], and treats each linkage as a linear spring with stiffness \(k_{\mathrm{i}}\) and an activation energy of dissociation \(E_{\mathrm{i}}\) (i can be slip or stitch). These parameters influence the dissociation rates of the linkages (Fig 2e), and consequently the rate-dependence of the hydrogel adhesion energy. As detailed in Supplementary note 2, the model states the adhesion energy for linkage
**Fig. 2: Design and characterization of topology-engineered adhesive (TEA).** (a) Intensities of florescent chitosan chains diffused in TEA gels. The shaded area represents the standard deviation from 5 measures. The stronger chitosan intensity for higher \(C\) may be due to more chitosan chains trapped by the denser dangling chains on the interface. (b) An idealized model used to estimate the thickness of branched dangling chain layer. (c) Left panel: estimated \(h_{\rm dc}\) and measured \(h_{\rm pen}\) as functions of the crosslinker-to-monomer ratio \(C\). Error bars represent standard deviation. Right panel: relative engagement length \(h_{\rm pen}/h_{\rm dc}\) as a function of \(C\). **Dissimilar topological linkages lead to contrasting adhesion behaviors.** (d) Slip and stitch linkages-mediated \(G^{1/2}\) plotted as functions of \(\ln(V_{\rm crack})\) for \(C=0.024\%\) and \(c_{chi}=2\%\) g/mL. (e) Illustration showing the dissociations of slip and stitch linkages as thermally activated processes, with reaction rate of dissociation \(r_{i}\) (\(i\) can be slip or stitch). Upon a separation force \(F\), the activation energy of the linkage is decreased by \(-Fx\), where \(x\) is the seperation distance. (f) The formation of topological linkages and the resulting adhesion depend on \(h_{\rm pen}/h_{\rm dc}\), which is controlled by \(C\). Slip and hybrid linkages are achieved for TEA gels with \(C=0.024\%\) and \(0.048\%\), respectively. The inset shows the same curves as (d) but for \(C=0.048\%\).
\(i\) relates to the crack speed via \(G^{1/2}\sim\ln V_{\rm crack}\), which agrees perfectly with our experimental data for the slip linkage-mediated adhesion (blue dash lines in Fig 2d). Further, the model shows that the slope of the linear relation scales inversely to \(k_{i}^{1/2}\), while the intercept depends on \(E_{i}\). By fitting this model to our data we were able to determine \(k_{i}\) and \(E_{i}\), which are otherwise difficult to characterize directly. Specifically, we found \(k_{\rm slip}=1.7\times 10^{-7}\) N/m and \(E_{\rm slip}\) = 75 kJ/mole for the slip linkage with \(C=0.024\%\) and \(h_{\rm pen}/h_{\rm dc}\approx 0.2\). It is plausible that the hydrogel dangling chains determine \(k_{\rm slip}\) which is of entropic type, so \(k_{\rm slip}\sim k_{B}T/R^{2}\) with \(k_{B}T\) the energy in temperature and \(R\) the average end-to-end distance of the dangling chains. We estimated that \(R\approx 250\) nm, which is 50 times larger than the mesh size of the underlying network \(\xi\approx 5\) nm (Supplementary note 1). The fitted value of \(E_{\rm slip}\) is larger than the typical activation energy of hydrogen bond (4-50 kJ/mole), suggesting potential synergistic contributions of multiple hydrogen bonds (between chitosan and PAAm) to a single slip linkage. Besides, the model captures the rate-insensitivity of \(G_{\rm stitch}^{1/2}\) of the stitch linkage with \(k_{\rm stitch}\geq\sim 300k_{\rm slip}\) and \(E_{\rm stitch}\approx 185\) kJ/mol (red dash line, Fig 2b). The much larger \(k_{\rm stitch}\) may be due to the full extension of the entangled networks prior to network rupture, driving the polymer chains far beyond the entropic limit. The estimation of \(E_{\rm stitch}\) is in the range of the bond energy of the C-C bond (350 kJ/mole) [21] and the theoretically estimated energy stored in each bond prior to rupture using molecular parameters (60 kJ/mole)[22], in line with the assumption that the stitched networks must rupture during separation. This model reveals quantitatively that the slip linkages exhibit much lower stiffness and dissociation energy compared to those of stitch linkages.
Additionally, the model predicts that the hybrid linkage formed when \(h_{\rm pen}/h_{\rm dc}\) is close to 1, would impart tunable dependence on loading rate through the relation \(G_{\rm hybrid}=G_{\rm slip}+G_{\rm stitch}\). In this case, \(G_{\rm hybrid}^{1/2}\) is predicted to be a nonlinear function of \(\ln V_{\rm crack}\) with a finite and constant value of \(G_{\rm stitch}\) (Fig 1c), indicating that the hybrid linkage behaves as slip or stitch linkage respectively in different ranges of loading rates. To test the hypothesis, we prepared TEA gels with \(h_{\rm pen}/h_{\rm dc}\approx 0.6\) (\(C=0.048\%\), Fig 2c), and the resulting \(G^{1/2}\) shows a nonlinear trend as expected: at high crack speed, the data collapses onto a master curve with that with \(h_{\rm pen}/h_{\rm dc}\approx 0.2\) (\(C=0.024\%\)), following \(G_{\rm slip}^{1/2}\sim\ln V_{\rm crack}\) (Fig 2f). Note that in this regime, the slip linkage-mediated adhesion is higher than that mediated by stitch linkage for the same \(C\) between two regular gels (Fig 2f inset). Below \(V_{\rm crack}\)=0.5mm/s, the data converges to a plateau corresponding to rate-independent adhesion energy of \(\sim 50\) Jm\({}^{-2}\). This baseline adhesion is also close to the value of \(G_{\rm stitch}\) for the same \(C\) ( 60 Jm\({}^{-2}\), Fig 2f inset), confirming the coexistence of stitch- and slip-linkages on the interface. Fixing \(G_{\rm stitch}=50\) Jm\({}^{-2}\), our model captures the experimentally measured \(G_{\rm hybrid}^{1/2}\) with fitting parameters \(k_{\rm slip}=1\times 10^{-7}\) N/m and \(E_{\rm slip}=71\) kJ/mole (Fig 2f, cyan dot line), closed to the values of the sample with \(h_{\rm pen}/h_{\rm dc}\approx 0.2\) (\(C=0.024\%\)). The ability to control the formation of linkage by tuning the entanglement length between TEA gel and bridging polymers offers a high level of adhesion programmability: not only can we predictably tune the adhesion energy by varying loading rates, but also program rate dependence in different ranges of loading rate. The finite adhesion energy at low loading rates provided by the hybrid linkage can effectively prevent the adhesive from failing at static load to ensure good durability.
## Programming adhesion kinetics
In addition to the equilibrium state of adhesion, we next demonstrate that the association of the topological linkages regulates the transient adhesion, which can be exploited to encode adhesion kinetics (Fig 1d). When the bridging polymer solution is placed between the hydrogel and a permeable substrate, they diffuse into the two networks while crosslinking into a bridging network in response to a trigger. The reaction-diffusion process comprises two concurrent subprocesses: the gelation and the diffusion of the bridging polymer with their respective kinetic
time \(t_{\rm gel}\) and \(t_{\rm d}\), respectively. We assume that the overall adhesion kinetics is governed by the slower sub-kinetics: \(t\equiv\max\{t_{\rm d},t_{\rm gel}\}\).
When using chitosan as the bridging polymer, the gelation process is due to the pH change in the solution, which is associated with the diffusion of gelling trigger (protons) away from the cast adhesive solution. The thickness of the solution \(h_{\rm sol}\) sets the critical diffusion length, and thus its kinetics time follows \(t_{\rm gel}\sim h_{\rm sol}^{2}/D_{\rm eff,gel}\)[15] where \(D_{\rm eff,gel}\) is the effective diffusion coefficient of the gelling trigger. However, \(h_{\rm sol}\) is sensitive to the applied compression or wettability of the interface, yielding the gelation kinetics uncertain in practice without carefully controlled \(h_{\rm sol}\).
Figure 3: **Programmable adhesion kinetics of TEA.**(a) Illustrations showing that the total adhesion kinetics comprises two sub-kinetic processes: diffusion and gelation. (b) Dimensionless adhesion between two regular hydrogels \(G/G_{\rm eq}\) as a function of waiting time for different cast solution thicknesses \(h_{\rm sol}\). The inset shows \(t_{1/2}\) as a function of \(h_{\rm sol}\). Error bars represent 95% confidence intervals from fitting the exponential function. (c) Similar curves as (b) measured at the interface between two TEA gels with \(h_{\rm dc}\approx 120\mu m\). (d) Adhesion kinetics of TEA interfaces with fixed \(h_{\rm sol}\) (\(50\mu m\)) and varying values of \(h_{\rm dc}\) (\(h_{\rm dc}\approx 370,120,70\mu m\), achieved using \(C=0.024\%\), \(0.048\%\), and \(0.06\%\), respectively). The inset shows \(t_{1/2}\) as a function of \(h_{\rm dc}\). The y error bars represent a 95% confidence interval from fitting an exponential function while the x error bars represent the standard deviation from 3 measures.
Meanwhile, the diffusion process of bridging polymers depends on the value of \(h_{\rm dc}\), and hence the type of formed linkages. For a regular gel, \(h_{\rm dc}\to 0\), the interface is dominated by stitch linkages which only require the bridging polymer to diffuse by one mesh size of the gel network, thus taking negligible kinetic time \(t_{\rm d}\approx 0\) s [15]. Thus, one can expect the adhesion kinetics of the regular hydrogel interface to be limited by \(t_{\rm gel}\), which is difficult to control in practice due to the variable \(h_{\rm sol}\). We hypothesize that incorporation of slip or hybrid linkages with dangling chain layers of finite values of \(h_{\rm dc}\) can resolve the issue. In this case, the formation of the linkages requires the bridging polymers to diffuse through the dangling chains layer (Fig 3a). \(h_{\rm dc}\) thus sets the characteristic diffusion length such that \(t_{\rm d}\sim h_{\rm dc}^{2}/D_{\rm eff}\). The prolonged diffusion process can bypass the uncertain gelation process to govern the overall adhesion kinetics. Importantly, since \(h_{\rm dc}\) is a material property, it can render the overall adhesion kinetics insensitive to processing or environmental conditions.
To test the hypothesis, we characterized the adhesion kinetics with different values of \(h_{\rm sol}\) (50 and 120 \(\mu\)m) controlled by nylon meshes of different thicknesses[15] (Methods and Fig S3). We define the adhesion kinetics using the half time \(t_{1/2}\) when \(G/G_{\rm eq}\) reaches 1/2, where \(G_{\rm eq}\) is the adhesion energy in equilibrium. For the regular gel interface, we observe a strong \(h_{\rm sol}\)-dependent adhesion kinetics, and the associated kinetic time follows \(t_{1/2}\sim h_{\rm sol}^{2}\) (Fig 3b and inset). On the contrary, we observe that the adhesion kinetics of the TEA gel interface with \(h_{\rm dc}\approx 120\mu m\) (\(C=0.048\%\)) is insensitive to the value of \(h_{\rm sol}\) (Fig 3c and inset). Our point is further strengthed by applying an initial compression (15% strain) to the TEA gel interface (\(h_{\rm dc}\approx 370\mu m\)) without controlling \(h_{\rm sol}\), which yields the same adhesion kinetics as the TEA gel interface with controlled \(h_{\rm sol}\) (Fig S3c). Thus, incorporation of the engineered dangling chain layer leads to adhesion kinetics insensitive to processing conditions, validating our hypothesis.
Importantly, not only is the TEA kinetics insensitive to processing conditions but also controllable through changing \(h_{\rm dc}\). Fixing \(h_{\rm sol}\), we observed strong \(h_{\rm dc}\)-dependent adhesion kinetics of the TEA gel interface: the kinetics accelerates as \(h_{\rm dc}\) decreases, suggesting shorter distance that the bridging polymers need to diffuse across to form hybrid or slip linkages (Fig 3d). The half time follows \(t_{1/2}\sim h_{\rm dc}^{2}\) at \(h_{\rm dc}\approx 70\) and 120 \(\mu\)m but deviates from the scaling relation at \(h_{\rm dc}\approx 370\)\(\mu\)m (Fig 3d inset). In the last case, the kinetics time is presumably bounded by the total diffusion-reaction time since \(h_{\rm pen}/h_{\rm dc}\ll 1\), indicating that the underlying crosslinked network of the TEA gel is beyond the reach of bridging polymers. The pre-programmable TEA kinetics can be tailored to suit different applications. For instance, a small \(h_{\rm dc}\) can be used with compression to achieve fast kinetics for hemostatic applications[23], while a large \(h_{\rm dc}\) provides a sufficient time window for adhesive placement.
## Universal applicability
The design and fabrication of TEA are universally applicable to a wide range of material systems (different bridging polymers, targeted substrates, and TEA networks) (Fig 4a). Slip linkages formed on the gel-bridging network interface could be coupled with other interactions such as slip, stitch linkages or covalent bonds [24] that the bridging network can interact with the targeted substrates. For instance, the triggered crosslinking and the abundant amino groups of chitosan provide numerous options to interact with different substrates through covalent or physical interactions[25]. Based on the principle, slip-slip, slip-stitch, and slip-bond linkages were achieved between two TEA gels, between a TEA and a regular gel, and between a TEA gel and a VHB elastomer, respectively (Fig 4b, Table 1). Remarkably, our data reasonably collapse for the different mechanisms to engage different targeted substrates (Fig 4c), suggesting that the overall adhesion behavior is dictated by the slip linkages while depending less on the types of interactions between the bridging network and targeted substrates. These results support the robustness of the adhesion programming through the TEA strategy.
Besides chitosan, we examine another bridging polymer gelatin, which was prepared as polymer solution at \(37^{\circ}C\) and then applied to the interface between two TEA gels for \(C=0.024\%\) at room temperature. Similar to chitosan, gelatin diffused into the gel and was crosslinked into a
bridging network in responding to a temperature drop to form slip linkages with the TEA dangling chains. Our data reveals an identical trend between the data obtained using gelatin and chitosan as bridging polymers (Fig 4d), highlighting the dominating role of polymer topology rather than material chemistry in the formation of slip linkages.
We then explore using double-network (DN) hydrogel as the TEA network. Compared to single-network (SN) hydrogels, the DN hydrogels exhibit much higher fracture toughness and adhesion[5, 11, 26, 27] due to background dissipation. We tested PAAm-alginate and PAAm-chitosan hydrogels as representative materials. In the two types of DN gels, alginate and chitosan are physically crosslinked macromolecules and do not covalently interfere with the PAAm network, we expect that the hydrophobic mold could produce surface dangling chains in the PAAm network within the DN gels. We confirmed the presence of the dangling chain layer in the surface of a PAAm-alginate hydrogel polymerized on hydrophobic substrate by EDTA treatment to remove calcium-alginate bonds followed by Atomic Force Microscopy (AFM) tests (Fig S5a, b). We then examined the adhesion of TEA and regular DN gels polymerized on PMMA and glass
Figure 4: **Universal applicability of the TEA strategy.** (a) Schematic showing that the main constitutes of a TEA interface can be made of a variety of materials. (b) When the adhesive network is made of SN PAAm TEA gel, interfacial topological linkages can be engineered to interact with different targeted substrates. The slip-slip, slip-stitch, and slip-bond linkages are created on targeted substrates TEA gel, regular gel, and VHB elastomer, respectively. (c) Different topological linkages lead to a similar trend of \(G^{1/2}\) as functions \(V_{\text{crack}}\). The SN TEA gel matrix and the chitosan solution are prepared with \(C=0.024\%\) and \(c_{\text{chi}}=2\%\) g/ml. (d) Slip-slip mediated adhesion between two TEA gels using gelatin and chitosan as bridging polymers. \(c_{\text{chi}}=c_{\text{gelatin}}=2\%\) g/ml (d) Adhesion of TEA and regular DN gels on porcine skin at relatively low \(V_{\text{crack}}\).
molds on porcine skin (for a systematic study on different gelling molds, see Fig S5c). We use chitosan as the bridging polymer and EDC/NHS reagent to form covalent bonds between chitosan and tissue surfaces [11]. Since the energy dissipation in the DN gels is coupled to the interfacial adhesion energy, we hypothesized that the slip linkage elicits negligible bulk energy dissipation at low \(V_{\text{crack}}\), thereby resulting in weak adhesion. Here, we only focused on the adhesion behavior of the DN TEA at low \(V_{\text{crack}}\), since its rate-sensitive adhesion is presumably coupled with the rate-dependent bulk dissipation and is not further pursued [7]. Both PAAm-alginate and PAAm-chitosan gels show slip linkage-mediated adhesion 10 times lower than stitch-mediated adhesion at \(V_{\text{crack}}\)=0.25mm/s (Fig 4e), demonstrating that our methodology is applicable to both SN and DN hydrogels as long as the topology of one of the networks can be engineered.
## Programming spatial adhesion
The contrast between slip and stitch linkages allows us to program the adhesion spatially. To do so, we patterned a mold substrate with hydrophilic (Glass) and hydrophobic regions (PTFE films thickness \(\sim\)0.1mm), followed by polymerizing a TEA gel on the patterned mold. Given the predefined geometries (circle, triangle) of the hydrophobic domains, we can design the dangling chain region where weak adhesion \(G_{\text{slip}}\) is formed at low loading rates; meanwhile, strong adhesion \(G_{\text{stitch}}\) is formed in other areas to sustain tension or twisting applied to the interface without interface debonding. Fig 5a and S5e show that by shaping the dangling chain region, we can achieve weak adhesion region of complex shapes between a TEA gel and a regular gel by slowly injecting liquid dye into the weak interface. To further characterize the resolution of the spatially programmable adhesion, we made a series of circular islands of nominal radii \(r_{\text{nominal}}\) in which slip linkages are formed. By slowly injecting the liquid dye, we visualized and measured their radii \(r_{\text{measure}}\) of the weak adhesion region using a digital camera (Fig. 5b and S5c). The excellent agreement between the nominal and measured radii suggests high spatial resolution \(\sim\)0.1 mm achieved with a manual procedure.
As the slip and stitch linkages show different sensitivities to loading rate, we expect rate-dependent spatially programmable adhesion, characterized by the adhesion energy contrast \(G_{\text{slip}}/G_{\text{stitch}}\). Fig 5c shows that \(G_{\text{slip}}/G_{\text{stitch}}\) predicted by the parameterized model (Supplementary note 2) approaches unity at high \(V_{\text{crack}}\) and decreases towards zero at low \(V_{\text{crack}}\). The prediction is supported by our experimental observations: A TEA gel with the designed dangling chain region shows large adhesion contrast to a regular gel at low \(V_{\text{crack}}\) while appearing to be uniformly adhesive at relatively larger \(V_{\text{crack}}\) (Fig. 5d). The rate-dependent spatially-programmable adhesion can potentially enable applications which desire tunable adhesion contrast in different regions under different loading rates. Additionally, not only we can achieve reduced adhesion (\(G_{\text{slip}}/G_{\text{stitch}}<1\)) but also enhanced adhesion (\(G_{\text{hybrid}}/G_{\text{stitch}}>1\)) in the region with slip linkages by leveraging the hybrid linkage at large \(V_{\text{crack}}\) (Fig S4). In this case, the slip linkage acts as a toughening mechanism that synergistically contributes to the adhesion unit with the stitch linkage. Moreover, the one-step fabrication allows spatially selective adhesion to be assembled within a monolithic material, which otherwise requires assembling different materials at the interface.
Figure 5: **Spatial programming and soft devices enabled with TEA** (a) TEA strategy leads to spatially programmable and deformable adhesion capable of tracing complex shapes between a PAAm-alginate TEA DN gel and a regular DN gel. Scale bar: 1cm. (b) Spatial resolution of the spatially programmable adhesion. (c) Predicated \(G_{\text{slip}}/G_{\text{stitch}}\) achieved using the parameterized model (Eqn.14) using the data of \(G_{\text{slip}}\) and \(G_{\text{stitch}}\) in Fig 2d. (d) Experimental demonstration of the rate-dependent \(G_{\text{slip}}/G_{\text{stitch}}\) between a TEA gel with a circular-shaped dangling chain region and a regular gel. Scale bar: 1cm. (e) Wound patches made of TEA and regular PAAm-alginate DN gel adhered to wounds on rat skin (top, scale bar: 8mm) and porcine stomach (bot, scale bar: 12mm). (f) A drug-eluding device enabled by injecting drug into the weakly-adhered interface between a TEA gel and a regular gel. Grid size of the inset: 10mm. (g) Deformable hydrogel-based fluidic channels created by adhering a PAAm-alginate TEA DN gel to a regular DN gel. Scale bars: 2cm. (Bottom) A PAAm-alginate TEA DN gel with designed adhesion selectivity forms a fluid channel on the surface of porcine skin. Scale bar: 2.5cm. (h) Reconfigurable soft actuators. (Top) the fabrication process of the actuator units with connection surfaces composed of dangling chains. (Bottom) two modes of actuation. The initial and the actuated stages are indicated by the white and green dash lines, respectively. Scale bars: 2cm.
### TEA-based devices
The programmable adhesion of TEA enables various applications such as wound patches, drug depots, fluidic channels, and soft actuators. For the application of wound patches, TEA allows one to program weak adhesion to wound beds while maintaining strong adhesion to the surrounding tissue. As such, the patch could protect the wound without impairing tissue regeneration and wound closure. Using the one-step fabrication process (Fig 1e and S5d), we prepared such a TEA gel with its surface composed of a circular region of dangling chains and the surrounding region of crosslinked network. The dangling chain region forms slip linkages which attach weakly to the wound site upon slow removal to minimize the damage to the wound. Meanwhile, the stitch linkages attach strongly to the surrounding healthy tissue to maintain the stickiness of the patch. In contrast, a regular hydrogel exerts strong and uniform adhesion to both wounded and healthy tissues, which ruptures the wound bed upon removal (Fig 5e).
Besides, the creation of a weak adhesion region between two hydrogels could serve as a drug depot. Upon slow injection, mock drug solution filled up the weak interface. Further injection created a bulge of hydrogel to accommodate a high amount of drug, which can be continuously released through the hydrogel network when the whole device is immersed in an aqueous environment (Fig. 5f Top). Our data show that the initial amount of drug injected into the depot affects the amount of release over time but the relative kinetics of release remains similar (Fig. 5f Bottom). As well, we can create a drug depot above a wound site, where drugs can be directly released into wounded tissue. In contrast, the strong adhesion of a regular hydrogel prevents the injection of drug solution to interface (Fig. S5f).
We then demonstrate hydrogel-based fluidic devices assembled with TEA. A PAAm-alginate TEA DN gel with a rectangular-shaped dangling chain region forms a partially weak interface with a regular DN gel, which subsequently becomes a fluidic channel upon slow injection of liquids. The resulting device is highly deformable while no liquid leakage is observed (Fig 5g Top). The one-step fabrication technique provides a simple approach to fabricate hydrogel fluidic channels compared with conventional methods that typically involve multiple molding steps [28, 29]. In addition, the spatially programmable adhesion is applicable to varying surfaces as it requires no patterning of the targeted substrate. As such, we can form such a fluid channel directly on tissue surfaces such as porcine skin (Fig 5g Bottom). This feature could benefit medical devices that contact with tissue surfaces for sustained drug release [30], or in-_vitro_ organ-on-chip models to study cellular behaviors [31].
Lastly, we show that the TEA made of SN PAAm hydrogels can be used to construct reconfigurable soft actuators, featuring minimal bulk dissipation for efficient actuation and dynamic adhesion for reversible attachment (Fig 5h). Such actuators are formed with hydrogel units that contain surface dangling chains on each face and are connected to each other with the aid of bridging polymer (Methods). The slip linkage-mediated adhesion between the units is strong enough to sustain actuation, and yet can be separated easily and slowly with a small force. The separated units can then be reconnected upon reapplying the bridging polymer solution to the interface so that one can modify configurations of assembly for different actuation. Our data shows that the slip linkage-mediated adhesion increases and reaches a plateau after cycles of detachment and reattachment (Fig S2f). This property can be partially attributed to the fact that the dissociation of the slip linkage does not rupture the adherend networks, so the slip-mediated TEA interface is inherently subjected to minimal damage compared with those bonded by stitch linkages or covalent bonds (Fig S2g).
## Conclusion
In summary, we have demonstrated that designing the interfacial network topologies of hydrogels provides a facile and robust approach to program adhesion in multiple aspects including magnitude, space, and kinetics. Our approach can be potentially extended to different length scales using proper manufacturing processes. For instance, spatially programmable adhesion
with a spatial resolution on the micro-scale can be achieved with microfabrication of the hydrogel network topology [32, 33], while that on the metre scale is expected to be achieved using gelling molds of the same size for applications such as camouflaging skin [34]. Broadly, our methodology falls into the emerging paradigm of material intelligence, as the adhesion programming is directly encoded in the hydrogel network as material properties, similar to other properties such as elastic modulus. The implementation of adhesion control requires no external apparatus, making the methodology extremely facile, robust, and scalable. We hope that the design of TEA can spark interest in controlling hydrogel adhesion by designing their network topologies, opening the door to a new design space for intelligent materials/structures through programmable adhesion.
|
2307.04391 | Vehicle Detection in 6G Systems with OTFS Modulation | The recently introduced orthogonal time frequency space modulation (OTFSM) is
more robust to large narrow-band Doppler frequency shift than the orthogonal
frequency division multiplexing (OFDM), used in the 5G standard. In this paper
it is shown how the elecommunication OTFSM-based signal with random padding can
be used with success in the 6G standard for detection of high-speed vehicles.
Two approaches for detecting targets during the random padded OTFS based
transmission are compared in the paper | Pavel Karpovich, Tomasz P. Zielinski | 2023-07-10T07:54:59Z | http://arxiv.org/abs/2307.04391v1 | # Vehicle Detection in 6G Systems with OTFS Modulation
###### Abstract
The recently introduced orthogonal time frequency space modulation (OTFSM) is more robust to large narrow-band Doppler frequency shift than the orthogonal frequency division multiplexing (OFDM), used in the 5G standard. In this paper it is shown how the telecommunication OTFSM-based signal with random padding can be used with success in the 6G standard for detection of high-speed vehicles. Two approaches for detecting targets during the random padded OTFS based transmission are compared in the paper.
5G, 6G, OFDM, OTFSM, radar. Viehicle Detection in 6G Systems with OTFS Modulation
## 1 Introduction
In last few years, the scientific community attention has been focused on the discussion of next generation 6G communication. There are a lot of publications about what applications will drive the 6G network and what technologies should be included in the 6G standard to satisfy their requirements [1][2]. Among large number of proposals, there are some that are most common, such as a terahertz wave and an integrated sensing and communication (ISAC) [3][4]. This paper addresses a problem of adding a radar functionality to the communication systems of the future which will use higher frequency carriers and support high-mobility users.
The usage of terahertz band is challenging. Even relatively slow objects could generate very high Doppler frequency shifts. The strong Doppler effect limits the usage of the orthogonal frequency division multiplexing (OFDM) waveform which is at present de-facto a standard waveform in telecommunication systems (e.g. DVB-T2, Wi-Fi, LTE, 5G [5]). The OFDM is based on assumptions that linear convolution of the signal and the channel impulse response can be replaced by circular convolution, and that the channel impulse response is time-invariant or almost time-invariant. This allows to do a very fast and simple channel impulse response estimation. In case of the strong Doppler environment the assumption about constant channel impulse response is no longer valid since any channel coefficient can rotate in complex plane all the time due to the Doppler effect. Using OFDM in such conditions leads to errors in channel estimation and equalization, and eventually to inter-carrier-interference (ICI) and subsequently errors in bit detection.
Increasing sub-carrier spacing (SCS) in OFDM helps to deal with the strong Doppler frequency shift. However, this operation will increase also the OFDM cyclic prefix overhead and reduce transmission efficiency [5]. In order to eliminate the mentioned above disadvantage of the OFDM, the orthogonal time frequency and space (OTFS) modulation was recently introduced in [6]. Due to its unique features it is seriously treated as one of possible 6G waveforms [7].
In this article simulation results for an ISAC system using the OTFS waveform are shown. We will start with the OTFS waveform description, present the delay-Doppler domain used in OTFS and discuss different pilot configurations exploited in it. Next, we will introduce the ISAC system using the OTFS waveform. Finally, in experimental part, we will show results from simulation of a radar part of the discussed RP-OTFS-based ISAC system.
In work [8] results from simulation of the communication part of the RP-OTFS transmission system were presented while this paper addresses simulation of the radar part of the system only. Practical verification of the general RP-OTFS based transmission and sensing concept was already presented in [9].
## 2 Orthogonal Time Frequency and Space
The concept of the OTFS is shown in the figure 1[8][9]. In comparison to OFDM, the OTFS is a two-dimensional modulation technique. In case of OTFS the modulation process looks as follows. At the beginning modulated IQ/QAM symbols are put into elements of the matrix \(\mathbf{A}\) in figure 1, i.e. on the grid in a delay-Doppler (DD) domain. Then, the inverse Zak transform (inverse Fourier transform over the Doppler axis) [10] is used to transform (demodulate) data from the DD to a fast time - slow time (TT) domain. Finally, the obtained samples are reshaped from a matrix into a vector. The DD grid usage for data modulation makes the OTFS waveform attractive for ISAC since it is "native" domain for radars.
Figure 1: The OTFS concept |
2310.01626 | Model Explanation via Support Graphs | In this note, we introduce the notion of support graph to define explanations
for any model of a logic program. An explanation is an acyclic support graph
that, for each true atom in the model, induces a proof in terms of program
rules represented by labels. A classical model may have zero, one or several
explanations: when it has at least one, it is called a justified model. We
prove that all stable models are justified whereas, in general, the opposite
does not hold, at least for disjunctive programs. We also provide a
meta-programming encoding in Answer Set Programming that generates the
explanations for a given stable model of some program. We prove that the
encoding is sound and complete, that is, there is a one-to-one correspondence
between each answer set of the encoding and each explanation for the original
stable model. | Pedro Cabalar, Brais Muñiz | 2023-10-02T20:40:26Z | http://arxiv.org/abs/2310.01626v1 | # Model Explanation via Support Graphs
###### Abstract
In this note, we introduce the notion of support graph to define explanations for any model of a logic program. An explanation is an acyclic support graph that, for each true atom in the model, induces a proof in terms of program rules represented by labels. A classical model may have zero, one or several explanations: when it has at least one, it is called a justified model. We prove that all stable models are justified whereas, in general, the opposite does not hold, at least for disjunctive programs. We also provide a meta-programming encoding in Answer Set Programming that generates the explanations for a given stable model of some program. We prove that the encoding is sound and complete, that is, there is a one-to-one correspondence between each answer set of the encoding and each explanation for the original stable model.
Answer Set Programming, Explanations, Supported Models, Justified Models 10.1017/xxxxx
## 1 Introduction
In the past few years, Artificial Intelligence (AI) systems have made great advancements, generally at the cost of increasing their scale and complexity. Although symbolic AI approaches have the advantage of being verifiable, the number and size of possible justifications generated to explain a given result may easily exceed the capacity of human comprehension. Consider, for instance, the case of Answer Set Programming (ASP) Brewka et al. (2011), a successful logic programming paradigm for practical Knowledge Representation and problem solving. Even for a positive program, whose answer set is unique, the number of proofs for an atom we can form using _modus ponens_ can be exponential. It makes sense, then, to generate explanations through the typical ASP problem solving orientation. Namely, we may consider each explanation _individually_ as one solution to the "explainability problem" (that is, explaining a model) and let the user decide to generate one, several or all of them, or perhaps to impose additional preference conditions as done with optimisation problems in ASP.
In this technical note, we describe a formal characterisation of explanations in terms of graphs constructed with atoms and program rule labels. Under this framework, models may be _justified_, meaning that they have one or more _explanations_, or _unjustified_ otherwise. We prove that all stable models are justified whereas, in general, the opposite does not hold, at least for disjunctive programs. We also provide an ASP encoding to generate
the explanations of a given answer set of some original program, proving the soundness and completeness of this encoding.
The rest of this note is structured as follows. Section 2 contains the formal definitions for explanations and their properties with respect to stable models. Section 3 describes the ASP encoding and proves its soundness and completeness. Section 4 briefly comments on related work and, finally, Section 5 concludes the paper.
## 2 Explanations as Support Graphs
We start from a finite1 signature \(At\), a non-empty set of propositional atoms. A _(labelled) rule_ is an implication of the form:
Footnote 1: We leave the study of infinite signatures for future work. This will imply explanations of infinite size, but each one should contain a finite proof for each atom.
\[\ell:p_{1}\vee\cdots\lor p_{m}\gets q_{1}\wedge\cdots\wedge q_{n}\wedge \neg s_{1}\wedge\cdots\wedge\neg s_{j}\wedge\neg\neg t_{1}\wedge\cdots\wedge \neg\neg t_{k} \tag{1}\]
Given a rule \(r\) like (1), we denote its label as \(\mathit{Lb}(r)\stackrel{{\mathrm{df}}}{{=}}\ell\). We also call the disjunction in the consequent \(p_{1}\vee\cdots\lor p_{m}\) the _head_ of \(r\), written \(\mathit{Head}(r)\), and denote the set of head atoms as \(H(r)\stackrel{{\mathrm{df}}}{{=}}\{p_{1},\ldots,p_{m}\}\); the conjunction in the antecedent is called the _body_ of \(r\) and denoted as \(\mathit{Body}(r)\). We also define the positive and negative parts of the body respectively as the conjunctions \(\mathit{Body}^{+}(r)\stackrel{{\mathrm{df}}}{{=}}q_{1}\wedge \cdots\wedge q_{n}\) and \(\mathit{Body}^{-}(r)\stackrel{{\mathrm{df}}}{{=}}\neg s_{1} \wedge\cdots\wedge\neg s_{j}\wedge\neg\neg t_{1}\wedge\cdots\wedge\neg\neg t _{k}\). The atoms in the positive body are represented as \(\mathit{B}^{+}(r)\stackrel{{\mathrm{df}}}{{=}}\{q_{1},\ldots,q_{n}\}\). As usual, an empty disjunction (resp. conjunction) stands for \(\bot\) (resp. \(\top\)). A rule \(r\) with empty head \(H(r)=\emptyset\) is called a _constraint_. On the other hand, when \(H(r)=\{p\}\) is a singleton, \(\mathit{B}^{+}(r)=\emptyset\) and \(\mathit{Body}^{-}(r)=\top\) the rule has the form \(\ell:p\leftarrow\top\) and is said to be a _fact_, simply written as \(\ell:p\). The use of double negation in the body allows representing elementary choice rules. For instance, we will sometimes use the abbreviation \(\ell:\{p\}\gets B\) to stand for \(\ell:p\gets B\wedge\neg\neg p\). A _(labelled) logic program_\(P\) is a set of labelled rules where no label is repeated. Note that \(P\) may still contain two rules \(r,r^{\prime}\) with same body and head \(\mathit{Body}(r)=\mathit{Body}(r^{\prime})\) and \(H(r)=H(r^{\prime})\), but different labels \(\mathit{Lb}(r)\neq\mathit{Lb}(r^{\prime})\). A program \(P\) is _positive_ if \(\mathit{Body}^{-}(r)=\top\) for all rules \(r\in P\). A program \(P\) is _non-disjunctive_ if \(|H(r)|\leq 1\) for every rule \(r\in P\). Finally, \(P\) is _Horn_ if it is both positive and non-disjunctive: note that this may include (positive) constraints \(\bot\gets B\).
A propositional interpretation \(I\) is any subset of atoms \(I\subseteq At\). We say that a propositional interpretation is a _model_ of a labelled program \(P\) if \(I\models\mathit{Body}(r)\rightarrow\mathit{Head}(r)\) in classical logic, for every rule \(r\in P\). The _reduct_ of a labelled program \(P\) with respect to \(I\), written \(P^{I}\), is a simple extension of the standard reduct by Gelfond and Lifschitz (1988) that collects now the _labelled_ positive rules:
\[P^{I}\stackrel{{\mathrm{df}}}{{=}}\{\mathit{Lb}(r):\mathit{ Head}(r)\leftarrow\mathit{Body}^{+}(r)\ \ |\ \ r\in P,\ I\models\mathit{Body}^{-}(r)\ \}\]
As usual, an interpretation \(I\) is a _stable model_ (or _answer set_) of a program \(P\) if \(I\) is a minimal model of \(P^{I}\). Note that, for the definition of stable models, the rule labels are irrelevant. We write \(\mathit{SM}(P)\) to stand for the set of stable models of \(P\).
We define the rules of a program \(P\) that _support_ an atom \(p\) under interpretation \(I\) as \(\mathit{SUP}(P,I,p)\stackrel{{\mathrm{df}}}{{=}}\{r\in P\ |\ p\in H(r),I\models\mathit{Body}(r)\}\) that is, rules with \(p\) in the head whose
body is true w.r.t. \(I\). The next proposition proves that, given \(I\), the rules that support \(p\) in the reduct \(P^{I}\) are precisely the positive parts of the rules that support \(p\) in \(P\).
Proposition 1: For any model \(I\models P\) of a program \(P\) and any atom \(p\in I\): \(\mathit{SUP}(P^{I},I,p)=\mathit{SUP}(P,I,p)^{I}\).
Proof: We prove first \(\supseteq\): suppose \(r\in\mathit{SUP}(P,I,p)\) and let us call \(r^{\prime}=\mathit{Lb}(r):\mathit{Head}(r)\leftarrow\mathit{Body}^{+}(r)\). Then, by definition, \(I\models\mathit{Body}(r)\) and, in particular, \(I\models\mathit{Body}^{-}(r)\), so we conclude \(r^{\prime}\in P^{I}\). To see that \(r^{\prime}\in\mathit{SUP}(P^{I},I,p)\), note that \(I\models\mathit{Body}(r)\) implies \(I\models\mathit{Body}^{+}(r)=\mathit{Body}(r^{\prime})\).
For the \(\subseteq\) direction, take any \(r^{\prime}\in\mathit{SUP}(P^{I},I,p)\). By definition of reduct, we know that \(r^{\prime}\) is a positive rule and that there exists some \(r\in P\) where \(\mathit{Lb}(r)=\mathit{Lb}(r^{\prime})\), \(H(r)=\mathit{H}(r^{\prime})\), \(B^{+}(r)=B^{+}(r^{\prime})\) and \(I\models\mathit{Body}^{-}(r)\). Consider any rule \(r\) satisfying that condition (we could have more than one): we will prove that \(r\in\mathit{SUP}(P,I,p)\). Since \(r^{\prime}\in\mathit{SUP}(P^{I},I,p)\), we get \(I\models\mathit{Body}(r^{\prime})\) but this is equivalent to \(I\models\mathit{Body}^{+}(r)\). However, as we had \(I\models\mathit{Body}^{-}(r)\), we conclude \(I\models\mathit{Body}(r)\) and so \(r\) is supported in \(P\) given \(I\).
Definition 1 (Support Graph/Explanation): Let \(P\) be a labelled program and \(I\) a classical model of \(P\). A _support graph_\(G\) of \(I\) under \(P\) is a labelled directed graph \(G=\langle I,E,\lambda\rangle\) whose vertices are the atoms in \(I\), the edges in \(E\subseteq I\times I\) connect pairs of atoms, the function \(\lambda:I\to\mathit{Lb}(P)\) assigns a label to each atom, and \(G\) further satisfies:
1. \(\lambda\) is injective
2. for every \(p\in I\), the rule \(r\) such that \(\mathit{Lb}(r)=\lambda(p)\) satisfies: \(r\in\mathit{SUP}(P,I,p)\) and \(B^{+}(r)=\{q\mid(q,p)\in E\}\).
A support graph \(G\) is said to be an _explanation_ if it additionally satisfies:
1. \(G\) is acyclic.
Condition (i) means that there are no repeated labels in the graph, i.e., \(\lambda(p)\neq\lambda(q)\) for different atoms \(p,q\in I\). Condition (ii) requires that each atom \(p\) in the graph is assigned the label \(\ell\) of some rule with \(p\) in the head, with a body satisfied by \(I\) and whose atoms in the positive body form all the incoming edges for \(p\) in the graph. Intuitively, labelling \(p\) with \(\ell\) means that the corresponding (positive part of the) rule has been fired, "producing" \(p\) as a result. Since a label cannot be repeated in the graph, each rule can only be used to produce one atom, even though the rule head may contain more than one (when it is a disjunction). It is not difficult to see that an explanation \(G=\langle I,E,\lambda\rangle\) for a model \(I\) is uniquely determined by its atom labelling \(\lambda\). This is because condition (ii) about \(\lambda\) in Definition 1 uniquely specifies all the incoming edges for all the nodes in the graph. On the other hand, of course, not every arbitrary atom labelling corresponds to a well-formed explanation. We will sometimes abbreviate an explanation \(G\) for a model \(I\) by just using its labelling \(\lambda\) represented as a set of pairs of the form \(\lambda(p):p\) with \(p\in I\).
Definition 2 (Supported/Justified model):
A classical model \(I\) of a labelled program \(P\) if \(I\models P\) is said to be a _supported model_ of \(P\) if there exists some support graph of \(I\) under \(P\). Moreover, \(I\) is said to be a _justified model_ of \(P\) if there exists some explanation \(G\) (i.e. acyclic support graph) of \(I\) under \(P\). We write \(\mathit{SPM}(P)\) and \(\mathit{JM}(P)\) to respectively stand for the set of supported and justified models of \(P\). \(\Box\)
Obviously all justified models are supported \(\mathit{JM}(P)\subseteq\mathit{SPM}(P)\) but, in general, the opposite does not hold, as we will see later. Our main focus, however, is on justified models, since we will relate them to proofs, that are always acyclic. We can observe that not all models are justified, whereas a justified model may have more than one explanation, as we illustrate next.
**Example 1**: Consider the labelled logic program \(P\)
\[\ell_{1}:\ a\lor b\qquad\ell_{2}:\ d\gets a\land\neg c\qquad\ell_{3}:\ d \leftarrow\neg b\]
No model \(I\models P\) with \(c\in I\) is justified since \(c\) does not occur in any head, so its support is always empty \(\mathit{SUP}(P,I,c)=\emptyset\) and \(c\) cannot be labelled. The models of \(P\) without \(c\) are \(\{b\}\), \(\{a,d\}\), \(\{b,d\}\) and \(\{a,b,d\}\) but only the first two are justified. The explanation for \(I=\{b\}\) corresponds to the labelling \(\{(\ell_{1}:b)\}\) (it forms a graph with a single node). Model \(I=\{a,d\}\) has the two possible explanations:
\[\ell_{1}:a \longrightarrow\ell_{2}:d \ell_{1}:a\qquad\ell_{3}:d \tag{2}\]
Model \(I=\{b,d\}\) is not justified: we have no support for \(d\) given \(I\), \(\mathit{SUP}(P,I,d)=\emptyset\), because \(I\) satisfies neither bodies of \(\ell_{2}\) nor \(\ell_{3}\). On the other hand, model \(\{a,b,d\}\) is not justified either, because \(\mathit{SUP}(P,I,a)=\mathit{SUP}(P,I,b)=\{\ell_{1}\}\) and we cannot use the same label \(\ell_{1}\) for two different atoms \(a\) and \(b\) in a same explanation (condition (i) in Def. 1).\(\Box\)
**Definition 3** (_Proof of an atom_): Let \(I\) be a model of a labelled program \(P\), \(G=\langle I,E,\lambda\rangle\) an explanation for \(I\) under \(P\) and let \(p\in I\). The _proof_ for \(p\) induced by \(G\), written \(\pi_{G}(p)\), is the derivation:
\[\pi_{G}(p) \stackrel{{\mathrm{df}}}{{=}} \frac{\pi_{G}(q_{1})\ \dots\ \pi_{G}(q_{n})}{p}\ \lambda(p),\]
where, if \(r\in P\) is the rule satisfying \(\mathit{Lb}(r)=\lambda(p)\), then \(\{q_{1},\dots,q_{n}\}=\mathit{B}^{+}(r)\). When \(n=0\), the derivation antecedent \(\pi_{G}(q_{1})\ \dots\ \pi_{G}(q_{n})\) is replaced by \(\top\) (corresponding to the empty conjunction). \(\Box\)
**Example 2**: Let \(P\) be the labelled logic program:
\[\ell_{1}:\ p\qquad\ell_{2}:\ q\gets p\qquad\ell_{3}:\ r\gets p,q\]
\(P\) has a unique justified model \(\{p,q,r\}\) whose explanation is shown in Figure 1 (left) whereas the induced proof for atom \(r\) is shown in Figure 1 (right). \(\Box\)
The next proposition trivially follows from the definition of explanations:
**Proposition 2**:
If \(P\) is a Horn program, and \(G\) is an explanation for a model \(I\) of \(P\) then, for every atom, \(p\in I\), \(\pi_{G}(p)\) corresponds to a Modus Ponens derivation of \(p\) using the rules in \(P\).
It is worth mentioning that explanations do not generate any arbitrary Modus Ponens derivation of an atom, but only those that are globally "coherent" in the sense that, if any atom \(p\) is repeated in a proof, it is always justified repeating _the same subproof_.
In the previous examples, justified and stable models coincided: one may wonder whether this is a general property. As we see next, however, every stable model is justified but, in general, the opposite may not hold. To prove that stable models are justified, we start proving a correspondence between explanations for any model \(I\) of \(P\) and explanations under \(P^{I}\).
**Proposition 3**: _Let \(I\) be a model of program \(P\). Then \(G\) is an explanation for \(I\) under \(P\) iff \(G\) is an explanation for \(I\) under \(P^{I}\)._
By Proposition 1, for any atom \(p\in I\), the labels in \(\mathit{SUP}(P,I,p)\) and \(\mathit{SUP}(P^{I},I,p)\) coincide, so there is no difference in the ways in which we can label \(p\) in explanations for \(P\) and for \(P^{I}\). On the other hand, the rules in \(\mathit{SUP}(P^{I},I,p)\) are the positive parts of the rules in \(\mathit{SUP}(P,I,p)\), so the graphs we can form are also the same.
**Corollary 1**: \(I\in\mathit{JM}(P)\) _iff \(I\in\mathit{JM}(P^{I})\)._
**Theorem 1**: _Stable models are justified: \(\mathit{SM}(P)\subseteq\mathit{JM}(P)\)._
Let \(I\) be a stable model of \(P\). To prove that there is an explanation \(G\) for \(I\) under \(P\), we can use Proposition 1 and just prove that there is some explanation \(G\) for \(I\) under \(P^{I}\). We will build the explanation with a non-deterministic algorithm where, in each step \(i\), we denote the graph \(G_{i}\) as \(G_{i}=\langle I_{i},E_{i},\lambda_{i}\rangle\) and represent the labelling \(\lambda_{i}\) as a set of pairs of the form \((\ell:p)\) meaning \(\ell=\lambda(p)\). The algorithm proceeds as follows:
```
1:\(I_{0}\leftarrow\emptyset;E_{0}\leftarrow\emptyset;\lambda_{0}\leftarrow\emptyset\)
2:\(G_{0}=\langle I_{0},E_{0},\lambda_{0}\rangle\)
3:\(i\gets 0\)
4:while\(I_{i}\not\models P^{I}\)do
5: Pick a rule \(r\in P^{I}\) s.t. \(I_{i}\models\mathit{Body}(r)\land\neg\mathit{Head}(r)\)
Figure 1: Some results for model \(\{p,q,r\}\) of program in Example 2.
* Pick an atom \(p\in I\cap H(r)\)
* \(I_{i+1}\gets I_{i}\cup\{p\}\)
* \(\lambda_{i+1}\leftarrow\lambda_{i}\cup\{(\ell:p)\}\)
* \(E_{i+1}\gets E_{i}\cup\{(q,p)\mid q\in B^{+}(r)\}\)
* \(G_{i}\leftarrow\langle I_{i},E_{i},\lambda_{i}\rangle\)
* \(i\gets i+1\)
* **end while**
The existence of a rule \(r\in P^{I}\) in line 5 is guaranteed because the **while** condition asserts \(I_{i}\not\models P^{I}\) and so there must be some rule whose positive body is satisfied by \(I_{i}\) but its head is not satisfied. We prove next that the existence of an atom \(p\in I\cap\mathit{Head}(r)\) (line 5) is also guaranteed. First, note that the **while** loop maintains the invariant \(I_{i}\subseteq I\), since \(I_{0}=\emptyset\) and \(I_{i}\) only grows with atoms \(p\) (line 7) that belong to \(I\) (line 6). Therefore, \(I_{i}\models\mathit{Body}(r)\) implies \(I\models\mathit{Body}(r)\), but since \(I\models P^{I}\), we also conclude \(I\models r\) and thus \(I\models\mathit{Head}(r)\) that is \(I\cap H(r)\neq\emptyset\), so we can always pick some atom \(p\) in that intersection. Now, note that the algorithm stops because, in each iteration, \(I_{i}\) grows with exactly one atom from \(I\) that was not included before, since \(I_{i}\models\neg\mathit{Head}(r)\), and so, this process will stop provided that \(I\) is finite. The **while** stops satisfying \(I_{i}\models P^{I}\) for some value \(i=n\). Moreover, \(I_{n}=I\), because otherwise, as \(I_{i}\subseteq I\) is an invariant, we would conclude \(I_{n}\subset I\) and so \(I\) would not be a minimal model of \(P^{I}\), which contradicts that \(I\) is a stable model of \(P\). We remain to prove that the final \(G_{n}=\langle I_{n},E_{n},\lambda_{n}\rangle\) is a correct explanation for \(I\) under \(P^{I}\). As we said, the atoms in \(I\) are the graph nodes \(I_{n}=I\). Second, we can easily see that \(G_{n}\) is acyclic because each iteration adds a new node \(p\) and links this node to previous atoms from \(B^{+}(r)\subseteq I_{i}\) (remember \(I_{i}\models\mathit{Body}(r)\)) so no loop can be formed. Third, no rule label can be repeated, because we go always picking a rule \(r\) that is new, since it was not satisfied in \(I_{i}\) but becomes satisfied in \(I_{i+1}\) (the rule head \(\mathit{Head}(r)\) becomes true). Last, for every \(p\in I\), it is not hard to see that the (positive) rule \(r\in P^{I}\) such that \(Lb(r)=\lambda_{n}(p)\) satisfies \(p\in H(r)\) and \(B^{+}(r)=\{q\mid(q,p)\in E\}\) by the way in which we picked \(r\) and inserted \(p\) in \(I_{i}\), whereas \(I\models\mathit{Body}(r)\) because \(I_{i}\models\mathit{Body}(r)\), \(r\) is a positive rule and \(I_{i}\subseteq I\).
As a result, we get \(\mathit{SM}(P)\subseteq\mathit{JM}(P)\subseteq\mathit{SPM}(P)\), that is, justified models lay in between stable and supported.
**Proposition 4**: If \(P\) is a consistent Horn program then it has a unique justified model \(I\) that coincides with the least model of \(P\).
Since \(P\) is Horn and consistent (all constraints are satisfied) its unique stable model is the least model \(I\). By Theorem 1, \(I\) is also justified by some explanation \(G\). We remain to prove that \(I\) is the unique justified model. Suppose there is another model \(J\supset I\) (remember \(I\) is the least model) justified by an explanation \(G\) and take some atom \(p\in J\setminus I\). Then, by Proposition 2, the proof for \(p\) induced by \(G\), \(\pi_{G}(p)\), is a Modus Ponens derivation of \(p\) using the rules in \(P\). Since Modus Ponens is sound and the derivation starts from facts in the program, this means that \(p\) must be satisfied by any model of \(P\), so \(p\in I\) and we reach a contradiction.
In general, the number of explanations for a single justified model can be exponential, even when the program is Horn, and so, has a unique justified and stable model corresponding to the least classical model, as we just proved. As an example2:
Footnote 2: This example was already introduced as Program 7.1 in Fandinno (2015).
Example 3 (\(A\) chain of firing squads): Consider the following variation of the classical _Firing Squad Scenario_ introduced by Pearl (1999) for causal counterfactuals (although we do not use it for that purpose here). We have an army distributed in \(n\) squads of three soldiers each, a captain and two riflemen for each squad. We place the squads in a sequence of \(n\) consecutive hills \(i=0,\ldots,n-1\). An unfortunate prisoner is at the last hill \(n-1\), and is being aimed at by the last two riflemen. At each hill \(i\), the two riflemen \(a_{i}\) and \(b_{i}\) will fire if their captain \(c_{i}\) gives a signal to fire. But then, captain \(c_{i+1}\) will give a signal to fire if she hears a shot from the previous hill \(i\) in the distance. Suppose captain \(c_{0}\) gives a signal to fire. Our logic program would have the form:
\[\begin{array}{ll}s_{0}:\mathit{signal}_{0}&a_{i}:\mathit{fire}A_{i}\gets \mathit{signal}_{i}&a^{\prime}_{i+1}:\mathit{signal}_{i+1}\leftarrow\mathit{ fire}A_{i}\\ &b_{i}:\mathit{fire}B_{i}\gets\mathit{signal}_{i}&b^{\prime}_{i+1}: \mathit{signal}_{i+1}\leftarrow\mathit{fire}B_{i}\end{array}\]
for all \(i=0,\ldots,n-1\) where we assume (for simplicity) that \(\mathit{signal}_{n}\) represents the death of the prisoner. This program has one stable model (the least model) making true the \(3n+1\) atoms occurring in the program. However, this last model has \(2^{n}\) explanations because to derive \(\mathit{signal}_{i+1}\) from level \(i\), we can choose between any of the two rules \(a^{\prime}_{i}\) or \(b^{\prime}_{i}\) (corresponding to the two riflemen) in each explanation.
In many disjunctive programs, justified and stable models coincide. For instance, the following example is an illustration of a program with disjunction and head cycles.
Example 4: Let \(P\) be the program:
\[\ell_{1}:p\lor q\qquad\qquad\ell_{2}:q\gets p\qquad\qquad\ell_{3}:p\gets q\]
This program has one justified model \(\{p,q\}\) that coincides with the unique stable model and has two possible explanations, \(\{(\ell_{1}:p),(\ell_{2}:q)\}\) and \(\{(\ell_{1}:q),(\ell_{3}:p)\}\).
However, in the general case, not every justified model is a stable model: we provide next a simple counterexample. Consider the program \(P\):
\[\ell_{1}:a\lor b\qquad\qquad\ell_{2}:a\lor c\]
whose classical models are the five interpretations: \(\{a\}\), \(\{a,c\}\), \(\{a,b\}\), \(\{b,c\}\) and \(\{a,b,c\}\). The last one \(\{a,b,c\}\) is not justified, since we would need three different labels and we only have two rules. Each model \(\{a,c\}\), \(\{a,b\}\), \(\{b,c\}\) has a unique explanation corresponding to the atom labellings \(\{(\ell_{1}:a),(\ell_{2}:c)\}\), \(\{(\ell_{1}:b),(\ell_{2}:a)\}\) and \(\{(\ell_{1}:b),(\ell_{2}:c)\}\), respectively. On the other hand, model \(\{a\}\) has two possible explanations, corresponding to \(\{(\ell_{1}:a)\}\) and \(\{(\ell_{2}:a)\}\). Notice that, in the definition of explanation, there is no need to fire every rule with a true body in \(I\) - we are only forced to explain every true atom in \(I\). Note also that only the justified models \(\{a\}\) and \(\{b,c\}\) are also stable: this is due to the minimality condition imposed by stable models on positive programs, getting rid of
the other two justified models \(\{a,b\}\) and \(\{a,c\}\). The following theorem asserts that, for non-disjunctive programs, every justified model is also stable.
**Theorem 2**: If \(P\) is a non-disjunctive program, then \(\mathit{SM}(P)=\mathit{JM}(P)\). \(\square\)
Proof: Given Theorem 1, we must only prove that, for non-disjunctive programs, every justified model is also stable. Let \(I\) be a justified model of \(P\). By Proposition 3, we also know that \(I\) is a justified model of \(P^{I}\). \(P^{I}\) is a positive program and is non-disjunctive (since \(P\) was non-disjunctive) and so, \(P\) is a Horn program. By Proposition 4, we know \(I\) is also the _least model_ of \(P^{I}\), which makes it a stable model of \(P\). \(\square\)
Moreover, for non-disjunctive programs, we can prove that our definition of supported model, coincides with the traditional one in terms of fixpoints of the immediate consequences operator van Emden and Kowalski (1976) or as models of completion Clark (1978). Given a non-disjunctive program \(P\), let \(T_{P}(I)\) be defined as \(\{p\mid r\in P,I\models\mathit{Body}(r),\mathit{Head}(r)=p\}\).
**Theorem 3**: If \(P\) is a non-disjunctive program, then \(I=T_{P}(I)\) iff \(I\in\mathit{SPM}(P)\). \(\square\)
Proof: For left to right, suppose \(I=T_{P}(I)\). It is easy to see that this implies \(I\models P\). By definition of \(T_{P}\), for each atom \(p\) there exists some rule \(r\) with \(\mathit{Head}(r)=p\) and \(I\models\mathit{Body}(r)\). Let us arbitrarily pick one of those rules \(r_{p}\) for each \(p\). Then we can easily form a support graph where \(\lambda(p)=Lb(r_{p})\) and assign all the incoming edges for \(p\) as \((q,p)\) such that \(q\in\mathit{Body}^{+}(r_{p})\).
For right to left, suppose \(I\models P\) and there is some support graph \(G\) of \(I\) under \(P\). We prove both inclusion directions for \(I=T_{P}(I)\). For \(\subseteq\), suppose \(p\in I\). Then \(p\) is a node in \(G\) and there is a rule \(r\) such that \(\lambda(p)=Lb(r)\), \(p=\mathit{Head}(r)\) (\(P\) is non-disjunctive) and \(I\models\mathit{Body}(r)\). But then \(p\in T_{P}(I)\). For \(\supseteq\), take any \(p\in T_{P}(I)\) and suppose \(p\not\in I\). Then, we have at least some rule \(r\in P\) with \(I\models\mathit{Body}(r)\) and \(I\not\models\mathit{Head}(r)(=p)\), something that contradicts \(I\models P\). \(\square\)
To illustrate supported models in the disjunctive case, consider the program:
\[\ell_{1}:a\lor b\gets c\qquad\qquad\ell_{2}:c\gets b\]
The only justified model of this program is \(\emptyset\) which is also stable and supported. Yet, we also obtain a second supported model \(\{b,c\}\) that is justified by the (cyclic) support graph with labelling \(\{\ell_{1}:b,\ell_{2}:c\}\).
## 3 An ASP encoding to compute explanations
In this section, we focus on the computation of explanations for a given stable model. We assume that we use an ASP solver to obtain the answer sets of some program \(P\) and that we have some way to label the rules. For instance, we may use the code line number
(or another tag specified by the user), followed by the free variables in the rule and some separator. In that way, after grounding, we get a unique identifier for each ground rule.
To explain the answer sets of \(P\) we may build the following (non-ground) ASP program \(x(P)\) that can be fed with the (reified) true atoms in \(I\) to build the ground program \(x(P,I)\). As we will prove, the answer sets of \(x(P,I)\) are in one-to-one correspondence with the explanations of \(I\). The advantage of this technique is that, rather than collecting all possible explanations in a single shot, something that results too costly for explaining large programs, we can perform regular calls to an ASP solver for \(x(P,I)\) to compute one, several or all explanations of \(I\) on demand. Besides, this provides a more declarative approach that can be easily extended to cover new features (such as, for instance, minimisation among explanations).
For each rule in \(P\) of the form (1), \(x(P)\) contains the set of rules:
\[sup(\ell) \leftarrow as(q_{1})\wedge\cdots\wedge as(q_{n})\wedge as(p_{i})\wedge\neg a (s_{1})\wedge\cdots\wedge\neg as(s_{j}) \tag{3}\] \[\wedge\ \neg\neg as(t_{1})\wedge\cdots\wedge\neg\neg as(t_{k})\] (4) \[\{f(\ell,p_{i})\} \leftarrow f(q_{1})\wedge\cdots\wedge f(q_{n})\wedge as(p_{i})\wedge sup(\ell)\] (5) \[\bot \leftarrow f(\ell,p_{i})\wedge f(\ell,p_{h}) \tag{6}\]
for all \(i,h=1\ldots m\) and \(i\neq h\), and, additionally \(x(P)\) contains the rules:
\[f(A) \leftarrow f(L,A)\wedge as(A) \tag{7}\] \[\bot \leftarrow not\ f(A)\wedge as(A)\] (8) \[\bot \leftarrow f(L,A)\wedge f(L^{\prime},A)\wedge L\neq L^{\prime}\wedge as(A) \tag{9}\]
As we can see, \(x(P)\) reifies atoms in \(P\) using three predicates: \(as(A)\) which means that atom \(A\) is in the answer set \(I\), so it is an initial assumption; \(f(L,A)\) means that rule with label \(L\) has been "fired" for atom \(A\), that is, \(\lambda(A)=L\); and, finally, \(f(A)\) that just means that there exists some fired rule for \(A\) or, in other words, we were able to derive \(A\). Predicate \(sup(\ell)\) tells us that the body of the rule \(r\) with label \(\ell\) is "supported" by \(I\), that is, \(I\models\mathit{Body}(r)\). Given any answer set \(I\) of \(P\), we define the program \(x(P,I)\stackrel{{\mathrm{df}}}{{=}}x(P)\cup\{as(A)\mid A\in I\}\). It is easy to see that \(x(P,I)\) becomes equivalent to the ground program containing the following rules:
\[\{f(\ell,p)\} \leftarrow f(q_{1})\wedge\cdots\wedge f(q_{n}) \text{for each rule }r\in P\text{ like (\ref{
We have to prove that \(J\) induces a valid explanation \(G\). Let us denote \(\mathit{At}(J)\stackrel{{\mathrm{df}}}{{=}}\{a\in\mathit{At}\mid f(a) \in J\}\). Since (12) is the only rule for \(f(a)\), we can apply completion to conclude that \(f(a)\in J\) iff \(f(\ell,a)\in J\) for some label \(\ell\). So, the set \(\mathit{At}(J)\) contains the set of atoms for which \(J\) assigns some label: we will prove that this set coincides with \(I\). We may observe that \(I\subseteq\mathit{At}(J)\) because for any \(a\in I\) we have the constraint (13) forcing \(f(a)\in J\). On the other hand, \(\mathit{At}(J)\subseteq I\) because the only rules with \(f(a)\) in the head are (12) and these are only defined for atoms \(a\in I\). To sum up, in any answer set \(J\) of \(x(P,I)\), we derive exactly the original atoms in \(I\), \(\mathit{At}(J)=I\) and so, the graph induced by \(J\) has exactly one node per atom in \(I\).
Constraint (14) guarantees that atoms \(f(\ell,a)\) have a functional nature, that is, we never get two different labels for a same atom \(a\). This allows defining the labelling function \(\lambda(a)=\ell\) iff \(f(\ell,a)\in J\). We remain to prove that conditions (i)-(iii) in Definition 1 hold. Condition (i) requires that \(\lambda\) is injective, something guaranteed by (11). Condition (ii) requires that, informally speaking, the labelling for each atom \(a\) corresponds to an activated, supported rule for \(a\). That is, if \(\lambda(a)=\ell\), or equivalently \(f(\ell,a)\), we should be able to build am edge \((q,a)\) for each atom in the positive body of \(\ell\) so that atoms \(q\) are among the graph nodes. This is guaranteed by that fact that rule (10) is the only one with predicate \(f(\ell,a)\) in the head. So, if that ground atom is in \(J\), it is because \(f(q_{i})\) are also in \(J\) i.e. \(q_{i}\in I\), for all atoms in the positive body of rule labelled with \(\ell\). Note also that (10) is such that \(I\models\mathit{Body}(r)\), so the rule supports atom \(p\) under \(I\), that is, \(r\in\mathit{SUP}(P,I,p)\). Let us call \(E\) to the set of edges formed in this way. Condition (iii) requires that the set \(E\) of edges forms an acyclic graph. To prove this last condition, consider the reduct program \(x(P,I)^{J}\). The only difference of this program with respect to \(x(P,I)\) is that rules (10) have now the form:
\[f(\ell,p)\gets f(q_{1})\wedge\cdots\wedge f(q_{n}) \tag{15}\]
for each rule \(r\in P\) like (1), \(I\models\mathit{Body}(r)\), \(p\in H(r)\cap I\) as before, but additionally \(f(\ell,p)\in J\) so the rule is kept in the reduct. Yet, the last condition is irrelevant since \(f(\ell,p)\in J\) implies \(f(p)\in J\) so \(p\in\mathit{At}(J)=I\). Thus, we have exactly one rule (15) in \(x(P,I)^{J}\) per each choice (10) in \(x(P,I)\). Now, since \(J\) is an answer set of \(x(P,I)\), by monotonicity of constraints, it (11), (13) and (14) and is an answer set of the rest of the program \(P^{\prime}\) formed by rules (15) and (11). This means that \(J\) is a minimal model of \(P^{\prime}\). Suppose we have a cycle in \(E\), formed by the (labelled) nodes and edges \((\ell_{1}:p_{1})\longrightarrow\cdots\longrightarrow(\ell_{n}:p_{n}) \longrightarrow(\ell_{1}:p_{1})\).Take the interpretation \(J^{\prime}=J\setminus\{f(\ell_{1},p_{1}),\ldots,f(\ell_{n},p_{n}),f(p_{1}), \ldots,f(p_{n})\}\). Since \(J\) is a minimal for \(P^{\prime}\) there must be some rule (15) or (11) not satisfied by \(J^{\prime}\). Suppose \(J^{\prime}\) does not satisfy some rule (11) so that \(f(a)\not\in J^{\prime}\) but \(f(\ell,a)\in J^{\prime}\subseteq J\). This means we had \(f(a)\in J\) since the rule was satisfied by \(J\) so \(a\) is one of the removed atoms \(p_{i}\) belonging to the cycle. But then \(f(\ell,a)\) should have been removed \(f(\ell,a)\not\in J^{\prime}\) and we reach a contradiction. Suppose instead that \(J^{\prime}\) does not satisfy some rule (15), that is, \(f(\ell,p)\not\in J^{\prime}\) and \(\{f(q_{1}),\ldots,f(g_{n})\}\subseteq J^{\prime}\subseteq J\). Again, since the body holds in \(J\), we get \(f(\ell,p)\in J\) and so, \(f(\ell,p)\) is one of the atoms in the cycle we removed from \(J^{\prime}\). Yet, since \((\ell:p)\) is in the cycle, there is some incoming edge from some atom in the cycle and, due to the way in which atom labelling is done, this means that this edge must come from some atom \(q_{i}\) with \(1\leq i\leq n\) in the positive body of the rule whose label is \(\ell\). But, since this atom is in the cycle, this also means that \(f(q_{i})\not\in J^{\prime}\) and we reach a contradiction.
**Theorem 5** (_Completeness_): Let \(I\) be an answer set of \(P\). For every explanation \(G=\langle I,E,\lambda\rangle\) of \(I\) under \(P\) there exists some answer set \(J\) of program \(x(P,I)\) where \(f(\ell,a)\in J\) iff \(\lambda(a)=\ell\) in \(G\). \(\Box\)
Proof: Take \(I\) an answer set of \(P\) and \(G=\langle I,E,\lambda\rangle\) some explanation for \(I\) under \(P\) and let us define the interpretation:
\[J:=\{f(a)\mid a\in I\}\cup\{f(\ell,a)\mid\lambda(a)=\ell\}\]
We will prove that \(J\) is an answer set of \(x(P,I)\) or, in other words, that \(J\) is a minimal model of \(x(P,I)^{J}\). First, we will note that \(J\) satisfies \(x(P,I)^{J}\) rule by rule. For the constraints, \(J\) obviously satisfy (7) because it contains an atom \(f(a)\) for each \(a\in I\). We can also see that \(J\) satisfies (11) because graph \(G\) does not contain repeated labels, so we cannot have two different atoms with the same label. The third constraint (14) is also satisfied by \(J\) because atoms \(f(\ell,a),f(\ell^{\prime},a)\) are obtained from \(\lambda(a)\) that is a function that cannot assign two different labels to a same atom \(a\). Satisfaction of (11) is guaranteed since the head of this rule \(f(a)\) is always some atom \(a\in I\) and therefore \(f(a)\in J\). For the remaining rule, (10), we have two cases. If \(f(\ell,p)\not\in J\) then the rule is not included in the reduct and so there is no need to be satisfied. Otherwise, if \(f(\ell,p)\in J\) then the rule in the reduct corresponds to (15) and is trivially satisfied by \(J\) because its only head atom holds in that interpretation. Finally, to prove that \(J\) is a minimal model of \(x(P,I)^{J}\), take the derivation tree \(\pi_{G}(a)\) for each atom \(a\in I\). Now, construct a new tree \(\pi\) where we replace each atom \(p\) in \(\pi_{G}(a)\) by an additional derivation from \(f(\ell,p)\) to \(f(p)\) through rule (12). It is easy to see that \(\pi\) constitutes a Modus Ponens proof for \(f(a)\) under the Horn program \(x(P,I)^{J}\) and the same reasoning can be applied to atom \(f(\ell,a)\in J\) that is derived in the tree \(\pi\) for \(f(a)\). Therefore, all atoms in \(J\) must be included in any model of \(x(P,I)^{J}\). \(\Box\)
## 4 Related work
The current approach constitutes the formal basis of the new version of the explanation tool xclingo Cabalar and Muniz (2023) which also uses the ASP encoding from Section 3 to compute the explanations. Theorems 4 and 5 prove, in this way, that the tool is sound and complete with respect to the definition of explanation provided in the current paper.
There exist many other approaches for explanation and debugging in ASP (see the survey Fandinno and Schulz (2019)). The closest approach to the current work is clearly the one based on _causal graphs_ Cabalar et al. (2014). Although we conjecture that a formal relation can be established (we plan this for future work), the main difference is that causal graphs are "atom oriented" whereas the current approach is model oriented. For instance, in the firing squads example, the causal-graph explanation for the derivations of atoms \(\mathit{signal}_{4}\) and \(\mathit{signal}_{8}\) would contain algebraic expressions with _all_ the possible derivations for each one of those atoms. In the current approach, however, we would get an individual derivation in each case, but additionally, the proof we get for \(\mathit{signal}_{4}\) has to be _the same one_ we use for that atom inside the derivation of \(\mathit{signal}_{8}\).
Justifications based on the positive part of the program were also used before in Erdem and Oztok (2013). There, the authors implemented an ad-hoc approach to the problem of solving biomedical queries, rather than a general ASP explanation tool.
Other examples of general approaches are the _formal theory of justifications_Denecker et al. (2015), _off-line justifications_Pontelli and Son (2006), LABAS Schulz and Toni (2016) (based on argumentation theory Bondarenko et al. (1997); Dung et al. (2009)) or s(CASP) Arias et al. (2020). All of them provide graph or tree-based explanations for an atom to be (or not) in a given answer set. The formal theory of justifications was also extended to deal with nested graph based justifications Marynissen (2022) and is actually a more general framework that allows covering other logic programming semantics. System xASP Trieu et al. (2022) generates explanation graphs from Pontelli and Son (2006) and also uses an ASP meta-programming encoding. In the case of s(CASP), it proceeds in a top-down manner, building the explanation as an ordered list of literals extracted from the goal-driven satisfaction of the query. An important difference with respect to this last group of approaches is that their explanations consider dependences through default negation. To illustrate the effect, take the program:
\[\ell_{1}:switch\] \[\ell_{2}:light \leftarrow switch,not\ ab\] \[\ell_{3}:ab \leftarrow blown\_fuse\] \[\ell_{4}:ab \leftarrow broken\_bulb\] \[\ell_{5}:ab \leftarrow blackout,not\ generator\]
The only stable model is \(\{switch,light\}\) and its unique explanation is the support graph
\[\ell_{1}:switch\longrightarrow\ell_{2}:light\]
that is, the light is on because we toggled the switch. Adding negative information would lead us to explain \(not\ ab\) and obtain two explanations: one in which we also add that there is no blown fuse, no broken bulb and no blackout; the second one is similar, but instead of no blackout, we have a doubly negative dependence on generator: i.e. nothing prevents having a generator, even though we do not have it. Note how these explanations may easily get complicated: we could have to negate multiple alternative ways of breaking the bulb, even when _none of them have happened3_. Our approach consists, instead, in explaining the information that currently holds, assuming that other states of affairs will arise in terms of other alternative answer sets. In other words, we refrain from using facts for which we have no evidence or reason to believe in our current model.
Footnote 3: We face here, somehow, a kind of qualification problem in the explanations.
Another distinctive feature of our approach is that it provides explanations for disjunctive programs and, moreover, it has also allowed us to define supported and justified models for that case. In fact, we plan to study potential connections between justified models and other approaches for disjunction not based in minimal models such as Aguado et al. (2017) or Shen and Eiter (2019).
Other ASP explanation approaches have to do with comparing stable models or explaining their non-existence. For instance, Gebser et al. (2008) uses a meta-programming
technique to explain why a given model _is not_ an answer set of a given program. More recently, Eiter et al. (2019) considered the explanation of ASP programs that have no answer sets in terms of the concept of _abstraction_Saribatur et al. (2021). This allows spotting which parts of a given domain are actually relevant for rising the unsatisfiability of the problem. We plan to explore formal relations to these approaches or to study potential combinations with some of them.
## 5 Conclusions
We have introduced the notion of explanation of a model of a logic program as some kind of (acyclic) labelled graph we called _support graph_. We have have defined justified models as those that have at least one explanation and proved that all stable models are justified, whereas the opposite does not hold, at least for disjunctive programs. We also provided a meta-programming encoding in ASP that generates the explanations of a given stable model. We formally proved a one-to-one correspondence between the answer sets of the encoding and the explanations of the original stable model. Since this encoding constitutes the basis of the tool xclingo 2.0, we provide in this way a formal proof of correctness for this system. A system description of the tool is left for a forthcoming document. Future work includes the comparison to other approaches, the explanation of unsatisfiable programs and the minimisation or even the specification of preferences among explanations.
|
2305.14923 | Faraday rotation and transmittance as markers of topological phase
transitions in 2D materials | We analyze the magneto-optical conductivity (and related magnitudes like
transmittance and Faraday rotation of the irradiated polarized light) of some
elemental two-dimensional Dirac materials of group IV (graphene analogues,
buckled honeycomb lattices, like silicene, germanene, stannane, etc.), group V
(phosphorene), and zincblende heterostructures (like HgTe/CdTe quantum wells)
near the Dirac and gamma points, under out-of-plane magnetic and electric
fields, to characterize topological-band insulator phase transitions and their
critical points. We provide plots of the Faraday angle and transmittance as a
function of the polarized light frequency, for different external electric and
magnetic fields, chemical potential, HgTe layer thickness and temperature, to
tune the material magneto-optical properties. We have shown that
absortance/transmittance acquires extremal values at the critical point, where
the Faraday angle changes sign, thus providing fine markers of the topological
phase transition. In the case of non-topological materials as phosphorene, a
minimum of the transmittance is also observed due to the energy gap closing by
an external electric field. | M. Calixto, A. Mayorgas, N. A. Cordero, E. Romera, O. Castaños | 2023-05-24T09:09:33Z | http://arxiv.org/abs/2305.14923v3 | # Faraday rotation and transmittance as markers of topological phase transitions in 2D materials
###### Abstract
We analyze the magneto-optical conductivity (and related magnitudes like transmittance and Faraday rotation of the irradiated polarized light) of some elemental two-dimensional Dirac materials of group IV (graphene analogues, buckled honeycomb lattices, like silicene, germanene, stannane, etc.), group V (phosphorene), and zincblende heterostructures (like HgTe/CdTe quantum wells) near the Dirac and gamma points, under out-of-plane magnetic and electric fields, to characterize topological-band insulator phase transitions and their critical points. We provide plots of the Faraday angle and transmittance as a function of the polarized light frequency, for different external electric and magnetic fields, chemical potential, HgTe layer thickness and temperature, to tune the material magneto-optical properties. We have shown that absoratance/transmittance acquires extremal values at the critical point, where the Faraday angle changes sign, thus providing fine markers of the topological phase transition.
+
Footnote †: [email protected]
+
Footnote †: [email protected]
+
Footnote †: [email protected]
+
Footnote †: [email protected]
## I Introduction
Two-dimensional (2D) materials have been extensively studied in recent years (and are expected to be one of the crucial research topics in future years) especially because of their remarkable electronic and magneto-optical properties which make them hopeful candidates for next generation optoelectronic devices. Graphene is the archetype of a 2D nanomaterial with exceptional high tensile strength, electrical conductivity, transparency, etc. In spite of being the thinnest one, it exhibits a giant Faraday rotation (\(\Theta_{\mathrm{F}}\sim 6^{\circ}\)) on polarized light in single- and multilayer arrangements [1; 2; 3; 4; 5; 6] with experimental confirmation [7]. Faraday rotation is a fundamental magneto-optical phenomenon used in various optical control, laser technology and magnetic field sensing techniques.
Magneto-optical properties of other buckled honeycomb lattices, like silicene [8], have been studied in [9; 10; 11], together with other monolayer transition metal dichalcogenides [12] and anisotropic versions like phosphorene [13]. Magneto-optical measurements also provide signatures of the topological phase transition (TPT; see [14; 15; 16] for standard textbooks on the subject) in inverted HgTe/CdTe quantum wells (QW), distinguishing quantum Hall (QH) from quantum spin Hall (QSH) phases [17], where one can tune the band structure by fabricating QWs with different thicknesses \(\lambda\). A universal value of the Faraday rotation angle, close to the fine structure constant, has been experimentally observed in thin HgTe QW with critical thickness [18].
Information theoretic measures also provide signatures of the TPT in silicene [19; 20; 21; 22; 23] and HgTe/CdTe QWs [24], as an alternative to the usual topological (Chern) numbers. They also account for semimetalic behavior of phosphorene [25; 26] under perpendicular electric fields.
In this paper we perform a comparative study of the
magneto-optical properties of several 2D Dirac materials looking for TPT signatures when the band structure is tuned by applying external fields or by changing the material characteristics. The organization of the article is as follows. In Sec. II we discuss the structure of time independent Bloch Hamiltonians for general two-band 2D-Dirac material models, their Chern numbers and their minimal coupling to an external perpendicular magnetic field. We particularize to graphene analogues (silicene, germanene, etc.) in Sec. II.1, zincblende heterostructures (HgTe/CdTe quantum wells) in Sec. II.2 and anisotropic materials like phosphorene in II.3, calculating their energy spectrum and Hamiltonian eigenstates (Landau levels) and describing their topological phases (when they exist). In Sec. III we recall the Kubo-Greenwood formula for the magneto-optical conductivity tensor \(\mathbf{\sigma}\) of a 2D electron system in a perpendicular magnetic field \(B\) and an oscillating electric field of frequency \(\Omega\). In particular, we are interested in analyzing the transmittance and Faraday rotation of linearly polarized light of frequency \(\Omega\) for normal incidence on the 2D material. Magneto-optical properties of graphene analogues, zincblende heterostructures and phosphorene are analyzed in Sections III.1, III.2 and III.3, respectively. For topological insulator materials, we find that the critical point is generally characterized by a minimum transmittance \(\mathcal{T}_{0}\) at a given light frequency \(\Omega_{0}\), where the Faraday angle changes sign. The effect of anisotropies is also discussed in phosphorene in Section III.3. Finally, Sec. IV is devoted to conclusions.
## II Some two-band 2D-Dirac material models
The time independent Bloch Hamiltonian of a two-band 2D insulator has the general form
\[H(\mathbf{k})=\epsilon_{0}(\mathbf{k})\tau_{0}+\mathbf{d}(\mathbf{k})\cdot\mathbf{\tau}, \tag{1}\]
where \(\mathbf{\tau}=(\tau_{x},\tau_{y},\tau_{z})\) is the Pauli matrix vector, \(\tau_{0}\) denotes the \(2\times 2\) identity matrix and \(\mathbf{d}(\mathbf{k})\) parameterizes an effective spin-orbit coupling near the center \(\Gamma\) or the Dirac valleys \(K\) and \(K^{\prime}\) of the first Brillouin zone (FBZ), with \(\mathbf{k}=(k_{x},k_{y})\) the two-dimensional wavevector. The energy of the two bands is \(\epsilon_{\pm}(\mathbf{k})=\epsilon_{0}(\mathbf{k})\pm|\mathbf{d}(\mathbf{k})|\).
To distinguish between band insulator and topological insulator phases, one can use the TKNN (Thouless-Kohmoto-Nightingale-Nijs) formula [27] providing the Chern-Pontryagin number (related to the quantum spin Hall conductance and the Berry phase [28])
\[\mathcal{C}=\frac{1}{2\pi}\int\int_{\text{FBZ}}d^{2}\mathbf{k}\left(\frac{\partial \hat{\mathbf{d}}(\mathbf{k})}{\partial k_{x}}\times\frac{\partial\hat{\mathbf{d}}(\mathbf{k}) }{\partial k_{y}}\right)\cdot\hat{\mathbf{d}}(\mathbf{k}), \tag{2}\]
with \(\hat{\mathbf{d}}=\mathbf{d}/|\mathbf{d}|\), which counts the number of times (winding number) the unit vector \(\hat{\mathbf{d}}(\mathbf{k})\) wraps around the unit sphere as \(\mathbf{k}\) wraps around the entire FBZ. The Chern number \(\mathcal{C}\) usually depends on the sign of some material and (external) control parameters in the Hamiltonian \(H\) (see later for some examples), taking different values in different phases. We shall see that magneto-optical conductivity measures also capture the topological phase transition.
We shall consider the interaction with a perpendicular magnetic field \(\mathbf{B}=(0,0,B)\). Promoting the wavevector \(\mathbf{k}\) to the momentum operator \(\mathbf{k}\to\mathbf{p}/\hbar=-\mathrm{i}\mathbf{\nabla}\), this interaction is introduced through the usual minimal coupling, \(\mathbf{p}\to\mathbf{P}=\mathbf{p}+e\mathbf{A}\) with \(\mathbf{A}=(A_{x},A_{y})=(-By,0)\) the electromagnetic potential (in the Landau gauge) and \(e\) the elementary charge (in absolute value). After Peierls' substitution, which results in
\[k_{x}\to P_{x}/\hbar=\frac{a^{\dagger}+a}{\sqrt{2}\ell_{B}},\quad k_{y}\to P_{y }/\hbar=\frac{a^{\dagger}-a}{\mathrm{i}\sqrt{2}\ell_{B}}, \tag{3}\]
the Hamiltonian (1) can be eventually written in terms of creation \(a^{\dagger}\) and annihilation
\[a=\frac{\ell_{B}}{\sqrt{2}\hbar}(P_{x}-\mathrm{i}P_{y})=\frac{-1}{\sqrt{2}\ell _{B}}(y-y_{0}+\mathrm{i}\ell_{B}^{2}p_{y}/\hbar), \tag{4}\]
operators, where \(\ell_{B}=\sqrt{\hbar/(eB)}\) is the magnetic length and \(y_{0}=\ell_{B}^{2}k_{x}\) is the coordinate of the conserved center of the cyclotron orbit.
Let us review some relevant physical examples.
### Graphene analogues: silicene, germanene, etc
Silicene, germanene, and other transition metal dichalcogenides (of the Xene type), differ from pristine graphene in that they exhibit an intrinsic non-zero spin-orbit coupling \(H_{\text{so}}=-\frac{1}{2}s\xi\Delta_{\text{so}}\tau_{z}\) (\(s=\pm 1\) is the spin of the electron and \(\xi=\pm 1\) refer to the Dirac valleys \(K\) and \(K^{\prime}\)) due to second neighbors hopping terms in the tight binding model [29]. Spin-orbit interaction \(H_{\text{so}}\) combined with and external perpendicular electric field coupling \(H_{\Delta_{z}}=\frac{1}{2}\Delta_{z}\tau_{z}\), gives \(\mathbf{d}(\mathbf{k})=(v\hbar\xi k_{x},v\hbar k_{y},\Delta_{\xi\xi})\), where \(\Delta_{\xi\xi}=(\Delta_{z}-s\xi\Delta_{\text{so}})/2\) results in a tunable (Dirac mass) gap (see e.g. [30; 31; 32; 33]). In Table 1 we show a comparative of spin-orbit coupling and Fermi velocity values for several 2D materials.
The Chern number (2) turns out to be
\[\mathcal{C}_{s\xi}=\xi\,\text{sign}(\Delta_{s\xi}), \tag{5}\]
where we have integrated on the whole plane, as corresponds to the FBZ in the continuum limit (zero lattice constant). Therefore, the topological phase is determined by the sign of the Dirac mass at each valley \(\xi\). More precisely, there is a TPT from a topological insulator (TI, \(|\Delta_{z}|<\Delta_{\text{so}}\)) to a band insulator (BI, \(|\Delta_{z}|>\Delta_{\text{so}}\)), at a charge neutrality point (CNP) \(\Delta_{z}^{(0)}=s\xi\Delta_{\text{so}}\), where there is a gap cancellation between the perpendicular electric field and the spin-orbit coupling.
Using the general prescription (3), the minimal coupling with a perpendicular magnetic field \(B\) then results in a different Hamiltonian \(H_{\xi}\) for each valley \(\xi=\pm 1\)
\[H_{1}=\left(\begin{array}{cc}\Delta_{s,1}&\hbar\omega a\\ \hbar\omega a^{\dagger}&-\Delta_{s,1}\end{array}\right),\;H_{-1}=\left(\begin{array} []{cc}\Delta_{s,-1}&-\hbar\omega a^{\dagger}\\ -\hbar\omega a&-\Delta_{s,-1}\end{array}\right), \tag{6}\]
where \(\omega=\sqrt{2}v/\ell_{B}\) denotes the cyclotron frequency. The eigenvalues of both Hamiltonians are simply:
\[E_{n}^{s\xi}=\left\{\begin{array}{l}\mathrm{sgn}(n)\sqrt{|n|\hbar^{2}\omega^ {2}+\Delta_{s\xi}^{2}},\quad n\neq 0,\\ -\xi\Delta_{s\xi},\quad n=0,\end{array}\right. \tag{7}\]
and the corresponding eigenstates are written in terms of Fock states \(||n|\rangle\), for Landau level (LL) index \(n=0,\pm 1,\pm 2,\ldots\) [valence (\(-\)) and conduction (\(+\)) states], as spinors
\[|\mathbf{n}\rangle_{s\xi}=\left(\begin{array}{c}A_{n}^{s\xi}\left||n|-\frac{\xi +1}{2}\\ B_{n}^{s\xi}\left||n|+\frac{\xi-1}{2}\right\rangle\end{array}\right), \tag{8}\]
with coefficients (see [36; 37; 38; 9; 38] for similar results)
\[A_{n}^{s\xi} = \left\{\begin{array}{l}\frac{\mathrm{sgn}(n)}{\sqrt{2}}\sqrt{1 +\mathrm{sgn}(n)\cos\theta_{n}^{s\xi}},\quad n\neq 0,\\ (1-\xi)/2,\quad n=0,\end{array}\right. \tag{9}\] \[B_{n}^{s\xi} = \left\{\begin{array}{l}\frac{\xi}{\sqrt{2}}\sqrt{1-\mathrm{sgn }(n)\cos\theta_{n}^{s\xi}},\quad n\neq 0,\\ (1+\xi)/2,\quad n=0,\end{array}\right.\]
where \(\theta_{n}^{s\xi}=\arctan\left(\hbar\omega\sqrt{|n|}/\Delta_{s\xi}\right)\), that is, \(\cos\theta_{n}^{s\xi}=\Delta_{s\xi}/|E_{n}^{s\xi}|\). Note that \(A_{n}^{s\xi}\) and \(B_{n}^{s\xi}\) can eventually be written as \(\cos(\theta_{n}^{s\xi}/2)\) or \(\sin(\theta_{n}^{s\xi}/2)\), depending on \(\mathrm{sgn}(n)\).
In Figure 1 we plot the low energy spectra of silicene, given by (7), as a function of the external electric field \(\Delta_{z}\), together with the charge neutrality (critical) points \(\Delta_{z}^{(0)}=\pm|\Delta_{\mathrm{so}}|\) (marked by vertical dashed lines) at which the TPT takes place.
### HgTe/CdTe quantum wells
In [39; 40; 41; 42] it was shown that quantum spin Hall effect can be realized in mercury telluride-cadmium telluride semiconductor quantum wells. Similar effects were also predicted in Type-II semiconductor quantum wells made from InAs/GaSb/AlSb [43]. The surface states in these 3D topological insulators can be described by a 2D modified effective Dirac Hamiltonian
\[H=\left(\begin{array}{cc}H_{+}&0\\ 0&H_{-}\end{array}\right),\,H_{s}(\mathbf{k})=\epsilon_{0}(\mathbf{k})\tau_{0}+\mathbf{d} _{s}(\mathbf{k})\cdot\mathbf{\tau}, \tag{10}\]
where \(s=\pm 1\) is the spin and \(H_{-}(\mathbf{k})=H_{+}^{*}(-\mathbf{k})\) (temporarily reversed). The expansion of \(H_{s}(\mathbf{k})\) about the center \(\Gamma\) of the first Brillouin zone gives [40]
\[\epsilon_{0}(\mathbf{k})=\gamma-\delta\mathbf{k}^{2},\quad\mathbf{d}_{s}(\mathbf{k})=(\alpha sk _{x},\alpha k_{y},\mu-\beta\mathbf{k}^{2}), \tag{11}\]
where \(\alpha,\beta,\gamma,\delta\) and \(\mu\) are expansion parameters that depend on the heterostructure (the HgTe layer thickness \(\lambda\)). The most important one is the mass or gap parameter \(\mu\), which changes sign at a critical HgTe layer thickness \(\lambda_{c}\) when going from the normal (\(\lambda<\lambda_{c}\) or \(\mu/\beta<0\)) to the inverted (\(\lambda>\lambda_{c}\) or \(\mu/\beta>0\)) regime [44]. Typical values of these parameters for different HgTe layer thickness
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(\Delta_{\mathrm{so}}\) (meV) & \(l\) (Å) & \(v\) (\(10^{5}\)m/s) \\ \hline Si & 4.2 & 0.22 & 4.2 \\ Ge & 11.8 & 0.34 & 8.8 \\ Sn & 36.0 & 0.42 & 9.7 \\ Pb & 207.3 & 0.44 & – \\ \hline \end{tabular}
\end{table}
Table 1: Approximate values of model parameters \(\Delta_{\mathrm{so}}\) (spin-orbit coupling), \(l\) (interlattice distance) and \(v\) (Fermi velocity) for two dimensional Si, Ge, Sn and Pb sheets. These data have been obtained from first-principles computations in [33] (\(\Delta_{\mathrm{so}}\) and \(l\)) and [34; 35] (\(v\)).
(below and above \(\lambda_{c}\)) can be found in [44] and in Table 2 (\(\gamma\) can be neglected).
The energy of the two bands is
\[\epsilon_{\pm}(\mathbf{k})=\epsilon_{0}(\mathbf{k})\pm\sqrt{\alpha^{2}\mathbf{k}^{2}+(\mu- \beta\mathbf{k}^{2})^{2}}. \tag{12}\]
The TKNN formula (2) for \(\mathbf{d}_{s}(\mathbf{k})\) provides the Chern number
\[\mathcal{C}_{s}=s[\text{sign}(\mu)+\text{sign}(\beta)], \tag{13}\]
where we have integrated on the whole plane, as corresponds to the continuum limit. According to Table 2, \(\beta\) does not change sing and, therefore, the topological phase transition occurs when \(\mu\) changes sign, as already mentioned. In reference [44], the normal and inverted regimes are equivalently given by the sign of \(\mu/\beta\).
Using again the general prescription (3), the minimal coupling with a perpendicular magnetic field \(B\) now results in
\[H_{+} = \left(\begin{array}{cc}\gamma+\mu-\frac{(\delta+\beta)(2N+1)}{ \epsilon_{B}^{2}}&\frac{\sqrt{2}a}{\ell_{B}}a\\ \frac{\sqrt{2}\alpha}{\ell_{B}}a^{\dagger}&\gamma-\mu-\frac{(\delta-\beta)(2N+ 1)}{\epsilon_{B}^{2}}\end{array}\right),\] \[H_{-} = \left(\begin{array}{cc}\gamma+\mu-\frac{(\delta+\beta)(2N+1)}{ \epsilon_{B}^{2}}&-\frac{\sqrt{2}\alpha}{\ell_{B}}a^{\dagger}\\ -\frac{\sqrt{2}\alpha}{\ell_{B}}a&\gamma-\mu-\frac{(\delta-\beta)(2N+1)}{ \epsilon_{B}^{2}}\end{array}\right),\]
with \(N=a^{\dagger}a\). A Zeeman term contribution
\[H_{s}^{Z}=-\frac{s}{2}B\mu_{\text{B}}\left(g_{\text{e}}\frac{\tau_{0}+\tau_{z} }{2}+g_{\text{h}}\frac{\tau_{0}-\tau_{z}}{2}\right) \tag{15}\]
can also be added to the Hamiltonian, with \(\mu_{\text{B}}\simeq 0.058\) meV/T the Bohr magneton and \(g_{\text{e,h}}\) the effective (out-of-plane) \(g\)-factors for electrons and holes (conduction and valence bands).
Using (Fock state) eigenvectors \(||n|\rangle\) of the (Landau level) number operator \(N=a^{\dagger}a\), one can analytically obtain the eigenspectrum
\[E_{n}^{s}= \,\gamma-\frac{2\delta|n|-s\beta}{\ell_{B}^{2}}-s\frac{g_{\text{ e}}+g_{\text{h}}}{4}B\mu_{\text{B}} \tag{16}\] \[+\text{sgn}(n)\sqrt{\frac{2\alpha^{2}|n|}{\ell_{B}^{2}}+\left( \mu-\frac{2\beta|n|-s\delta}{\ell_{B}^{2}}-s\frac{g_{\text{e}}-g_{\text{h}}}{ 4}B\mu_{\text{B}}\right)^{2}},\]
for LL index \(n=\pm 1,\pm 2,\pm 3,\dots\) [valence (\(-\)) and conduction (\(+\))], and
\[E_{0}^{s}=\gamma-s\mu-\frac{\delta-s\beta}{\ell_{B}^{2}}-B\mu_{\text{B}}\left( \frac{s+1}{4}g_{\text{h}}+\frac{s-1}{4}g_{\text{e}}\right), \tag{17}\]
for the edge states \(n=0\), \(s=\pm 1\). These eigenvalues coincide with those in [45; 46; 17] for the identification \(s=\{-1,1\}=\{\uparrow,\downarrow\}\).
The corresponding eigenvectors are
\[|\mathbf{n}\rangle_{s}=\left(\begin{array}{c}A_{n}^{s}\left||n|-\frac{s+1}{2} \right>\\ B_{n}^{s}\left||n|+\frac{s-1}{2}\right>\end{array}\right), \tag{18}\]
with coefficients
\[A_{n}^{s} = \left\{\begin{array}{ll}\frac{\text{sgn}(n)}{\sqrt{2}}\sqrt{1+ \text{sgn}(n)\cos\vartheta_{n}^{s}},&n\neq 0,\\ (1-s)/2,&n=0,\end{array}\right.\]
\[B_{n}^{s} = \left\{\begin{array}{ll}\frac{s}{\sqrt{2}}\sqrt{1-\text{sgn}(n) \cos\vartheta_{n}^{s}},&n\neq 0,\\ (1+s)/2,&n=0,\end{array}\right.\]
where
\[\vartheta_{n}^{s}=\arctan\left(\frac{\sqrt{2|n|}\,\alpha/\ell_{B}}{\mu-\frac{2 \beta|n|-s\delta}{\ell_{B}^{2}}-s\frac{g_{\text{e}}-g_{\text{h}}}{4}B\mu_{ \text{B}}}\right). \tag{20}\]
As for the graphene analogues in (10), the coefficients \(A_{n}^{s}\) and \(B_{n}^{s}\) can eventually be written as sine and cosine of half angle, depending on \(\text{sgn}(n)\).
According to (17), the band inversion for edge states occurs when
\[E_{0}^{+}=E_{0}^{-}\Rightarrow B_{\text{inv}}=\frac{\mu}{e\beta/\hbar-\mu_{ \text{B}}(g_{\text{e}}+g_{\text{h}})/4}, \tag{21}\]
which gives the critical magnetic field \(B_{c}\) which separates the QSH and QH regimes [46]. For example, for the material parameters in Table 2 corresponding to a QW thickness \(\lambda=7.0\) nm and \(g\)-factors \(g_{\text{e}}=22.7,g_{\text{h}}=-1.21\), one obtains \(B_{\text{inv}}\simeq 7.4\) T. See also Figure 2 for a graphical representation of this band inversion.
From now on we shall discard Zeeman coupling for the sake of convenience since our main conclusions remain qualitatively equivalent. We address the interested reader to the Supplemental Material [47] where we reproduce some results of Reference [17] for non-zero Zeeman coupling and contrast with the zero Zeeman coupling case.
We shall use a linear fit
\[\mu(\lambda) = 77.31\,-12.53\lambda,\] \[\alpha(\lambda) = 467.49-14.65\lambda,\] \[\beta(\lambda) = 283.58-138.16\lambda,\] \[\delta(\lambda) = 458.46-138.25\lambda, \tag{22}\]
of the material parameters in Table 2 as a function of the HgTe layer thickness \(\lambda\) (dimensionless units and \(\lambda\) in nm units). In all cases the coefficient of determination is \(R^{2}>0.99\). Looking at \(\mu(\lambda)\) in (22), we can obtain an estimation of the critical HgTe thickness at which the topological phase transition occurs as
\[\mu=0\Rightarrow\lambda_{\text{c}}=6.17\text{ nm}. \tag{23}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\lambda\)(nm) & \(\alpha\)(meV\(\cdot\)nm) & \(\beta\)(meV\(\cdot\)nm\({}^{2}\)) & \(\delta\)(meV\(\cdot\)nm\({}^{2}\)) & \(\mu\)(meV) \\ \hline
5.5 & 387 & -480 & -306 & 9 \\
6.1 & 378 & -553 & -378 & -0.15 \\
7.0 & 365 & -686 & -512 & -10 \\ \hline \end{tabular}
\end{table}
Table 2: Material parameters for HgTe/CdTe quantum wells with different HgTe layer thicknesses \(\lambda\)[44].
In Figure 2 we plot the low energy spectra given by (16) and (17) as a function of the HgTe layer thickness \(\lambda\), where we have extrapolated the linear fit (22) to the interval [4 nm, 8 nm]. When neglecting Zeeman coupling, the band inversion for edge states (21) occurs for \(B=\hbar\mu/(e\beta)\) which, using the linear fit (22), provides a relation
\[\lambda_{\rm inv}(B)=\frac{368.31-2.05B}{59.7-B} \tag{24}\]
between the applied magnetic field \(B\) (in Tesla) and the HgTe layer thickness \(\lambda_{\rm inv}(B)\) (in nanometers) at which the band inversion \(E_{0}^{+}=E_{0}^{-}\) takes place. Note that \(\lambda_{\rm inv}(B)\simeq\lambda_{c}=6.17\) nm for low \(B\ll 1\) T, and that \(E_{0}^{+}=E_{0}^{-}\simeq 0\) meV at this point as shows Figure 2.
### Phosphorene as an anisotropic material
The physics of phosphorene has been extensively studied [48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. There are several approaches to the low energy Hamiltonian of phosphorene in the literature. Rudenko et _al._[63] and Ezawa [64] propose a four-band and five-neighbors tight-binding model later simplified to two-bands [64]. Several approximations of this two-band model have been used in [65; 66; 67; 13]. We shall choose for our study the Hamiltonian
\[H=\left(\begin{array}{cc}E_{\rm c}+\alpha_{x}k_{x}^{2}+\alpha_{y}k_{y}^{2}& \gamma k_{x}\\ \gamma k_{x}&E_{\rm v}-\beta_{x}k_{x}^{2}-\beta_{y}k_{y}^{2}\end{array}\right). \tag{25}\]
proposed by Zhou and collaborators [13]. This corresponds to a Bloch Hamiltonian (1) with
\[\epsilon_{0}(\mathbf{k}) = \frac{E_{\rm c}+E_{\rm v}+(\alpha_{x}-\beta_{x})k_{x}^{2}+(\alpha _{y}-\beta_{y})k_{y}^{2}}{2}, \tag{26}\] \[\mathbf{d}(\mathbf{k}) = \left(\gamma k_{x},0,\frac{E_{\rm c}-E_{\rm v}+(\alpha_{x}+\beta_ {x})k_{x}^{2}+(\alpha_{y}+\beta_{y})k_{y}^{2}}{2}\right),\]
The Hamiltonian (25) provides a trivial Chern number (2), even in the presence of a tunable perpendicular constant electric field (see below), which means that monolayer phosphorene does not have a topological phase _per se_. It has been shown that topological transitions can be induced in phosphorene when rapidly driven by in-plane time-periodic laser fields [68]; these are called in general "Floquet topological insulators" (see e.g. [69; 70; 71]), but we shall not consider this possibility here. Although phosphorene is not a topological material, we will see in Sec. III.3 that the critical magneto-optical properties (e.g., minimum transmittance) observed for silicene and HgTe QWs are still valid in phosphorene when closing the energy gap through an external electric field. Another possibility to modify the energy gap could be by applying strain [65; 55] (see later in Sec. III.3).
The material parameters of phosphorene can be written in terms of conduction (c) and valence (v) effective masses as (see [13] for more information)
\[\alpha_{x,y}=\frac{\hbar^{2}}{2m_{\rm cx,cy}},\quad\beta_{x,y}=\frac{\hbar^{2 }}{2m_{\rm vx,vy}}, \tag{27}\]
with
\[\begin{split} m_{\rm cx}&=0.793m_{\rm e},\,\,\,m_{ \rm cy}=0.848m_{\rm e},\\ m_{\rm vx}&=1.363m_{\rm e},\,\,m_{\rm vy}=1.142m_{\rm e },\end{split} \tag{28}\]
and \(m_{\rm e}\) is the free electron mass. Conduction and valence band edge energies are \(E_{\rm c}=0.34\) eV and \(E_{\rm v}=-1.18\) eV, so that the energy gap is \(E_{\rm g}=E_{\rm c}-E_{\rm v}=1.52\) eV. The interband coupling parameter is \(\gamma=-0.523\) eV\(\cdot\)nm.
When coupling to an external perpendicular magnetic field, the anisotropic character of phosphorene slightly modifies Peierls' substitution (3), which now adopts the following form
\[k_{x}\rightarrow\frac{P_{x}}{\hbar}=\frac{a^{\dagger}+a}{\sqrt{2}\alpha_{yx} \ell_{B}},\quad k_{y}\rightarrow\frac{P_{y}}{\hbar}=\frac{\alpha_{yx}(a^{ \dagger}-a)}{{\rm i}\sqrt{2}\ell_{B}}, \tag{29}\]
with \(\alpha_{yx}=\left(\frac{m_{\rm cx}}{m_{\rm cx}}\right)^{1/4}\). Therefore, applying this prescription to (25), the final Hamiltonian can be written as
\[\begin{split} H=&\,\hbar\omega_{\gamma}(a+a^{ \dagger})\tau_{x}+\left[E_{\rm c}+\hbar\omega_{\rm c}(a^{\dagger}a+1/2)\right] \frac{\tau_{0}+\tau_{z}}{2}\\ &+\left[E_{\rm v}-\hbar\omega_{\rm v}(a^{\dagger}a+1/2)-\hbar \omega^{\prime}(a^{2}+a^{\dagger 2})\right]\frac{\tau_{0}-\tau_{z}}{2},\end{split} \tag{30}\]
Figure 2: Low-energy spectra \(E_{n}^{s}\) of a HgTe/CdTe quantum well as a function of the HgTe layer thickness \(\lambda\) for \(B=0.5\) T. Landau levels \(n=\pm 1,\pm 2,\pm 3\) [valence \((-)\) and conduction \((+)\)] are represented by thin solid lines, blue for spin \(s=-1\) and red for \(s=1\). Edge states (\(n=0\)) are represented by thick lines. A vertical dashed black line indicates the HgTe thickness \(\lambda_{\rm inv}(0.5)=6.20\) nm\(\simeq\lambda_{c}\) where the band inversion for edge states occurs for \(B=0.5\) T according to (24).
in terms of the annihilation (and creation \(a^{\dagger}\)) operator
\[a=\sqrt{\frac{m_{\mathrm{c}y}\omega_{\mathrm{c}}}{2\hbar}}\left(y-y_{0}+i\frac{ \hat{p}_{y}}{m_{\mathrm{c}y}\omega_{\mathrm{c}}}\right), \tag{31}\]
in analogy to (4), where some effective frequencies have been defined as
\[\begin{split}&\omega_{\mathrm{c}}=\frac{eB}{\sqrt{m_{\mathrm{c}x} m_{\mathrm{c}y}}},\qquad\omega_{\gamma}=\frac{\gamma}{\sqrt{2}\hbar\alpha_{\mathrm{y}x} \ell_{B}},\\ &\omega_{\mathrm{v}}=(r_{x}+r_{y})\omega_{\mathrm{c}},\ \omega^{\prime}=(r_{x}-r_{y})\omega_{\mathrm{c}}/2,\end{split} \tag{32}\]
with
\[r_{x}=\frac{m_{\mathrm{c}x}}{2m_{\mathrm{v}x}},\,r_{y}=\frac{m_{\mathrm{c}y}}{ 2m_{\mathrm{v}y}}.\]
As we did for silicene, we shall also consider here the application of a perpendicular electric field to the phosphorene sheet in the usual form [72]\(\tilde{H}_{\Delta}=\Delta_{z}\tau_{z}\), with \(\Delta_{z}\) the on-site electric potential. Unlike for silicene and HgTe QWs, the diagonalization of the phosphorene Hamiltonian (30) has to be done numerically [25].
Note that the Hamiltonian (30) preserves the parity \(\pi(n,s)=e^{i\pi n_{s}}\) of the state \(|\mathbf{n}\rangle_{s}\), with \(n_{s}=n+(s+1)/2\) (see e.g. [25]). This means that the matrix elements \({}_{s}\langle\mathbf{n}|H|\mathbf{n}^{\prime}\rangle_{s^{\prime}}\propto\delta_{\pi(n,s),\pi(n^{\prime},s^{\prime})}\) are zero between states of different parity. Therefore, this parity symmetry helps in the diagonalization process and any (non-degenerate) eigenstate of \(H\) has a definite parity. The Hamiltonian eigenstates can now be written as
\[|\psi_{l}\rangle=\sum_{n,s}c_{n,s}^{(l)}|\mathbf{n}\rangle_{s}, \tag{33}\]
where \(l\in\mathbb{Z}\) denotes the LL index (\(l>0\) for conduction and \(l\leq 0\) for valence band). The sum \(\sum_{n,s}\) is constrained to \(\pi(n,s)=\pm 1\), depending on the even (\(+\)) and odd (\(-\)) parity of \(k\). The coefficients \(c_{n,s}^{(k)}\) are obtained by numerical diagonalization of the Hamiltonian matrix, which is truncated to \(n\leq N\), with \(N\) large enough to achieve convergent results for given values of the magnetic and electric fields. In particular, we have used Fock states with \(N\leq 1000\) to achieve convergence (with error tolerance \(\leq 10^{-15}\) eV) for \(B=0.5\) T in the six first Hamiltonian eigenvalues in the range \(-1.55\leq\Delta_{z}\leq-1.49\) eV. The resulting spectrum, as a function of the electric field potential \(\Delta_{z}\), can be seen in Figure 3 for a magnetic field of \(B=0.5\) T (higher magnetic fields need less Fock states to achieve convergence). The vertical dashed line gives the point \(\Delta_{z}=-1.520\) eV at which the electric potential equals minus the energy gap \(E_{\mathrm{g}}=E_{\mathrm{c}}-E_{\mathrm{v}}=1.52\) eV of phosphorene. This is not really a critical point in the same sense as \(\Delta_{z}^{(0)}=\Delta_{\mathrm{so}}=4.2\) meV for silicene and \(\lambda_{c}=6.17\) nm for HgTe QWs, since phosphorene as such (as already said) does not display a topological phase. However, we will see in Section III.3 that the phosphorene transmittance still presents a minimum at \(\Delta_{\mathrm{z}}^{(0)}=-1.523\) eV, which closes the energy gap \(E_{\mathrm{g}}=1.52\) eV at low magnetic fields.
It is also interesting to note that the LLs of phosphorene are degenerated in pairs for an electric potential below \(\Delta_{z}\simeq-1.53\) eV. Namely, we obtain numerically that \(|E_{I}^{\mathrm{even}}-E_{I+1}^{\mathrm{odd}}|\leq 10^{-4}\) eV for all \(\Delta_{z}<-1.53\) eV and \(l=-6,-4,-2,0,2,4\) as it shows the left hand side of Figure 3. This energy degeneracy will influence the conductivity as well.
## III Magneto-optical conductivity
The magneto-optical conductivity tensor \(\mathbf{\sigma}\) of a 2D electron system in a perpendicular magnetic field \(B\) and an oscillating electric field of frequency \(\Omega\), can be obtained from Kubo-Greenwood formula [73; 74; 27] in the Landau-level representation:
\[\sigma_{ij}(\Omega,B)=\frac{\mathrm{i}\hbar}{2\pi\ell_{B}^{2}}\sum_{\mathbf{n}, \mathbf{m}}\frac{f_{m}-f_{n}}{E_{n}-E_{m}}\frac{\langle\mathbf{m}|j_{i}|\mathbf{n}\rangle \langle\mathbf{n}|j_{j}|\mathbf{m}>}{\hbar\Omega+E_{m}-E_{n}+i\eta}, \tag{34}\]
where
\[\mathbf{j}=\frac{\mathrm{i}e}{\hbar}[H,\mathbf{r}]=\frac{e}{\hbar}\nabla_{\mathbf{k}}H \tag{35}\]
is the current operator, with \(\mathbf{r}=(x,y)\) and \(\nabla_{\mathbf{k}}=(\partial_{k_{x}},\partial_{k_{y}})\) [the minimal coupling prescription (3) is understood under external electromagnetic fields], and
Figure 3: Low energy spectra \(E_{l}\) of phosphorene as function of the electric field potential \(\Delta_{z}\) for thirteen Hamiltonian eigenstates \(l=-6,\dots,0,\dots,6\) and a magnetic field \(B=0.5\) T. Valence and conduction band states of even (odd) parity \(l=\pm 2,\pm 4,\pm 6\) (\(n=\pm 1,\pm 3,\pm 5\)) are represented in red (blue) color. The edge state \(E_{0}^{\mathrm{even}}\) is represented by a thick black line. The vertical dashed black line is the point \(\Delta_{z}=-E_{\mathrm{g}}=-1.520\) eV at which the electric potential equals the energy gap of phosphorene.
\(1/(1+\exp[(E_{n}-\mu_{\rm F})/(k_{B}T)])\) is the Fermi distribution function at temperature \(T\) and chemical potential \(\mu_{\rm F}\). In the zero temperature limit, the Fermi function \(f_{n}\) is replaced by the Heaviside step function \(\Theta(\mu_{\rm F}-E_{n})\), which enforces the Pauli exclusion principle for optical transitions (they are allowed between occupied and unoccupied states). The parameter \(\eta\) is a small residual scattering rate of charge carriers and, although the exact shape of \(\sigma_{ij}\) would depend on the details of the scattering mechanisms, using a constant \(\eta\) gives a good, qualitative description of the essential mechanisms relevant for magneto-optical experiments. In \(\sum_{\mathbf{n}}\) of eq. (34) it is also implicit the sum over spin \(s\) and valley \(\xi\), besides the LL index \(n\) (for graphene, there is a twofold spin and valley degeneracy, so that the extra sum just contributes with a degeneracy factor \(g=4\)). We shall measure \(\sigma_{ij}\) in units of the conductance quantum \(\sigma_{0}=e^{2}/h=38.8\)\(\mu\)S [73] and renormalize the currents as \(\tilde{j}=j/(e/\hbar)=\nabla_{\mathbf{k}}H\), so that
\[\frac{\sigma_{ij}(\Omega,B)}{\sigma_{0}}=\frac{\rm i}{\ell_{B}^{2}}\sum_{\bm {n},\mathbf{m}}\frac{f_{m}-f_{n}\ \langle\mathbf{m}|\tilde{j}_{i}|\mathbf{n}\rangle\langle\mathbf{n}|\tilde{j}_{j}|\mathbf{m}>} {\hbar\Omega+E_{m}-E_{n}+i\eta}, \tag{36}\]
We shall analyze the transmittance and Faraday rotation of linearly polarized light of frequency \(\Omega\) for normal incidence on the 2D material, where the electric fields of incident (\(\mathbf{E}^{i}\)) and transmitted (\(\mathbf{E}^{t}\)) waves are related through the conductivity tensor \(\mathbf{\sigma}\) by the formula [75, 76, 77]
\[\mathbf{E}^{t}=\left(\mathbf{I}+\tfrac{1}{2}Z_{0}\mathbf{\sigma}\right)^{-1}\cdot\mathbf{E}^{ i}, \tag{37}\]
where \(Z_{0}=2\alpha/\sigma_{0}\) is the vacuum impedance (\(\alpha=1/137\) is the fine-structure constant) and \(\mathbf{I}\) denotes the \(2\times 2\) identity matrix. We also assume that the incident field is linearly polarized in the \(x\) axis, that is \(\mathbf{E}^{i}=(E_{x}^{i},0)\). From here, the transmittance \(\mathcal{T}\) and the Faraday rotation angle \(\Theta_{\rm F}\) (in degrees) are [77, 2]
\[\mathcal{T}= \,\frac{1}{2}(|t_{+}|^{2}+|t_{-}|^{2})\simeq 1-Z_{0}{\rm Re}( \sigma_{xx})\,, \tag{38}\] \[\Theta_{\rm F}= \,\frac{1}{2}(\arg(t_{+})+\arg(t_{-}))\simeq\frac{180}{2\pi}Z_{0} {\rm Re}(\sigma_{xy})\,, \tag{39}\]
where \(t_{\pm}=E_{\pm}^{t}/|\mathbf{E}^{i}|\) are the transmission amplitudes in the circular polarization basis [78, 79] or chiral basis [80], \(\mathbf{E}_{\pm}^{t}=E_{x}^{t}\pm{\rm i}E_{y}^{t}\). Re(\(\sigma_{ij}\)) means the real part of \(\sigma_{ij}\) and \(\arg(t_{\pm})\) the complex argument. We have also provided the approximate expressions in the limit of weak absorption for isotropic materials. Note that, in this case, according to (39), the absorption peaks of Re[\(\sigma_{xx}(\Omega)\)] shown in Figure 6, correspond to dips of the transmittance \(\mathcal{T}\). Silicene and HgTe QWs have both longitudinal conductivities equal \(\sigma_{xx}=\sigma_{yy}\), but this symmetry is broken for anisotropic materials like phosphorene [10, 80] (see later on Section III.3). Therefore, in phosphorene, we cannot apply the approximation in eq.(39) and we have to use the strict equality.
In the circular polarization (right- and left-handed \(\pm\)) basis, the conductivity is given by \(\sigma_{\pm}=\sigma_{xx}\pm{\rm i}\sigma_{xy}\), and the absorptive part is therefore Re(\(\sigma_{\pm})=\) Re(\(\sigma_{xx})\mp\) Im(\(\sigma_{xy}\)). In the Supplemental Material [47] we provide extra plots for the silicene conductivity under circular polarization which reproduce the results of [9].
### Magneto-optical properties of graphene analogues
The current operator (35) for this case is \(\mathbf{j}=(j_{x},j_{y})=ev(\xi\tau_{x},\tau_{y})\). The matrix elements
\[\langle\mathbf{m}|\tau_{x}|\mathbf{n}\rangle_{s\xi} =A_{m}^{s\xi}B_{n}^{s\xi}\delta_{|m|-\xi,|n|}+A_{n}^{s\xi}B_{m}^{s \xi}\delta_{|m|+\xi,|n|}, \tag{40}\] \[\langle\mathbf{m}|\tau_{y}|\mathbf{n}\rangle_{s\xi} =-{\rm i}A_{m}^{s\xi}B_{n}^{s\xi}\delta_{|m|-\xi,|n|}+{\rm i}A_{n}^ {s\xi}B_{m}^{s\xi}\delta_{|m|+\xi,|n|},\]
provide the familiar selection rules \(|n|=|m|\pm 1\) for LL transitions. Plugging (40) into the general expression (36) we obtain the magneto-optical conductivity for graphene analogues. In Figure 4 we plot the real and imaginary parts of the conductivity tensor components \(\sigma_{ij}\) (in \(\sigma_{0}=e^{2}/h\) units) of silicene as a function of the polarized light frequency \(\Omega\) at three different electric potentials \(\Delta_{z}=0.5\Delta_{\rm so},\Delta_{\rm so},1.5\Delta_{\rm so}\) around the critical point \(\Delta_{z}^{(0)}=\Delta_{\rm so}\), for a magnetic field \(B=0.05\) T and some representative values of the chemical potential \(\mu_{\rm F}=2.1\) meV, temperature \(T=1\) K and scattering rate \(\eta=0.1\) meV. For \(\hbar\Omega\in[0,20]\) meV, we achieve convergence with 100 LLs, that is, restricting the sum in (36) as \(\sum_{n=-\infty}^{\infty}\to\sum_{n=-100}^{100}\). More explicitly, for the parameters mentioned above,
\[\left|\sum_{n=-100}^{n=100}\sigma_{ij}-\sum_{n=-99}^{n=99}\sigma_{ij}\right|/ \sigma_{0}\leq\begin{cases}10^{-5}&\text{if }\sigma_{ij}=\text{Re}(\sigma_{xx})\,,\\ 10^{-15}&\text{if }\sigma_{ij}=\text{Re}(\sigma_{xy})\,,\\ 10^{-3}&\text{if }\sigma_{ij}=\text{Im}(\sigma_{xx})\,,\\ 10^{-14}&\text{if }\sigma_{ij}=\text{Im}(\sigma_{xy})\,.\end{cases} \tag{41}\]
Each peak on the plot of the conductivity Re(\(\sigma_{xx}\)) against \(\hbar\Omega\) represents an electron transition between two LLs \(n,m\) connected by the selection rules \(|n|=|m|\pm 1\) and generally arranged above and below the Fermi level \(\mu_{\rm F}\); this latter constrain comes from the Fermi functions factor (\(f_{m}-f_{n}\)) of the Kubo formula (34), which becomes a step function at low temperatures. For more information, see the Supplemental Material [47] where we illustrate these electron transitions by arrows in the energy spectrum in an animated gif. The value of \(\hbar\Omega\) where a peak of the conductivity occurs coincides with the energy difference (\(E_{n}-E_{m}\)) of the LL transition \(n\to m\). This is clear by looking at the denominator of the Kubo formula. For example, the two main peaks of Re(\(\sigma_{xx}\)) at low frequencies \(\hbar\Omega\in[2,6]\) meV in Figure 4 correspond to the transitions \(0\to 1\) for spin and valley \(s=\xi=1\) and \(s=\xi=-1\) (purple and green arrows in the animated gif of [47]). The other conductivity peaks located at higher frequencies correspond to electron transitions between higher LLs and different spin/valley combinations according to (40). When the external electric field
\(\Delta_{z}\) is such that the energy differences of the two main peaks are the same, that is, when \(E_{1}^{++}-E_{0}^{++}\) is equal to \(E_{1}^{--}-E_{0}^{--}\), both peaks merge into a bigger one. Using the silicene spectrum energy equation (7), we find that this condition is fulfilled at the critical point \(\Delta_{z}=\Delta_{\rm so}\) for any value of the magnetic field \(B\). This result implies that we can extract information of the TPT occurring at \(\Delta_{z}^{(0)}=\Delta_{\rm so}\) by looking at the conductivity \({\rm Re}(\sigma_{xx})\) plot for different values of \(\Delta_{z}\).
To be more specific, in Figure 5 we represent the behavior of the two observables given in (39), that is, the Faraday angle \(\Theta_{\rm F}\) and the transmittance \(\mathcal{T}\), as a function of the polarized light frequency \(\Omega\) around the critical point \(\Delta_{z}^{(0)}=\Delta_{\rm so}=4.2\) meV. We focus on the frequency interval \(\hbar\Omega\in[2,6]\) meV where the main peaks (transition \(0\to 1\)) in Figure 4 are located. We find an absolute minimum of the transmittance \(\mathcal{T}_{0}=0.704\) at the critical point \(\Delta_{z}^{(0)}=\Delta_{\rm so}\) and \(\hbar\Omega=4.06\) meV. This "minimal" behavior does not depend on the particular values of magnetic field, chemical potential and temperature, which only change the actual value of \(\mathcal{T}_{0}\) and \(\hbar\Omega\) of the peak. Actually, the minimum peaks in the transmittance plot are related to the maximum peaks of the absortance \({\rm Re}(\sigma_{xx})\), according to equation (39). The Faraday angle at the critical point (black curve in Figure 5) changes sign at the minimum transmittance point \(\hbar\Omega=4.06\) meV, a behavior that can also be extrapolated to other 2D materials (see later for HgTe QWs and phosphorene). In fact, each peak of the transmittance in Figure 5 coincides in frequency with an inflection point of the Faraday angle, where it attains a value of 0 degrees.
Changing the chemical potential \(\mu_{\rm F}\) locks/unlocks other electronic transitions, so we would see different peaks in the conductivity and transmittance plots (see e.g., [10]). Increasing the scattering rate \(\eta\) smoothes the peaks in the transmittance, so it would be more difficult to distinguish when they overlap. We have choosen values of \(\eta\) approximately an order of magnitude below the frequency of the conductivity peaks, for which the resolution is fine.
Figure 5: Transmittance \(\mathcal{T}\) and Faraday angle \(\Theta_{\rm F}\) (in degrees) in a silicene monolayer as a function of the incident polarized light frequency \(\Omega\), and for different electric fields below and above the critical (black line) electric field \(\Delta_{z}^{(0)}=\Delta_{\rm so}=4.2\) meV. \(\mathcal{T}\) and \(\Theta_{\rm F}\) are symmetric about \(\Delta_{z}^{(0)}\). We set the conductivity parameters as \(\mu_{\rm F}=2.1\) meV, \(B=0.05\) T, \(T=1\) K and \(\eta=0.1\) meV.
For completeness, in the Supplemental Material [47] we show several contour plots of the Faraday angle using different cross sections in the \(\{\hbar\Omega,\Delta_{z},B,T,\mu_{\rm F}\}\) parameter space.
### Magneto-optical properties of zincblende heterostructures
From the Hamiltonian (10), the current operator (35) for zincblende heterostructures is
\[j_{x}^{s} = \frac{e}{\hbar}\left(s\alpha\tau_{x}-2k_{x}(\beta\tau_{z}+\delta \tau_{0})\right),\] \[j_{y}^{s} = \frac{e}{\hbar}\left(\alpha\tau_{y}-2k_{y}(\beta\tau_{z}+\delta \tau_{0})\right), \tag{42}\]
which, after minimal coupling according to the general prescription (3), results in
\[j_{x}^{s} = \frac{e}{\hbar}\left(s\alpha\tau_{x}-\sqrt{2}\frac{a^{\dagger}+a} {\ell_{B}}(\beta\tau_{z}+\delta\tau_{0})\right),\] \[j_{y}^{s} = \frac{e}{\hbar}\left(\alpha\tau_{y}+\mathrm{i}\sqrt{2}\frac{a^{ \dagger}-a}{\ell_{B}}(\beta\tau_{z}+\delta\tau_{0})\right). \tag{43}\]
Note that, in fact, \(j_{y}^{s}\) does not depend on \(s\). The current matrix elements for this case are
\[\langle\mathbf{m}|j_{x}^{s}|\mathbf{n}\rangle_{s} = \frac{es\alpha}{\hbar}\Xi_{m,n}^{s,+}-\frac{\sqrt{2}e}{\hbar \ell_{B}}\Phi_{m,n}^{s,+}\,,\] \[\langle\mathbf{m}|j_{y}^{s}|\mathbf{n}\rangle_{s} = -\mathrm{i}\frac{e\alpha}{\hbar}\Xi_{m,n}^{s,-}+\mathrm{i}\frac{ \sqrt{2}e}{\hbar\ell_{B}}\Phi_{m,n}^{s,-}\,, \tag{44}\]
where
\[\Xi_{m,n}^{s,\pm} = (A_{m}^{s}B_{n}^{s}\delta_{|m|-s,|n|}\pm A_{n}^{s}B_{n}^{s}\delta_ {|m|+s,|n|})\,, \tag{45}\] \[\Phi_{m,n}^{s,\pm} = ((\delta+\beta)A_{m}^{s}A_{n}^{s}+(\delta-\beta)B_{m}^{s}B_{n}^{s})\] \[\times \left(\sqrt{|n|+1+\frac{s-1}{2}\delta_{|m|-1,|n|}\pm\sqrt{|n|- \frac{s+1}{2}\delta_{|m|+1,|n|}}}\right).\]
Despite the more involved structure of the current than for silicene, the corresponding matrix elements maintain the same familiar selection rules \(|n|=|m|\pm 1\) for LL transitions.
Inserting the matrix elements (40) into the general expression (36) we obtain the magneto-optical conductivity for general zincblende heterostructures. In Figure 6 we plot the real and imaginary parts of the conductivity tensor components \(\sigma_{ij}\) (in \(\sigma_{0}=e^{2}/h\) units) of a HgTe QW as a function of the polarized light frequency \(\Omega\) at three different HgTe layer thicknesses \(\lambda=5.50\,\mathrm{nm}<\lambda_{c}\), \(\lambda=6.17\,\mathrm{nm}=\lambda_{c}\), and \(\lambda=7.00\,\mathrm{nm}>\lambda_{c}\), a magnetic field \(B=0.5\) T and some representative values of the chemical potential \(\mu_{\rm F}=12.5\) meV, temperature \(T=1\) K and scattering rate \(\eta=0.5\) meV. For \(\hbar\Omega\in[0,60]\) meV, we achieve convergence with 100 LLs, that is, restricting the sum in (36) as \(\sum_{n=-\infty}^{\infty}\to\sum_{n=-100}^{100}\). More explicitly, for the parameters mentioned above,
\[\left|\sum_{n=-100}^{n=100}\sigma_{ij}-\sum_{n=-99}^{n=99}\sigma_{ij}\right|/ \sigma_{0}\leq\begin{cases}10^{-5}&\text{if }\sigma_{ij}=\text{Re}(\sigma_{xx})\,,\\ 10^{-4}&\text{if }\sigma_{ij}=\text{Re}(\sigma_{xy})\,,\\ 10^{-3}&\text{if }\sigma_{ij}=\text{Im}(\sigma_{xx})\,,\\ 10^{-7}&\text{if }\sigma_{ij}=\text{Im}(\sigma_{xy})\,.\end{cases} \tag{46}\]
Similar to silicene, we can see in Figure 6 that there are multiple peaks in the absorptive components \(\text{Re}(\sigma_{xx})\) and \(\text{Im}(\sigma_{xy})\), corresponding to transitions between occupied and unoccupied LLs obeying the selection rules \(|n|=|m|\pm 1\). At lower frequencies \(\hbar\Omega\in[0,30]\) meV, inside each curve of Figure 6, we find the main peaks corresponding to the transitions \(0\to 1\) for spin \(s=1\) and \(s=-1\). Both peaks merge approximately at \(\lambda\simeq\lambda_{c}=6.17\) nm. This is because the energy differences \(E_{1}^{+}-E_{0}^{+}\) and \(E_{1}^{-}-E_{0}^{-}\) are similar when \(\lambda\simeq\lambda_{c}\) for low magnetic fields \(B\ll 1\) T, according to equations (16,17). In order to extend this
Figure 6: Real and imaginary parts of the longitudinal \(\sigma_{xx}\) and transverse Hall \(\sigma_{xy}\) magneto-optical conductivities in a bulk HgTe QW of thickness \(\lambda=5.50,6.17,7.00\) nm, as a function of the polarized light frequency \(\Omega\) and in \(\sigma_{0}=e^{2}/h\) units. We set the conductivity parameters as \(\mu_{\rm F}=12.5\) meV, \(B=0.5\) T, \(T=1\) K and \(\eta=0.5\) meV.
result to higher values of the magnetic field, we insert the parameter fits (22) into the equation \(E_{1}^{+}-E_{0}^{+}=E_{1}^{-}-E_{0}^{-}\), and solve it numerically for \(\lambda^{*}=\lambda^{*}(B)\), obtaining the values represented by blue dots in Figure 7. These values fit the equation
\[\lambda^{*}_{\rm fit}(B)=\frac{218.4-17.3B}{35.4-2.8B}\,, \tag{47}\]
which is represented as an orange curve in Figure 7. Consequently, only for small magnetic fields, we can infer the critical thickness \(\lambda_{c}\) where the TPT in HgTe QW occurs from the conductivity \({\rm Re}(\sigma_{xx})\) plot, that is, \(\lambda^{*}\simeq\lambda_{c}=6.17\) nm for \(B\ll 1\) T.
The behavior of the Faraday angle and the transmittance as a function of the polarized light frequency \(\Omega\) around the critical HgTe layer thickness \(\lambda_{c}=6.17\) nm (at which the material parameter \(\mu\) changes sign/Chern number) is shown in Figure 8. As for silicene, we focus on the lower frequencies \(\hbar\Omega\in[0,30]\) meV where the main peaks are located, and find again a minimum of the transmittance, this time \(\mathcal{T}_{0}=0.78\), at the critical point \(\lambda_{c}\) and \(\hbar\Omega=15.0\) meV. For this material, the "minimal" behavior does depend on the particular values of magnetic field, as we saw in equation (47). However, for small magnetic fields like \(B=0.5\) T in Figure 8, the minimum of the transmittance still takes place at \(\lambda^{*}\simeq\lambda_{c}=6.17\) nm. The Faraday angle at the critical point (black curve in Figure 8) changes sign at the minimum transmittance frequency \(\hbar\Omega=15.0\) meV, a behavior shared with silicene.
For completeness, in the Supplemental Material [47] we show several contour plots of the Faraday angle using different cross sections in the \(\{\hbar\Omega,\lambda,B,T,\mu_{\rm F}\}\) parameter space.
### Magneto-optical properties of phosphorene and effect of anisotropies
From the phosphorene Hamiltonian (25), the current operator (35) is
\[j^{s}_{x} = \frac{e}{\hbar}\left(\gamma\tau_{x}+k_{x}(\tau_{0}(\alpha_{x}- \beta_{x})+\tau_{z}(\alpha_{x}+\beta_{x}))\right),\] \[j^{s}_{y} = \frac{e}{\hbar}k_{y}\left(\tau_{0}(\alpha_{y}-\beta_{y})+\tau_{z }(\alpha_{y}+\beta_{y})\right), \tag{48}\]
which, after minimal coupling, according to prescription (29), results in
\[j^{s}_{x} = \frac{e}{\hbar}\left(\gamma\tau_{x}+\frac{a^{\dagger}+a}{\sqrt{2 }\alpha_{yx}\ell_{B}}(\tau_{0}(\alpha_{x}-\beta_{x})+\tau_{z}(\alpha_{x}+ \beta_{x}))\right),\] \[j^{s}_{y} = \frac{e}{\hbar}\frac{\alpha_{yx}(a^{\dagger}-a)}{{\rm i}\sqrt{2 }\ell_{B}}\left(\tau_{0}(\alpha_{y}-\beta_{y})+\tau_{z}(\alpha_{y}+\beta_{y}) \right). \tag{49}\]
Plugging these matrix elements into the general expression (36) we obtain the magneto-optical conductivity for
Figure 7: Numerical solutions \(\lambda^{*}\) (in nm, blue dots) of the equation \(E_{1}^{+}-E_{0}^{+}=E_{1}^{-}-E_{0}^{-}\) (energies (16,17) of HgTe QW) for 50 different values of the external magnetic field \(B\). In orange, non-linear fit (47) of the numerical values.
phosphorene. Note that, unlike silicene and HgTe QW, there is now a large asymmetry between \(\sigma_{xx}\) and \(\sigma_{yy}\) (about one order of magnitude difference), as evidenced by Figure 9. This asymmetry was already highlighted by [66], where tunable optical properties of multilayer black phosphorus thin films were studied for \(B=0\). In Figure 9 we plot the real and imaginary parts of the conductivity tensor components \(\sigma_{ij}\) (in \(\sigma_{0}=e^{2}/h\) units) of phosphorene as a function of the polarized light frequency \(\Omega\), for some values of the electric potential around \(\Delta_{z}^{(0)}=-E_{\rm g}=-1.52\) eV (closing the energy gap), a magnetic field of \(B=0.5\) T, like in Figure 3, and some representative values of the chemical potential \(\mu_{\rm F}=-0.417\) eV, temperature \(T=1\) K and scattering rate \(\eta=0.2\) meV. We are using the same threshold of \(N=1000\) Fock states that we used to find convergence in the first 6 Hamiltonian eigenstates of the numerical diagonalization in Figure 3. This convergence is ensured for \(\hbar\Omega\in[0,20]\) meV. The anisotropic character of phosphorene also implies that the current \(j_{y}^{s}\) is significantly lower than \(j_{x}^{s}\) [the Hamiltonian (25) is of second order in \(k_{y}\)]. This makes transversal components of the conductivity significantly lower than longitudinal components. This is why we have disposed Figure 9 in a slightly different manner from Figures 4 for silicene and 6 for HgTe QW, which display a more isotropic structure.
Due to the parity symmetry of the Hamiltonian (30), only the electronic transitions between LLs of different parities are allowed [25]. The main peak (smaller frequency) of the conductivity \({\rm Re}(\sigma_{xx})\) in Figure 9 corresponds to the electronic transitions \(E_{0}^{\rm even}\to E_{3}^{\rm odd}\) and \(E_{1}^{\rm odd}\to E_{2}^{\rm even}\), which have approximately the same energy difference for all \(\Delta_{z}<-1.53\) eV with a tolerance \(\leq 10^{-14}\) eV. That is, \(E_{0}^{\rm even}\) and \(E_{1}^{\rm odd}\), and \(E_{2}^{\rm even}\) and \(E_{3}^{\rm odd}\), are degenerate for all \(\Delta_{z}<-1.53\) eV as the spectrum in Figure 3 shows. When the degeneration is broken around the electric potential \(\Delta_{z}\simeq-1.53\) eV, the main conductivity \({\rm Re}(\sigma_{xx})\) peak splits into two as we can see in Figure 9.
The anisotropic character of phosphorene also affects the Faraday angle, which attains much lower values (in absolute value) than for silicene or HgTe QWs. Indeed, in Figure 10 we plot Faraday angle and transmittance as a function of the polarized light frequency \(\Omega\) for different electric field potentials \(-1.535\leq\Delta_{z}\leq-1.519\) eV. Like for silicene and HgTe QWs, we find a minimal behavior in the transmittance of phosphorene \(\mathcal{T}_{0}=0.50\) for a polarized light frequency \(\hbar\Omega=2.6\) meV at electric field potential \(\Delta_{z}^{(0)}=-1.523\) eV, which is close to minus the energy gap \(-E_{\rm g}=-1.52\) eV. Note that this value of the minimal transmittance of phosphorene is much smaller than for silicene and HgTe QWs; actually, the assumption of low absorbance in formula (39) is no longer valid here and we have used the exact expressions for \(\mathcal{T}\) and \(\Theta_{\rm F}\) in (39). Moreover, unlike for graphene analogues and HgTe QWs, this minimum of the transmittance does not seem to be related to the union of two conductivity peaks into a bigger one; rather, it is simply related to the energy gap closure. Actually, the critical electric potential \(\Delta_{z}^{(0)}\) where the transmittance of phosphorene reaches a minimum depends on the magnetic field \(B\) chosen, as Figure 11 shows. We perform a non-linear fit of the numerical values of \(\Delta_{z}^{(0)}(B)\) and obtain the equation (\(B\) in dimensionless units)
\[\left(\Delta_{z}^{(0)}\right)_{\rm fit}(B)=\frac{-77.4-3.5B}{50.9+2.2B}\,{\rm eV}\,, \tag{50}\]
which is represented as a orange curve in Figure 11. For small magnetic fields, we can deduce that the critical electric field potential is similar to minus the energy gap \(-E_{\rm g}\) of phosphorene, that is \(\Delta_{z}^{(0)}(B)\simeq-E_{\rm g}=-1.52\) eV for \(B\ll 1\) T. We have also checked numerically that the critical electric potentials \(\Delta_{z}^{(0)}(B)\) are independent
Figure 9: Real and imaginary parts of the longitudinal \(\sigma_{xx},\sigma_{yy}\) and transverse Hall \(\sigma_{xy}\) magneto-optical conductivities in a phosphorene monolayer, as a function of the polarized light frequency \(\Omega\) and in \(\sigma_{0}=e^{2}/h\) units. Phosphorene is under a perpendicular electric field potential \(\Delta_{z}^{(0)}=-E_{\rm g}=-1.52\) eV closing the energy gap in Figure 3. The \(y\)-axis ticks have different values in each subplot as the conductivities \(\sigma_{xy}\) and \(\sigma_{yy}\) attain smaller values than \(\sigma_{xx}\) (phosphorene anisotropy). We set the conductivity parameters as \(\mu_{\rm F}=-0.417\) meV, \(B=0.5\) T, \(T=1\) K and \(\eta=0.2\) meV.
of the parameters \(\mu_{\rm F}\) and \(\eta\) for a fixed magnetic field \(B\). However, we set different values of \(\mu_{\rm F}\) for small fields \(B\leq 2\) T (see caption of Figure 11), in order to avoid blocking the electric transition \(E_{1}^{\rm odd}\to E_{2}^{\rm even}\) of the main peak of the transmittance. We also increment \(N\) as \(B\) decreases in order to achieve convergence in the diagonalization.
Additionally, Figure 10 shows how one peak of the transmittance splits into two around \(\Delta_{z}\simeq-1.53\) eV (blue lines), since the LL \(E_{0}^{\rm even}\) breaks its degeneration approximately for \(\Delta_{z}>-1.53\) eV (see Figure 3). For \(\Delta_{z}=\Delta_{z}^{(0)}=-1.523\) eV (thick black line), the big peak on the left in Figure 10 corresponds to the electronic transition \(E_{1}^{\rm odd}\to E_{2}^{\rm even}\), and moves toward smaller values of \(\hbar\Omega\) when increasing \(\Delta_{z}\). The other small peak in the black line corresponds to the electronic transition \(E_{0}^{\rm even}\to E_{3}^{\rm odd}\), which moves toward bigger values of \(\hbar\Omega\) when increasing \(\Delta_{z}\). The Faraday angle also presents inflection points at the frequencies where the peaks of the transmittance are located. However, close to the degeneration point \(\Delta_{z}\simeq-1.53\) eV (light blue line), the Faraday angle displays a maximum around the same frequency \(\hbar\Omega\simeq 4.1\) meV than the peak of the transmittance at the same value of \(\Delta_{z}\simeq-1.53\) eV, which is a phenomenon previously unseen in the other 2D materials studied in this article.
Therefore, we see that anisotropies affect the values of the Faraday angle and transmittance. There are mechanical ways of introducing anisotropies in 2D materials by subjecting them to strain (like for strained [81] or rippled [82] graphene). This kind of anisotropies can be treated by replacing the scalar Fermi velocity \(v\) by a \(2\times 2\) symmetric tensor \(\mathbf{v}\) (see e.g. [77]). Namely, for graphene, the Hamiltonian (1) vector \(\mathbf{d}\) components \(d_{j}=\hbar vk_{j}\) are replaced by \(d_{j}=\hbar k_{i}v_{ij},i=1,2,d_{3}=0\). Actually, for uniformly strained graphene with strain tensor \(\mathbf{\varepsilon}\), the Fermi velocity tensor is (up to first order) \(\mathbf{v}=v(\tau_{0}-\beta\mathbf{\varepsilon})\) (see e.g. [83, 77]), where \(\beta\sim 2\). The relation between the isotropic \(\mathbf{\sigma}^{0}\) and the anisotropic \(\mathbf{\sigma}\) magneto-optical conductivity tensors is simply \(\mathbf{\sigma}(\Omega,B)=\mathbf{v}\mathbf{\sigma}^{0}(\Omega,\mathcal{B})\mathbf{v}/\det( \mathbf{v})\), with \(\mathcal{B}=B\det(\mathbf{v})/v^{2}\) an effective magnetic field. Interesting discussions on how measurements of dichroism and transparency for two different light polarization directions can be used to determine the magnitude and direction of strain can be found in [76]. Also, photoelastic effects in graphene [81], strain-modulated anisotropies in silicene [84, 85], etc. The band gap \(E_{\rm g}=E_{\rm v}-E_{\rm c}\) of phosphorene can be furthermore modulated by strain and by the number of layers in a stack [65, 55].
## IV Conclusions
We have studied magneto-optical properties of different 2D materials, focusing on transmittance and Faraday rotation near the critical point of the topological phase transition for topological insulators like silicene and HgTe quantum wells. We have seen that, in all topological 2D materials analyzed, transmittance attains an absolute minimum \(\mathcal{T}_{0}\) at the critical TPT point for a certain value \(\Omega_{0}\) of the normal incident polarized light frequency. This is a universal behavior for graphene analogues, that is, the minimal behavior of the transmittance does not depend on the chosen values of magnetic field, chemical potential and temperature, although the location of \(\Omega_{0}\) varies with them. In addition, we have found that each peak of the transmittance coincides in frequency with an inflection point of the Faraday angle, for a fixed selection of the electric field, magnetic field, chemical potential and temperature parameters.
This extremal universal behavior is shared with other topological 2D materials like HgTe quantum wells as long as the applied magnetic field remains small enough \(B\ll 1\) T. In HgTe quantum wells we have verified that there is a minimum of the transmittance \(\mathcal{T}_{0}\) at the critical HgTe layer thickness at a given frequency \(\Omega_{0}^{\prime}\) (for this material this minimal behavior depends on the magnetic field) and the Faraday angle at the critical point changes sign at the minimum transmittance frequency \(\Omega_{0}^{\prime}\).
Figure 10: Transmittance \(\mathcal{T}\) and Faraday angle \(\Theta_{\rm F}\) (in degrees) in a phosphorene monolayer as a function of the polarized light frequency \(\Omega\), and for electric fields \(-1.535<\Delta_{z}<-1.519\) eV around the minus energy gap \(-E_{\rm g}=-1.52\) eV. The black line corresponds to the electric potential \(\Delta_{z}^{(0)}=-1.523\) eV\(\,\simeq-E_{\rm g}\) where the transmittance attains a minimum of \(\mathcal{T}_{0}=0.5\) at \(\hbar\Omega=2.5\) meV. We set the conductivity parameters as \(\mu_{\rm F}=-0.417\) meV, \(B=0.5\) T, \(T=1\) K and \(\eta=0.2\) meV.
For other non-topological anisotropic materials like phosphorene, this minimal behavior of the transmittance still remains when the energy gap is closed, the Faraday angle being much smaller (in absolute value) than in silicene and HgTe QWs. In this case the critical electric potential where the transmittance reaches a minimum depends on the magnetic field.
Therefore, these extremal properties of transmittance/absortance and chirality change of Faraday angle at the critical point turn out to provide sharp markers of either the topological phase transition or the energy gap closure.
###### Acknowledgements.
We thank the support of the Spanish MICINN through the project PGC2018-097831-B-I00 and Junta de Andalucia through the projects FEDER/UJA-1381026 and FQM-381. AM thanks the Spanish MIU for the FPU19/06376 predoctoral fellowship.
[MISSING_PAGE_POST]
in One and Two Dimensions_ (Springer International Publishing Switzerland, 2016).
* Scharf _et al._ [2015]B. Scharf, A. Matos-Abiague, I. Zutic, and J. Fabian, Probing topological transitions in HgTe/CdTe quantum wells by magneto-optical measurements, Phys. Rev. B **91**, 235433 (2015).
* Shuvaev _et al._ [2016]A. Shuvaev, V. Dziom, Z. D. Kvon, N. N. Mikhailov, and A. Pimenov, Universal faraday rotation in hqte wells with critical thickness, Phys. Rev. Lett. **117**, 117401 (2016).
* Calixto and Romera [2015]M. Calixto and E. Romera, Identifying topological-band insulator transitions in silicene and other 2d gapped dirac materials by means of renyi-wehl entropy, EPL (Europhysics Letters) **109**, 40003 (2015).
* Romera and Calixto [2015]E. Romera and M. Calixto, Uncertainty relations and topological-band insulator transitions in 2d gapped dirac materials, Journal of Physics: Condensed Matter **27**, 175003 (2015).
* Calixto and Romera [2015]M. Calixto and E. Romera, Inverse participation ratio and localization in topological insulator phase transitions, Journal of Statistical Mechanics: Theory and Experiment **2015**, P06029 (2015).
* Romera and Calixto [2015]E. Romera and M. Calixto, Band inversion at critical magnetic fields in a silicene quantum dot, EPL (Europhysics Letters) **111**, 37006 (2015).
* Romera _et al._ [2018]E. Romera, M. Calixto, and J. Bolivar, Information measures and topological-band insulator transitions in 2d-dirac materials under external circularly polarized lasers, and static electric and magnetic fields, Physica A: Statistical Mechanics and its Applications **511**, 174 (2018).
* Calixto _et al._ [2022]M. Calixto, N. A. Cordero, E. Romera, and O. Castanos, Signatures of topological phase transitions in higher landau levels of hqte/cdte quantum wells from an information theory perspective, Physica A: Statistical Mechanics and its Applications **605**, 128057 (2022).
* Castanos _et al._ [2019]O. Castanos, E. Romera, and M. Calixto, Information theoretic analysis of landau levels in monolayer phosphorene under magnetic and electric fields, Materials Research Express **6**, 106316 (2019).
* Calixto _et al._ [2021]M. Calixto, E. Romera, and O. Castanos, Analogies between the topological insulator phase of 2d dirac materials and the superradiant phase of atom-field systems, International Journal of Quantum Chemistry **121**, e26464 (2021), [https://onlinelibrary.wiley.com/doi/pdf/10.1002/qua.26464](https://onlinelibrary.wiley.com/doi/pdf/10.1002/qua.26464).
* Thouless _et al._ [1982]D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Quantized hall conductance in a two-dimensional periodic potential, Phys. Rev. Lett. **49**, 405 (1982).
* Xiao _et al._ [2010]D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. **82**, 1959 (2010).
* Kane and Mele [2005]C. L. Kane and E. J. Mele, Quantum spin hall effect in graphene, Phys. Rev. Lett. **95**, 226801 (2005).
* Drummond _et al._ [2012]N. D. Drummond, V. Zolyomi, and V. I. Fal'ko, Electrically tunable band gap in silicene, Phys. Rev. B **85**, 075423 (2012).
* Liu _et al._ [2011]C.-C. Liu, W. Feng, and Y. Yao, Quantum spin hall effect in silicene and two-dimensional germanium, Phys. Rev. Lett. **107**, 076802 (2011).
* Liu _et al._ [2011]C.-C. Liu, H. Jiang, and Y. Yao, Low-energy effective hamiltonian involving spin-orbit coupling in silicene and two-dimensional germanium and tin, Phys. Rev. B **84**, 195430 (2011).
* Tsai _et al._ [2013]W.-F. Tsai, C.-Y. Huang, T.-R. Chang, H. Lin, H.-T. Jeng, and A. Bansil, Gated silicene as a tunable source of nearly 100% spin-polarized electrons, Nature Communications **4**, 1500 (2013).
* Trivedi _et al._ [2014]S. Trivedi, A. Srivastava, and R. Kurchania, Silicene and Germanene: A First Principle Study of Electronic Structure and Effect of Hydrogenation-Passivation, Journal of Computational and Theoretical Nanoscience **11**, 781 (2014).
* van den Broek _et al._ [2014]B. van den Broek, M. Houssa, E. Scalise, G. Pourtois, V. V. Afanas'ev, and A. Stsemans, Two-dimensional hexagonal tin: ab initio geometry, stability, electronic structure and functionalization, 2D Materials **1**, 021004 (2014).
* Stille _et al._ [2012]L. Stille, C. J. Tabert, and E. J. Nicol, Optical signatures of the tunable band gap and valley-spin coupling in silicene, Phys. Rev. B **86**, 195405 (2012).
* Tabert and Nicol [2013]C. J. Tabert and E. J. Nicol, Valley-spin polarization in the magneto-optical response of silicene and other similar 2d crystals, Phys. Rev. Lett. **110**, 197402 (2013).
* Tahir and Schwingenschlogl [2013]M. Tahir and U. Schwingenschlogl, Valley polarized quantum hall effect and topological insulator phase transitions in silicene, Scientific Reports **3**, 1075 (2013).
* Novik _et al._ [2005]E. G. Novik, A. Pfeuffer-Jeschke, T. Jungwirth, V. Latussek, C. R. Becker, G. Landwehr, H. Buhmann, and L. W. Molenkamp, Band structure of semimagnetic \(\mathrm{hg}_{1-y}\mathrm{mn}_{y}\mathrm{Te}\) quantum wells, Phys. Rev. B **72**, 035321 (2005).
* Bernevig _et al._ [2006]B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Quantum spin hall effect and topological phase transition in HgTe quantum wells, Science **314**, 1757 (2006), [https://www.science.org/doi/pdf/10.1126/science.1133734](https://www.science.org/doi/pdf/10.1126/science.1133734).
* Konig _et al._ [2007]M. Konig, S. Wiedmann, C. Brune, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Quantum spin hall insulator state in HgTe quantum wells, Science **318**, 766 (2007), [https://www.science.org/doi/pdf/10.1126/science.1148047](https://www.science.org/doi/pdf/10.1126/science.1148047).
* Konig _et al._ [2008]M. Konig, H. Buhmann, L. W. Molenkamp, T. Hughes, C.-X. Liu, X.-L. Qi, and S.-C. Zhang, The quantum spin hall effect: Theory and experiment, Journal of the Physical Society of Japan **77**, 031007 (2008), [https://doi.org/10.1143/JPSJ.77.031007](https://doi.org/10.1143/JPSJ.77.031007).
* Liu _et al._ [2008]C. Liu, T. L. Hughes, X.-L. Qi, K. Wang, and S.-C. Zhang, Quantum spin hall effect in inverted type-i semiconductors, Phys. Rev. Lett. **100**, 236601 (2008).
* Qi and Zhang [2011]X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. **83**, 1057 (2011).
* Buttner _et al._ [2011]B. Buttner, C. X. Liu, G. Tkachov, E. G. Novik, C. Brune, H. Buhmann, E. M. Hankiewicz, P. Recher, B. Trauzettel, S. C. Zhang, and L. W. Molenkamp, Single valley dirac fermions in zero-gap HgTe quantum wells, Nature Physics **7**, 418 (2011).
* Scharf _et al._ [2012]B. Scharf, A. Matos-Abiague, and J. Fabian, Magnetic properties of HgTe quantum wells, Phys. Rev. B **86**, 075418 (2012).
* [47]For details, see the Supplemental Material at, URL_will_be_inserted_by_publisher.
* Corbridge [2013]D. Corbridge, _Phosphorus: Chemistry, Biochemistry and Technology 6th edn_ (CRC Press, 2013).
* Bridgman [1914]P. W. Bridgman, Two new modifications of phosphorus., _Journal of the American Chemical Society_, Journal of the American Chemical Society **36**, 1344 (1914).
* Bridgman [1916]P. W. Bridgman, Further note on black phosphorus., _Journal of the American Chemical Society_, Journal of the American Chemical Society **38**, 609 (1916).
* Zhu and Tomanek [2014]Z. Zhu and D. Tomanek, Semiconducting layered blue phosphorus: A computational study, Phys. Rev. Lett. **112**, 176802 (2014).
* Guo _et al._ [2014]H. Guo, N. Lu, J. Dai, X. Wu, and X. C. Zeng, Phosphorene nanoribbons, phosphorus nanotubes, and van der waals multilayers, _The Journal of Physical Chemistry C_, The Journal of Physical Chemistry C **118**, 14051 (2014).
* Guan _et al._ [2014]J. Guan, Z. Zhu, and D. Tomanek, Phase coexistence and metal-insulator transition in few-layer phosphorene: A computational study, Phys. Rev. Lett. **113**, 046804 (2014).
* Liu _et al._ [2014]H. Liu, A. T. Neal, Z. Zhu, Z. Luo, X. Xu, D. Tomanek, and P. D. Ye, Phosphorene: An unexplored 2d semiconductor with a high hole mobility, _ACS Nano_, ACS Nano **8**, 4033 (2014).
* Carvalho _et al._ [2016]A. Carvalho, M. Wang, X. Zhu, A. S. Rodin, H. Su, and A. H. Castro Neto, Phosphorene: from theory to applications, Nature Reviews Materials **1**, 16061 (2016).
* Wan _et al._ [2014]R. Wan, X. Cao, and J. Guo, Simulation of phosphorene schottky-barrier transistors, Applied Physics Letters **105**, 163511 (2014), [https://doi.org/10.1063/1.4900410](https://doi.org/10.1063/1.4900410).
* Liu _et al._ [2015]H. Liu, Y. Du, Y. Deng, and P. D. Ye, Semiconducting black phosphorus: synthesis, transport properties and electronic applications, Chem. Soc. Rev. **44**, 2732 (2015).
* Akhtar _et al._ [2017]M. Akhtar, G. Anderson, R. Zhao, A. Alruqi, J. E. Mroczkowska, G. Sumanasekera, and J. B. Jasinski, Recent advances in synthesis, properties, and applications of phosphorene, npj 2D Materials and Applications **1**, 5 (2017).
* Li _et al._ [2014]L. Li, Y. Yu, G. J. Ye, Q. Ge, X. Ou, H. Wu, D. Feng, X. H. Chen, and Y. Zhang, Black phosphorus field-effect transistors, Nature Nanotechnology **9**, 372 (2014).
* Chen _et al._ [2017]P. Chen, N. Li, X. Chen, W.-J. Ong, and X. Zhao, The rising star of 2d black phosphorus beyond graphene: synthesis, properties and electronic applications, 2D Materials **5**, 014002 (2017).
* Ling _et al._ [2015]X. Ling, H. Wang, S. Huang, F. Xia, and M. S. Dresselhaus, The renaissance of black phosphorus, Proceedings of the National Academy of Sciences **112**, 4523 (2015), [https://www.pnas.org/content/112/15/4523.full.pdf](https://www.pnas.org/content/112/15/4523.full.pdf).
* Xu _et al._ [2016]R. Xu, S. Zhang, F. Wang, J. Yang, Z. Wang, J. Pei, Y. W. Myint, B. Xing, Z. Yu, L. Fu, Q. Qin, and Y. Lu, Extraordinarily bound quasi-one-dimensional trions in two-dimensional phosphorene atomic semiconductors, _ACS Nano_, ACS Nano **10**, 2046 (2016).
* Rudenko and Katsnelson [2014]A. N. Rudenko and M. I. Katsnelson, Quasiparticle band structure and tight-binding model for single- and bilayer black phosphorus, Phys. Rev. B **89**, 201408 (2014).
* Ezawa [2014]M. Ezawa, Topological origin of quasi-flat edge band in phosphorene, New Journal of Physics **16**, 115004 (2014).
* Rodin _et al._ [2014]A. S. Rodin, A. Carvalho, and A. H. Castro Neto, Strain-induced gap modification in black phosphorus, Phys. Rev. Lett. **112**, 176801 (2014).
* Low _et al._ [2014]T. Low, A. S. Rodin, A. Carvalho, Y. Jiang, H. Wang, F. Xia, and A. H. Castro Neto, Tunable optical properties of multilayer black phosphorus thin films, Phys. Rev. B **90**, 075434 (2014).
* Ezawa [2015]M. Ezawa, Highly anisotropic physics in phosphorene, Journal of Physics: Conference Series **603**, 012006 (2015).
* Dutreix _et al._ [2016]C. Dutreix, E. A. Stepanov, and M. I. Katsnelson, Laser-induced topological transitions in phosphorene with inversion symmetry, Phys. Rev. B **93**, 241404 (2016).
* Lindner _et al._ [2011]N. H. Lindner, G. Refael, and V. Galitski, Floquet topological insulator in semiconductor quantum wells, Nature Physics **7**, 490 (2011).
* Kitagawa _et al._ [2011]T. Kitagawa, T. Oka, A. Brataas, L. Fu, and E. Demler, Transport properties of nonequilibrium systems under the application of light: Photoinduced quantum hall insulators without landau levels, Phys. Rev. B **84**, 235108 (2011).
* Rapid Research Letters **7**, 101 (2013).
* Chen _et al._ [2020]G.-H. Chen, Y.-N. Chen, Y.-W. Zhou, Y.-L. Sun, and E.-J. Ye, Strain and electric field tunable electronic transport in armchair phosphorene nanodevive with normal-metal electrodes, AIP Advances **10**, 105012 (2020), [https://doi.org/10.1063/5.0021775](https://doi.org/10.1063/5.0021775).
* Mahan [2000]G. D. Mahan, _Many-Particle Physics, 3rd edition_ (Kluwer Academic/Plenum Publishers, New York, 2000).
* Allen [2006]P. Allen, Chapter 6 electron transport, in _Conceptual Foundations of Materials_, Contemporary Concepts of Condensed Matter Science, Vol. 2, edited by S. G. Louie and M. L. Cohen (Elsevier, 2006) pp. 165-218.
* Stauber _et al._ [2008]T. Stauber, N. M. R. Peres, and A. K. Geim, Optical conductivity of graphene in the visible region of the spectrum, Phys. Rev. B **78**, 085432 (2008).
* Oliva-Leyva and Naumis [2015]M. Oliva-Leyva and G. G. Naumis, Tunable dichroism and optical absorption of graphene by strain engineering, 2D Materials **2**, 025001 (2015).
* Oliva-Leyva and Wang [2017]M. Oliva-Leyva and C. Wang, Magneto-optical conductivity of anisotropic two-dimensional Dirac-Weyl materials, Annals of Physics **384**, 61 (2017).
* Chiu _et al._ [1976]K. Chiu, T. Lee, and J. Quinn, Infrared magneto-transmittance of a two-dimensional electron gas, Surface Science **58**, 182 (1976).
* O'Connell and Wallace [1982]R. F. O'Connell and G. Wallace, Ellipticity and faraday rotation due to a two-dimensional electron gas in a metal-oxide-semiconductor system, Phys. Rev. B **26**, 2231 (1982).
* Chakraborty _et al._ [2023]A. Chakraborty, G. Bian, and G. Vignale, Frequency-dependent faraday and kerr rotation in anisotropic non-symmorphic dirac semimetals in a magnetic field (2023), arXiv:2302.05385 [cond-mat.mes-hall].
* Pereira _et al._ [2011]V. M. Pereira, R. M. Ribeiro, N. M. R. Peres, and A. H. C. Neto, Optical properties of strained graphene, Europhysics Letters **92**, 67001 (2011).
* Schiefele _et al._ [2016]J. Schiefele, L. Martin-Moreno, and F. Guinea, Faraday effect in rippled graphene: Magneto-optics and random gauge fields, Phys. Rev. B **94**, 035401 (2016).
* Pellegrino _et al._ [2011]F. M. D. Pellegrino, G. G. N. Angilella, and R. Pucci, Linear response correlation functions in strained graphene, Phys. Rev. B **84**, 195407 (2011).
* Farokhnezhad _et al._ [2017]M. Farokhnezhad, M. Esmaeilzadeh, and K. Shakouri, Strain-modulated anisotropy of quantum transport properties in single-layer silicene: Spin and valley filtering, Phys. Rev. B **96**, 205416 (2017).
* Siu and Jalil [2021]Z. B. Siu and M. B. A. Jalil, Effective hamiltonian for silicene under arbitrary strain from multi-orbital basis, Scientific Reports **11**, 7575 (2021).
# Faraday rotation and transmittance as markers of topological phase transitions in 2D materials:
Supplemental material
Manuel Calixto
[email protected] Department of Applied Mathematics, University of Granada, Fuentenueva s/n, 18071 Granada, Spain Institute Carlos I for Theoretical for Theoretical and Computational Physics (iC1), Fuentenueva s/n, 18071 Granada, Spain
Alberto Mayorgas
[email protected] Department of Applied Mathematics, University of Granada, Fuentenueva s/n, 18071 Granada, Spain
Nicolas A. Cordero
[email protected] Department of Physics, University of Burgos, E-09001 Burgos, Spain
International Research Center in Critical Raw Materials for Advanced Industrial Technologies (ICCRAM), University of Burgos, E-09001 Burgos, Spain Institute Carlos I for Theoretical for Theoretical and Computational Physics (iC1), Fuentenueva s/n, 18071 Granada, Spain
Elvira Romera
[email protected] Department of Atomic, Molecular and Nuclear Physics, University of Granada, Fuentenueva s/n, 18071 Granada, Spain Institute Carlos I for Theoretical for Theoretical and Computational Physics (iC1), Fuentenueva s/n, 18071 Granada, Spain
Octavio Castanos
[email protected] Institute of Nuclear Sciences, National Autonomous University of Mexico, Apdo. Postal 70-543, 04510, CDMX, Mexico
November 6, 2021
###### Abstract
## I Silicene conductivity in the circularly polarization basis
We complete the analysis of magneto-optical properties of graphene analogues by discussing the case of circularly polarized light. In this case, the conductivity is \(\sigma_{\pm}(\Omega)=\sigma_{xx}(\Omega)\pm i\sigma_{xy}(\Omega)\) for right-handed (+) and left-handed (-) polarization [1]. Therefore, the absorptive part is \(\mathrm{Re}(\sigma_{\pm})=\mathrm{Re}(\sigma_{xx})\mp\mathrm{Im}(\sigma_{xy})\). In Figure 1, we present both absorptive parts \(\mathrm{Re}(\sigma_{\pm})\) for a silicene monolayer under an electric potential \(\Delta_{z}=0.5\Delta_{\mathrm{so}}\) as a function of the frequency of the incident light \(\Omega\). The conductivity parameters are specifically chosen to reproduce the results in [2], that is, \(\mu_{F}=3.0\Delta_{\mathrm{so}}\), \(B/\Delta_{\mathrm{so}}^{2}=657\) G/meV\({}^{2}\), \(T=0\) K and \(\eta=0.05\Delta_{\mathrm{so}}\). Note that we have defined the conductance quantum as \(\sigma_{0}=e^{2}/h=38.8\,\mu\)S, whereas the authors in reference [2] take \(\sigma_{0}=e^{2}/(4h)\).
## II HIGTE quantum well conductivity with Zeeman effect
We recalculate the conductivity of the HgTe quantum well with and without Zeeman coupling to support the argument that the results are qualitatively equivalent, the quantitative differences being small. A layer thickness of \(\lambda=7.0\) nm is selected, so the material parameters are \(\alpha=365\) meV\(\cdot\)nm, \(\beta=-686\) meV\(\cdot\)nm\({}^{2}\), \(\delta=-512\) meV\(\cdot\)nm\({}^{2}\), and \(\mu=-10\) meV, as taken from Ref. [3]. In Figure 2, we plot the real and imaginary parts of the longitudinal \(\sigma_{xx}\) and transverse \(\sigma_{xy}\) conductivities as a function of the polarized light frequency \(\Omega\). The conductivity parameters are chosen to reproduce the results in [4] with Zeeman coupling, that is, \(\mu_{F}=8\) meV, \(B=5\) T, \(T=1\) K and \(\eta=1\) meV. The conductance quantum used here is again \(\sigma_{0}=e^{2}/h=38.8\,\mu\)S, whereas the authors in reference [4] take \(\sigma_{0}=e^{2}/h\).
## III Animations of the energy spectrum and conductivities
Attached in the supplementary material is a series of animations called
-Silicene_Conductivity_and_Energy_VS_Omega.gif,
-HgTe_Conductivity_and_Energy_VS_Omega.gif,
-Phosphorene_Conductivity_and_Energy_VS_Omega.gif,
where we plot the energy spectrum at right, and the real part \(\mathrm{Re}[\sigma_{xx}(\Omega)]\) and \(\mathrm{Re}[\sigma_{xy}(\Omega)]\) of the conductivity components at left, for three different materials studied in the main text: silicene, HgTe QW, and phosphorene. The external electric field \(\Delta_{z}\) in the case of the silicene and phosphorene, and the layer thickness \(\lambda\) of the HgTe QW, are used as "time coordinate" on the animations, so each frame corresponds to one value of these control parameters.
Figure 2: Real and imaginary parts of the longitudinal \(\sigma_{xx}\) and transverse Hall \(\sigma_{xy}\) (magneto-)optical conductivities in a bulk HgTe QW of a thicknesses \(\lambda=7.0\) nm, as a function of the polarized light frequency \(\Omega\) (in \(\sigma_{0}=e^{2}/h\) units) with and without Zeeman coupling. We set the conductivity parameters \(\mu_{F}=8\) meV, \(B=5\) T, \(T=1\) K and \(\eta=1\) meV, as in Ref. [4].
The conductivities are plotted as a function of the polarized light frequency \(\Omega\), and they change in each frame according to the values of \(\Delta_{z}\) or \(\lambda\). Therefore, we can observe how the main peaks of the longitudinal conductivity \(\mathrm{Re}(\sigma_{xx})\) merge for the critical values \(\Delta_{z}^{(0)}=\Delta_{\mathrm{so}}=4.12\) meV (silicene) or \(\lambda=\lambda_{c}=6.17\) nm (HgTe QW), where the topological phase transition occurs in these 2D materials.
In the case of the phosphorene, we only observe the degeneration of Landau levels \(n=0\) and \(n=1\) in the conductivity around the electric potential \(\Delta_{z}\simeq-1.53\) eV. That is, the electronic transitions \(E_{1}^{\mathrm{odd}}\to E_{2}^{\mathrm{even}}\) and \(E_{0}^{\mathrm{even}}\to E_{3}^{\mathrm{odd}}\) have a similar energy and share a longitudinal conductivity peak (main peak at left in the gif), until the degeneration breaks for electric fields approximately higher than \(-1.53\) eV, when both electronic transitions will have different energies so the main peak will split into two.
On the other hand, the energy spectrum is static on the animation, as it is plotted as a function of all the values that \(\Delta_{z}\) or \(\lambda\) take. However, we plot a moving vertical dashed line on it, representing the value of \(\Delta_{z}\) or \(\lambda\) in the conductivity frame. On top of this vertical line, we also draw arrows representing the electronic transitions allowed between Landau levels (LLs) for the specific value \(\Delta_{z}\) or \(\lambda\), where the Fermi energy \(\mu_{F}\) is represented by an horizontal dashed line. The color of the arrows is the same as the color of the points plotted on the top of the longitudinal conductivity main peaks. The length of the arrows represents the energy difference \(|E_{n}-E_{m}|\) between the corresponding Landau levels in this particular electronic transition \(n\leftrightarrow m\), which also coincides with the frequency \(\hbar\Omega\) of the longitudinal conductivity peak associated with this transition. Therefore, when two arrows have the same length, we can observe two longitudinal conductivity peaks merging at the critical point. We have only drawn the arrows of the main peaks or lower Landau level electronic transitions for the sake of simplicity.
## IV Faraday angle contour plots
For completeness, in Figure 3 we show the variability of the Faraday angle for silicene across the parameter space: polarized light frequency \(\hbar\Omega\), electric field potential \(\Delta_{z}\), magnetic field \(B\), temperature \(T\) and chemical potential \(\mu_{\mathrm{F}}\)}, using several contour plots corresponding to different cross sections. Also, in Figure 4 we do the same for the Faraday angle in HgTE quantum wells using different cross sections in the \(\{\hbar\Omega,\lambda,B,T,\mu_{\mathrm{F}}\}\) parameter space, where the critical thickness \(\lambda_{c}\simeq 6.17\) nm is marked with a vertical magenta grid line. The variability of the Faraday angle with those parameters is shown with a color code (in degrees), going from the most negative value (blue) to the most positive (red).
|
2301.08625 | Interaction of thin tungsten and tantalum films with ultrashort laser
pulses: calculations from first principles | The interaction of ultrashort laser pulses with thin tungsten and tantalum
films is investigated through the full-potential band-structure calculations.
Our calculations show that at relatively low absorbed energies (the electron
temperature $T_e$$\lesssim$7 kK), the lattice of tantalum undergoes noticeable
hardening. The hardening leads to the change of the tantalum complete melting
threshold under these conditions. Calculations suggest that for the
isochorically heated Ta film, if such hardening really occurs, the complete
melting threshold will be at least 25% higher. It is also shown that the
body-centered cubic structures of W and Ta crystals become dynamically unstable
when the electronic subsystem is heated to sufficiently high temperatures
($T_e$$>$22 kK). This lead to their complete melting on the sub-picosecond time
scale. | N. A. Smirnov | 2023-01-20T15:13:04Z | http://arxiv.org/abs/2301.08625v1 | # Interaction of thin tungsten and tantalum films with ultrashort laser pulses:
###### Abstract
The interaction of ultrashort laser pulses with thin tungsten and tantalum films is investigated through the full-potential band-structure calculations. Our calculations show that at relatively low absorbed energies (the electron temperature \(T_{e}\)\(\lesssim\)7 kk), the lattice of tantalum undergoes noticeable hardening. The hardening leads to the change of the tantalum complete melting threshold under these conditions. Calculations suggest that for the isochorically heated Ta film, if such hardening really occurs, the complete melting threshold will be at least 25% higher. It is also shown that the body-centered cubic structures of W and Ta crystals become dynamically unstable when the electronic subsystem is heated to sufficiently high temperatures (\(T_{e}\)\(\gtrsim\)22 kK). This lead to their complete melting on the sub-picosecond time scale.
## I Introduction
As shown in a number of experimental studies, the melting of different materials after their interaction with ultrashort (femtosecond) laser pulses have their specific features [1; 2; 3; 4; 5]. Absorption of this radiation leads to a strongly non-equilibrium heating of the system where the temperatures of its electronic and ionic subsystems are very much different, \(T_{e}\)\(\gg\)\(T_{i}\). This state may keep for tens of picoseconds and even longer [5]. Under these conditions, semiconductors, for example, undergo the so-called nonthermal melting caused not by their lattice heating due to heat transfer from hot electrons to cold ions but by a dramatic change in the shape of the potential energy surface and hence dynamic lattice destabilization [1; 2; 3]. In semimetallic bismuth, the situation seems to be similar [4]. The determining factor here is the estimate of the electron-phonon coupling factor \(G\), which defines the rate of heat transfer from the electronic to ionic subsystem. For bismuth, the theoretical estimates of \(G\) strongly differ [6; 7; 8], leaving room for disputes on the presence of nonthermal melting in this metal after interaction with ultrashort laser pulses [9].
On the other hand, the change of the shape of the potential energy surface may also lead, under certain conditions, to the hardening of irradiated crystal [10; 11; 12], thus increasing the time of its melting and causing its strong overheating. Despite some claims that the lattice hardening has been experimentally observed [13], there is still no evidence of its reliable detection in experiments [5; 12; 14].
The experimental work reported in Ref. [15] aimed to explore the possibility of the nonthermal melting of tungsten by measuring reflectivity of the metal surface after its irradiation. The experiments show that above a certain value of absorbed excitation fluence, ablation of the metal surface proceeds in a sub-picosecond time interval. The revealed effect may be indicative of the ultrafast nonthermal melting because in the normal thermal scenario of ablation, the characteristic times of this process must be much higher than those obtained in experiment [15].
_Ab initio_ calculations [16] show that the heating of the electronic subsystem of tungsten to \(T_{e}\) above 20 kK may lead to a structural transition from bcc to fcc phase. The transition is also caused by the abrupt change in the shape of the potential energy surface, leading to fcc stabilization at high values of \(T_{e}\)[16]. In its turn, the bcc structure may lose dynamic stability under these conditions. It is however difficult to detect this transition in experiment because of the possibility of sub-picosecond nonthermal melting. Just this was shown in molecular dynamics (MD) calculations [17] where the interaction of femtosecond laser pulses with thin tungsten film was investigated. The nuclei of the new fcc phase were only able to form mainly on the surface of the film before the sample melted during about 0.8 ps. On whole, MD results [17] suggest that the detection probability for the nonthermal melting of tungsten is much higher than for the structural transition predicted in Ref. [16].
As mentioned above, an important factor of detecting nonthermal phenomena in metals is the electron-phonon coupling factor \(G\). Its values for metals are usually high [18], meaning that the nonthermal character of processes that occur after irradiation can hardly be recognized. There are different approaches to the theoretical determination of \(G\) (see, for example, [12; 18; 19]). In our research we will follow methodology described in Ref. [12], but also discuss results obtained with other approaches.
This paper studies the interaction of femtosecond laser pulses with thin (a few tens of nanometers thick) tungsten and tantalum films. The physical quantities required for calculations with a two-temperature model [20] were obtained from first principles. The issues discussed include the processes involved in the nonthermal melting
of the metals and the possibility of detecting tantalum lattice hardening at moderate absorbed energies. Our results are compared with available experimental data and other calculations.
## II Calculation method
In this work, the temperature evolution of electronic and ionic subsystems with time after irradiation by ultrashort laser pulses is determined using a well-known two-temperature model [20]. Since the thin (\(\sim\)10 nm) films of W and Ta are considered, the two-temperature model equations can be written as
\[C_{e}(T_{e})\frac{\partial T_{e}}{\partial t}=-(T_{e}-T_{i})G(T_{e})+S(t), \tag{1}\]
\[C_{i}(T_{i})\frac{\partial T_{i}}{\partial t}=(T_{e}-T_{i})G(T_{e}), \tag{2}\]
where \(S(t)\) is the time dependent radiation source function [17], \(C_{e}(T_{e})\) and \(C_{i}(T_{i})\) are electron and lattice heat capacities, and \(G(T_{e})\) is the electron-phonon coupling factor. Here we neglect lattice (\(\kappa_{i}\)) and electron (\(\kappa_{e}\)) thermal conductivities because, on the one hand, \(\kappa_{e}\)\(\gg\)\(\kappa_{i}\) in our case, and on the other hand, in thin foils, ballistic electrons bring the electronic subsystem to thermodynamic equilibrium over a time about a pulse duration \(\tau_{p}\)[21; 22]. So, no significant gradients in temperature occur in the target. The method to calculate \(C_{e}\), \(C_{i}\), and \(G\) as functions of electron and ion temperatures from first principles is described in rather detail in Ref. [12]. Here we only provide the key formula for the electron-phonon coupling factor. It reads as
\[G(T_{e})=\frac{2\pi\hbar}{(T_{l}-T_{e})}\int\limits_{0}^{\infty} \Omega d\Omega\int\limits_{-\infty}^{\infty}N(\varepsilon)\alpha^{2}F( \varepsilon,\Omega)\\ \times S(\varepsilon,\varepsilon+\hbar\Omega)d\varepsilon. \tag{3}\]
where \(N(\varepsilon)\) is the electronic density of states (DOS), \(\alpha_{2}\)\(F(\varepsilon\),\(\Omega)\) is the electron-phonon spectral function, \(\varepsilon\) and \(\hbar\Omega\) are, respectively, electron and phonon energies, \(S(\varepsilon\),\(\varepsilon+\hbar\Omega)\)=[\(f_{e}(\varepsilon)\)-\(f_{e}(\varepsilon+\hbar\Omega)][n(\hbar\Omega\),\(T_{i}\))-\(n(\hbar\Omega\),\(T_{e})]\) with \(f_{e}\) standing for the Fermi distribution function and \(n\) for the Bose-Einstein distribution function.
Another formula which is often used to determine \(G(T_{e})\) has some simplifications as compared to (3) and reads as [18]
\[G(T_{e})=\frac{\pi\hbar k_{B}\lambda\langle\omega^{2}\rangle}{N(E_{F})}\int \limits_{-\infty}^{\infty}N^{2}(\varepsilon)\left(-\frac{\partial f_{e}}{ \partial\varepsilon}\right)d\varepsilon. \tag{4}\]
Here \(\lambda\) is the electron-phonon mass enhancement parameter, \(\langle\Omega\rangle^{2}\) is the second moment of the phonon spectrum [23], and \(E_{F}\) is the Fermi energy. Formula 4 is derived under the assumption that in the interaction with phonon, the scattering probability matrix elements is independent of the initial \(\{\mathbf{k},i\}\) and final \(\{\mathbf{k}^{\prime},j\}\) electronic states. The authors of Ref. [18] determined the values of \(\lambda\) and \(\langle\Omega\rangle^{2}\) from experimental evaluation, not from first-principles calculations.
One more way to calculate \(G(T_{e})\) is based on the calculation of the electron-ion collision integral \(I_{nm}^{e-i}\) with the use of an approximate tight-binding model to calculate the band structure, combined with MD simulation [19]. The expression for \(I_{nm}^{e-i}\) is written as
\[I_{nm}^{e-i}=\frac{2\pi}{\hbar}|M_{e-i}(\varepsilon_{n},\varepsilon_{m})|^{2} \begin{cases}f_{e}(\varepsilon_{n})[2-f_{e}(\varepsilon_{m})]-f_{e}( \varepsilon_{m})[2-f_{e}(\varepsilon_{n})]e^{-\Delta\varepsilon/T_{i}};&\text {for $n{>}m$}\\ f_{e}(\varepsilon_{m})[2-f_{e}(\varepsilon_{n})]e^{-\Delta\varepsilon/T_{i}}-f_{ e}(\varepsilon_{n})[2-f_{e}(\varepsilon_{m})];&\text{otherwise}\end{cases}, \tag{5}\]
where \(\Delta\varepsilon\)=\(\varepsilon_{n}-\varepsilon_{m}\) is the energy difference between two states, and \(M_{e-i}\) is the electron-ion scattering matrix element. The electron-phonon coupling factor can be written as
\[G(T_{e})=\frac{1}{V(T_{e}-T_{i})}\sum_{n,m}\varepsilon_{m}I_{nm}^{e-i}, \tag{6}\]
here \(V\) is the specific volume. It should be noted here that our method for determining \(G(T_{e})\) (by formula (3)) does not use any experimentally determined parameters or approximations which simplify the scattering probability matrix element, as it is done in Ref. [18], or serious simplifications related to particle interactions in the system, as it is done in the tight-binding model [19].
In this work, first-principles calculations were done with the all-electron full-potential linear muffin-tin or
bital method (FP-LMTO) [24]. We consider here processes at a constant specific volume, i.e. the isochoric heating of targets. Within the scope of density functional theory the FP-LMTO method calculates the electron structure, internal and free energies, phonon spectrum and other material properties [24; 25; 12; 26]. Phonon spectrum and electron-phonon spectral function calculations for the metals of interest were done with linear response theory implemented in the FP-LMTO code [24; 25]. Integration over the Brillouin zone was done with an improved tetrahedron method [27]. Meshes in **k**-space corresponded to equidistant spacing 30\(\times\)30\(\times\)30. For integration over the **q**-points of the phonon spectrum, a 10\(\times\)10\(\times\)10 mesh appeared quite sufficient (see [26] for more details on meshes). The cutoff energy for representing the basis functions as a set of plane waves in the interstitial region was taken to be 900 eV. The basis set included MT-orbitals with moments to \(l_{max}^{b}\)=5. Charge density and potential expansions in terms of spherical harmonics were done to \(l_{max}^{w}\)=7. The internal FP-LMTO parameters such as the linearization energy, tail energies, and the radius of the MT-sphere were chosen using an approach similar to that one used in Ref. [28].
The valence electrons in our calculations were 5\(s\), 5\(p\), 4\(f\), 5\(d\), and 6\(s\). For better comparison with calculations by other authors, the exchange-correlation potential was chosen to be similar to that one used in Ref. [17], i.e., PBE [29]. This functional reproduces well the different properties of tungsten and tantalum. For example, the equilibrium volume \(V_{0}\) from calculation differs by no more than 2% from experiment for both the metals. Figure 1 shows the phonon densities of states (PDOS) from calculation in comparison with experimental data [30]. They are seen to be in quite a good agreement.
The entropy of the electronic subsystem was determined as
\[S_{e}(T_{e})=-k_{B}\int_{-\infty}^{\infty}d\varepsilon N(\varepsilon)[f_{e}ln( f_{e})+(1-f_{e})ln(1-f_{e})]. \tag{7}\]
With the known entropy \(S_{e}(T_{e})\) and internal energy \(E_{e}(T_{e})\) of electrons, it is easy to obtain the free energy \(F_{e}\)=\(E_{e}-T_{e}S_{e}\) of the electron gas.
The phonon spectrum of tungsten and tantalum was determined within quasiharmonic approximation [12]. The melting temperature \(T_{m}\) of crystal W and Ta versus electron temperature was estimated in the same manner as it was done in Ref. [31] with the well performing Lindemann criterion.
## III Results
Let's first compare the electronic structures of tungsten and tantalum. Figure 2 shows their electronic densities of states versus energy at \(V\)=\(V_{0}\) and \(T\)=0 calculated in this work. It is seen that the chemical potential \(\upmu\) which coincides with the Fermi energy at zero temperature is near the minimum of the DOS for tungsten, while for tantalum, the density of states at \(\varepsilon\)=\(\mu\) is much higher compared to W. For Ta, the Fermi level is near the peak of the DOS. Compared to tantalum, the electronic structure of tungsten is very much depleted in states in the vicinity of \(\mu\). Calculations show that as \(T_{e}\) grows to \(\sim\)15 kK, the values of \(N(\mu)\) increase for tungsten and decrease for tantalum. This causes certain differences in the behavior of these metals at elevating electron temperatures.
Now consider how the free energy of electrons depends on the lattice parameter \(c/a\) (i.e., the Bain path) at different temperatures \(T_{e}\). Fig
Figure 1: Tungsten and tantalum phonon spectra at the equilibrium experimental specific volume from calculations done in this work for zero temperature (red lines) and from experiment at room temperature [30] (circles connected by a line).
Figure 2: Electronic DOS for W (top) and Ta (bottom) at equilibrium specific volume and zero temperature (black lines). The green, blue and red lines are the Fermi distribution functions at different electron temperatures.
obtained for W and Ta, respectively. In both metals, the fcc structure is seen to be dynamically unstable at low electron temperatures. With the increasing temperature it stabilizes and at \(T_{e}\)\(>\)15 kK it becomes thermodynamically more preferable than bcc. It is seen that tantalum behaves very much like tungsten but requires somewhat higher temperatures for stabilization of the fcc structure. On the other hand, with the increasing \(T_{e}\) the bcc structure becomes dynamically unstable both in tungsten and in tantalum. These changes must lead to a bcc\(\rightarrow\)fcc transition when the electronic subsystem is heated. As however mentioned in paper [17], in such conditions their melting is more probable. On whole, our calculations for tungsten agree well with results presented in Ref. [16].
One more feature of tantalum should be noted here. It is seen from Fig. 4 that there exists a limited interval of temperatures at relatively low values of \(T_{e}\) (see \(T_{e}\)=5.8 kK), where the bcc lattice hardens. The free energy curve runs steeper near the minimum corresponding to the bcc phase. This feature is absent in tungsten. Figure 5 shows the densities of phonon states for W and Ta we calculated in this work for different electron temperatures. It is seen that with the increasing \(T_{e}\) tungsten gradually softens and its phonon frequencies reduce. The phonon frequencies of tantalum first increase with the growing \(T_{e}\) and cause bcc lattice hardening. Then the tendency changes - the high-frequency part of the spectrum goes on to harden, while the low-frequency part begins to soften reducing its frequencies (see Fig. 5, \(T_{e}\)=11.6 kK). At \(T_{e}\) above 20 kK the bcc structure in both metals loses its dynamic stability. It happens at about 22 kK in tungsten and 29 kK in tantalum. The hardening of the Ta lattice at relatively low electron temperatures leads to a sudden effect we will consider later.
Figures 6 and 7 show the electron-phonon coupling factor \(G\) as a function of electron temperature at \(V\)=\(V_{0}\), calculated in this work for tungsten and tantalum, respectively. The dependences \(G(T_{e})\) are provided for bcc and fcc structures in their stability regions. The values of G for the structures are seen to be close to each other and it is quite possible to approximate our results by a continuous line. The figures also show data from low-temperature experiments [32; 33; 34]. For tungsten, our results are seen to agree quite well with experiment. For tantalum, experimental data from Ref. [34] provides only the lower boundary of \(G\), which does not contradict our calculations. Figures 6 and 7 also show results from some other calculations. It is seen that compared to our results, calculations by Lin et al. [18] for W give overestimated values of \(G\) for the increasing temperature (Fig. 6). Such a behavior has earlier been observed in other metals [12] and can be related to the more correct account for the energy dependence of \(\alpha_{2}F(\varepsilon,\Omega)\) in formula (3). In turn, the values of \(G(T_{e})\) from Ref. [19] are much lower than our results and the experimental data available. Note that the presence of adjustable parameters in the calculation method may reduce the accuracy of results if they
Figure 4: Free electron energy versus lattice parameter \(c/a\) at different \(T_{e}\) for tantalum (\(V\)=\(V_{0}\)). The vertical lines show the values of \(c/a\) which correspond to its bcc and fcc structures.
Figure 5: Phonon densities of states in tungsten (left) and tantalum (right) at different electron temperatures (\(V\)=\(V_{0}\)).
Figure 3: Free electron energy versus lattice parameter \(c/a\) at different \(T_{e}\) for tungsten (\(V\)=\(V_{0}\)). The vertical lines show the values of \(c/a\) which correspond to its bcc and fcc structures.
are adjusted to conditions (for example, at \(T\)=0) different from what we are having here.
For tantalum (fig. 7), our calculations by expression (4) (the dotted line) had one distinction from those reported in paper [18]: the values of \(\lambda\) and \(\langle\Omega\rangle^{2}\) were determined from first-principles calculations rather than from experimental evaluation. It is seen that in this case, approaches [18] and [12] give close values for \(G(T_{e})\), the differences are minimal. In Ref. [34], the electron-phonon coupling factor was also calculated with formula (4) but with the electronic DOS determined from MD calculations. But here deviations from our results come, first of all, from the underestimated parameter \(\lambda\). The authors of [34] used the empirical value from Ref. [23], \(\lambda\)=0.65. Our calculations from first principles gave \(\lambda\)=0.88 in the case of tantalum. For tungsten, the difference between the empirical [23] and calculated values of \(\lambda\) is not so large; they agree within \(\sim\)3%.
Let's consider the accuracy of our calculations in comparison with other experimental results. The authors of paper [33] measured how evolved the intensity of the Laue diffraction peak (211) after a 30-nm-thick tungsten film deposited on a silicon nitride substrate was irradiated by 400-nm laser pulses with \(\tau_{p}\)=130 fs. The absorbed energy density \(E_{abs}\) was about 0.8 MJ/kg. Figure 8 compares experimental data with calculations performed in three variants (see [12] for calculation details). In addition to our computation with use of formula (4), it shows calculations with \(G(T_{e})\) taken from Ref. [18] and with constant \(G\)=\(2\cdot 10^{17}\) W/m\({}^{3}\)/K and \(\Theta_{D}\)=312 K [33]. The results obtained with expression (3) are seen to agree quite well with experiment. The use of \(G(T_{e})\) from Ref. [18] slightly worsens the agreement and the calculation with the constant \(G\) markedly underestimates the change of the diffraction peak intensity at times below 10 ps.
Figure 9 presents ion temperature versus electron temperature for tungsten, calculated by solving equations (1)-(2). We reproduced experimental conditions from Ref. [33] but did calculations for several values of \(E_{abs}\). The possibility of the bcc\(\rightarrow\)fcc transition was not considered because ultrafast melting was here more probable [17]. Figure 9 also shows the melting temperature of W versus \(T_{e}\), obtained in this work and by Murphy et al.
Figure 7: Electron-phonon coupling factor versus \(T_{e}\) for tantalum from our calculations using formula (3) (solid, dashed lines for bcc and fcc, respectively) and by a formula (4) (dotted line). Other calculations: dashed-dotted line – Ref. [19], dashed-dotted-dotted line - Ref. [34] by a formula from Ref. [18] (see the text). The triangle shows the lower boundary of \(G\) from experiment [34]. The vertical line shows the approximate value of \(T_{e}\) above which the fcc phase is more energetically preferable than bcc.
Figure 6: Electron-phonon coupling factor versus \(T_{e}\) for tungsten from our calculation (solid, dashed lines for bcc and fcc, respectively), from calculations reported in papers [18] (dotted line) and [19] (dashed-dotted line), and from experiments [32] and [33] (the circle and the triangle, respectively). The vertical line shows the approximate value of \(T_{e}\) above which the fcc phase becomes more energetically favorable than bcc.
Figure 8: Intensity of diffraction peak (211) versus time for tungsten for absorbed energy density 0.8 MJ/kg from our calculation (the solid line), calculations with a constant \(G\)[33] (the dashed line), calculations with \(G(T_{e})\) from Ref. [18] (the dashed-dotted line), and measurements [33] (circles).
[17] from MD calculations. Remind that our \(T_{m}(T_{e})\) was calculated with the Lindemann criterion. As seen from Fig. 9, the melting temperature of tungsten decreases with the increasing \(T_{e}\) due to lattice softening (Fig. 5). The resulted dependence \(T_{m}(T_{e})\) agrees rather well with data from Ref. [17] despite the essentially different approaches to its determination. Some discrepancy comes from the fact that our calculation corresponded to the isochore \(V\)=\(V_{0}\), while in MD simulation [17], the sample could expand along the axis normal to the target surface.
In paper [33], a threshold value \(E_{abs}^{m}\) required for the complete melting of tungsten was determined. For the conditions of that experiment, it was found to be 0.9 MJ/kg. Our calculations give a very close value of 0.91 MJ/kg (details of calculation can be found in paper [12]). Complete melting occurs after the temperature \(T_{m}\) is reached and the lattice gets sufficient heat to overcome the latent heat of fusion, \(\Delta H_{m}\)[35]. The absorbed energy density of 0.8 MJ/kg is not enough to completely melt the target [33]. It is seen from Fig. 9 that at high \(E_{abs}\) (\(>\)2.5 MJ/kg) the lattice temperature \(T_{i}\) reaches \(T_{m}\) even earlier than \(T_{i}(T_{e})\) reaches its maximum. At high \(T_{e}\), the melting temperature of tungsten becomes much lower than the normal melting temperature determined at ambient pressure, \(T_{m}^{0}\)\(\approx\)3.7 kK. MD calculations and analytic equations of state [36; 37], including that one for tungsten, suggest that the heat of fusion changes under the action of external conditions and it will reduce as \(T_{m}\) decreases. This will also influence the time of melting. Usually, \(T_{e}\) reaches a maximum after irradiation by ultrashort pulses at a time of about a few \(\tau_{p}\). Therefore at sufficiently high \(E_{abs}\) (\(>\)2.5 MJ/kg) tungsten will melt during sub-picosecond times which is also proved by calculations [17].
Now consider tantalum. Figure 10 demonstrates the \(T_{i}(T_{e})\) dependence for Ta similarly to tungsten. Irradiation conditions and target thickness are the same as for W. It is seen that the melting curve \(T_{m}(T_{e})\) reaches a maximum approximately at \(T_{e}\)=7.3 kK due to the hardening of the Ta crystal lattice at these temperatures, as mentioned earlier (see Fig. 5). Unlike gold, whose melting temperature begins to increase only at \(T_{e}\)\(>\)15 kK (remaining almost constant at lower \(T_{e}\)) [12], for tantalum this growth of \(T_{m}\) starts right after the electron temperature increases. At \(T_{e}\) higher than 7.3 kK, its lattice begins to gradually soften. Like tungsten, tantalum at sufficiently high values of \(E_{abs}\) (\(>\)3 MJ/kg) must melt on the sub-picosecond time scale due to the loss of dynamic stability by its lattice (Fig. 10). We do not consider the bcc\(\rightarrow\)fcc transition here also. The high electron-phonon coupling factor of tantalum signals a higher probability of its ultrafast melting. However, the existence of a maximum of \(T_{m}(T_{e})\) at relatively low electron temperatures gives an interesting effect. If such hardening really occurs, it should lead to an increase in the melting threshold \(E_{abs}^{m}\) for Ta metal. As shown in calculations, \(E_{abs}^{m}\) will be at least 25% higher. For tantalum normal melting temperature, \(T_{m}^{0}\)=3.29 kK, the threshold value \(\widetilde{E}_{abs}^{m}\) equals 0.74 MJ/kg. If the crystal lattice hardens, then, under isochoric heating, an absorbed energy density of \(\sim\)1.12 MJ/kg is required for complete melting. For non-isochoric conditions, the threshold may be lower, about 0.93 MJ/kg. However, the value is still rather far from normal \(\widetilde{E}_{abs}^{m}\)=0.74 MJ/kg and can be determined quite reliably in experiment (see, for example, [5]). In addi
Figure 10: Calculated evolution of electron and ion temperatures (isochoric heating) after irradiation of a 30-nm-thick tantalum film by a 130-fs-pulse for different absorbed energy densities (dashed, dashed-dotted, and dashed-dotted-dotted lines). The solid line shows \(T_{m}\) versus \(T_{e}\) from our calculation and the dotted line shows the normal melting temperature of Ta.
Figure 9: Calculated evolution of electron and ion temperatures (isochoric heating) after irradiation of the 30-nm-thick tungsten film by a 130-fs pulse for different absorbed energy densities (dashed, dashed-dotted, and dashed-dotted-dotted lines). The solid line shows the melting temperature \(T_{m}\) as a function of \(T_{e}\) from our calculation, the circles show \(T_{m}(T_{e})\) from Ref. [17] (non-isochoric conditions), and the dotted line shows the normal melting temperature of W.
tion, the growth of \(T_{m}\) make the latent heat of fusion higher which will also delay the complete melting.
A similar maximum of \(T_{m}(T_{e})\) at relatively low heating (\(T_{e}\)\(\sim\)5 kK) is also present in platinum [12]. As shown by calculations from first principles, its electronic structure is also characterized by a high electronic density of states \(N(\mu)\) on the Fermi level [18], which strongly reduces with the increasing \(T_{e}\). Our calculations show that the effect of lattice hardening is a bit lower here and the melting threshold increases by about 18%. But since \(\widetilde{E}^{m}_{abs}\) for platinum at the normal melting temperature \(T^{0}_{m}\) is quite small (\(\sim\)0.39 MJ/kg), the detection of its increase in experiment may be limited by experimental accuracy.
## IV Conclusions
The paper studied the interaction of femtosecond laser pulses with thin tungsten and tantalum films through calculations from first principles. Calculated results shows the body-centered cubic structure of both the metals to lose its dynamic stability at rather high electron temperatures. This effect must lead to their melting on the sub-picosecond time scale when the electronic subsystem is heated above 22 kK. It is also demonstrated that the metals have rather high values of the electron-phonon coupling factor (\(\sim\) several units per 10\({}^{17}\) W/m\({}^{3}\)/K) at electron temperatures from room temperature to \(\sim\)45 kK. In addition, unlike tungsten, the crystal lattice of tantalum hardens at relatively low values of \(T_{e}\) (\(\lesssim\)7 kK). The hardening changes the value of the complete melting threshold. Our calculations show that the melting threshold will be at least 25% higher if hardening really occurs. We suppose that this effect for tantalum can be detected quite reliably by modern experimental techniques used to study the interaction of matter with ultrashort laser pulses.
|
2302.11624 | Spectral Method for the Gravitational Perturbations of Black Holes:
Schwarzschild Background Case | We develop a novel technique through spectral decompositions to study the
gravitational perturbations of a black hole, without needing to decouple the
linearized field equations into master equations and separate their radial and
angular dependence. We first spectrally decompose the metric perturbation in a
Legendre and Chebyshev basis for the angular and radial sectors respectively,
using input from the asymptotic behavior of the perturbation at spatial
infinity and at the black hole event horizon. This spectral decomposition
allows us to then transform the linearized Einstein equations (a coupled set of
partial differential equations) into a linear matrix equation. By solving the
linear matrix equation for its generalized eigenvalues, we can estimate the
complex quasinormal frequencies of the fundamental mode and various overtones
of the gravitational perturbations simultaneously and to high accuracy. We
apply this technique to perturbations of a nonspinning, Schwarzschild black
hole in general relativity and find the complex quasinormal frequencies of two
fundamental modes and their first two overtones. We demonstrate that the
technique is robust and accurate, in the Schwarzschild case leading to relative
fractional errors of $\leq 10^{-10} - 10^{-8}$ for the fundamental modes, $\leq
10^{-7} - 10^{-6}$ for their first overtones, $\leq 10^{-7} - 10^{-4}$ for
their second overtones. This method can be applied to any black hole spacetime,
irrespective of its Petrov type, making the numerical technique extremely
powerful in the study of black hole ringdown in and outside general relativity. | Adrian Ka-Wai Chung, Pratik Wagle, Nicolas Yunes | 2023-02-22T20:02:08Z | http://arxiv.org/abs/2302.11624v2 | # Spectral Method for the Gravitational Perturbations of Black Holes:
###### Abstract
We develop a novel technique through spectral decompositions to study the gravitational perturbations of a black hole, without needing to decouple the linearized field equations into master equations and separate their radial and angular dependence. We first spectrally decompose the metric perturbation in a Legendre and Chebyshev basis for the angular and radial sectors respectively, using input from the asymptotic behavior of the perturbation at spatial infinity and at the black hole event horizon. This spectral decomposition allows us to then transform the linearized Einstein equations (a coupled set of partial differential equations) into a linear matrix equation. By solving the linear matrix equation for its generalized eigenvalues, we can estimate the complex quasinormal frequencies of the fundamental mode and various overtones of the gravitational perturbations simultaneously and to high accuracy. We apply this technique to perturbations of a non-spinning, Schwarzschild black hole in general relativity and find the complex quasinormal frequencies of 2 fundamental modes and their first 2 overtones. We demonstrate that the technique is robust and accurate, in the Schwarzschild case leading to relative fractional errors of \(\leq 10^{-10}-10^{-8}\) for the fundamental modes, \(\leq 10^{-7}-10^{-6}\) for their first overtones, \(\leq 10^{-7}-10^{-4}\) for their second overtones. This method can be applied to any black hole spacetime, irrespective of its Petrov type, making the numerical technique extremely powerful in the study of black hole ringdown in and outside general relativity.
## I Introduction
The LIGO-Virgo-KAGRA collaboration has successfully detected numerous gravitational-wave (GW) signals, most of which are emitted by binary black hole (BH) coalescence [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. After the merger, the remnant eventually relaxes into a stationary and rotating BH by emitting GWs with a discrete set of quasinormal mode (QNM) frequencies, a coalescence stage known as ringdown. These signals grant us pristine access to the properties of spacetime in the strong-field, most dynamical and non-linear regime, as these GWs travel mostly undisturbed, and thus, carry non-distorted information about their source. Thus far, all the GWs detected are consistent with General Relativity (GR) [14; 15; 16; 17; 18; 19; 7], indicating that Einstein's theory has now also passed the first GW tests. In the near future, the ongoing improvements in GW detector technology and the addition of new, next-generation detectors [20; 21] with improved sensitivity will allow us to listen to the universe and decipher its physics better.
While GR has passed numerous astrophysical and solar system tests [22; 23; 24; 25; 26; 27; 28; 29], several theoretical and observational issues remain. On the theoretical side, the existence of spacelike and timelike singularities and the hard-coded nature of locality in GR begs for a quantum completion of Einstein's classical theory that may resolve the BH information paradox [30; 31] and allow for quantum entanglement even in the presence of horizons. On the observational side, the matter-antimatter asymmetry of the universe, its late-time acceleration [32; 33] and galaxy rotation curves [34; 35] require that GR be completed with additional parity-violating physics (that satisfy the Sakharov conditions [36; 37; 38; 39]), an "unnaturally" small cosmological constant [40; 41] and a dark matter particle [42; 43; 44; 45] yet to be observed through direct detection particle experiments. These issues have inspired many modified gravity theories, such as Einstein-dilaton-Gauss-Bonnet gravity [46; 47; 48; 49], dynamical Chern-Simons gravity [50; 51; 52], Einstein-AEther theory [53; 54; 55; 56], Horndeski and beyond Horndeski gravity [57; 58; 59]. In these modified theories, BHs still exist but they need not be described by their GR counterparts, instead acquiring certain modifications that may render them more generic (e.g. of Petrov type I instead of D [60; 61; 62]). As a result of the modified field equations and the non-GR corrections to BHs in these theories, their QNM spectra can be quite different than that predicted in GR [63; 64; 65; 66; 67; 68; 69; 70; 71], in principle allowing for new tests with GWs [72; 73; 74; 75; 76; 77; 78].
Ringdown GW tests of modified gravity, however, are hindered by the intrinsic difficulty in the computation of the gravitational QNM frequencies of rotating BHs in modified theories. In principle, the BH QNM frequencies can be computed by solving the linearized field equations in that theory, derived by expanding the field equations to first order in metric perturbations. For a non-spinning BH background, the linearized field equations are a complicated set of coupled, partial differential equations, which one decouples to find _master equations_ for its propagating degrees of freedom through the use of special (Regee-Wheeler [89] and Zerilli-Moncrief [90; 91]) master functions. For a rotating BH, the linearized field equations are a nearly-impossible set of coupled, partial differential equations, which nobody has yet been able
to decouple into master equations when working directly with metric perturbations. Instead, for rotating BHs one can work with curvature perturbations through the Newman-Penrose (NP) formalism [92] (in which the field equations are cast in terms of spinor coefficients, the Weyl scalars and differential operators) to derive a master function for these curvature perturbations. In this way, the NP formalism allows one to derive the Teukolsky master equation (i.e. a separable wave equation for the NP scalars that represent propagating degrees of freedom), provided the rotating BH background is of Petrov-type D and the field equation is Einstein's [93; 94; 95; 96]. If the theory is not Einstein's, or if the BH is not of Petrov-type D, then there is no guarantee that one can decouple the field equations linearized in curvature perturbations through the NP formalism1.
Footnote 1: We note that, in parallel with this work, recent progress has been made to extend the derivation of the Teukolsky equation to beyond-GR BHs[97; 98] by working to leading-order in GR deviations within an effective field theory treatment.
This difficulty motivates us to explore new methods to compute the gravitational QNM frequencies of BH spacetimes. One necessary criterion that these new methods must satisfy is robustness and accuracy, which we can only assess by implementing them first within GR and comparing results to known gravitational QNM frequencies of Schwarzschild and Kerr BHs [99]. This is the main focus of this paper, focusing here on Schwarzschild BHs, a necessary step before tackling the Kerr case. One can attempt to construct many new methods that satisfy the above criteria, but one that has shown some promise in the last few decades is spectral methods. Spectral decomposition can be an effective method to handle complicated linearized field equations, as shown in [100; 101; 102; 103; 104; 105; 106; 107; 108]. Using the completeness and orthonormal properties of certain special functions, like the Chebyshev polynomials and the Legendre polynomials, we can express any piecewise continuous function as a linear combination of these special functions. The metric perturbations and the coefficient functions of the linearized field equations are at least \(C^{1}\) outside the horizon, so we can accurately approximate them by using a finite number of spectral bases, which simplifies the calculation of QNM frequencies.
The existing spectral codes [100; 101; 102; 103; 104; 105; 106; 107; 108; 109] for computing BH QNMs usually either (i) focus on calculating the QNMs of the scalar perturbations of BH (e.g. [100]), (ii) are based on the NP formalism and separate variables explicitly (e.g. [104; 105; 106; 107; 109]), or (iii) are based on metric perturbations but require the decoupling of the field equations (e.g. [101; 102; 103]). The goal of this paper is then to develop an even more powerful and adaptable, new approach to compute the QNM frequencies through spectral decomposition of metric perturbations and the linearized field equations, without decoupling the latter to derive master equations and then separating them. We begin by deriving the linearized Einstein equations that govern the metric perturbations of a Schwarzschild BH in the Regge-Wheeler gauge (Sec. II). We then use a product decomposition of the metric tensor into radial and angular functions, together with a spectral decomposition (of the angular sector in terms of associate Legendre polynomials) to turn the system of partial differential equations into a system of ordinary differential equations. By solving this system of ordinary differential equations asymptotically at spatial infinity and at the event horizon, we obtain the boundary conditions that the radial functions must satisfy (Sec. III). The asymptotic behavior of the radial functions allows us to construct a radial ansatz that corrects the asymptotic behavior through a spectral sum of Chebyshev polynomials (Sec. IV).
The full spectral decomposition transforms the linearized Einstein equations into a system of linear _algebraic_ equations, whose generalized eigenvalues contain the QNM frequencies of the Schwarzschild BH. We compute these QNM frequencies numerically by solving for the generalized eigenvalues and we devise specific procedures to identify which generalized eigenvalues correspond to which QNM frequencies. We show that the reconstruction of the metric functions through this spectral decomposition is actually an asymptotic series by calculating its optimal truncation order (Sec. V). We find that typically keeping 25 basis functions in the Chebyshev and the Legendre sectors suffices to identify 6 QNM frequencies, 2 of which correspond to fundamental modes, 2 to the first overtones and 2 to the second overtones. We also find that these QNM frequencies can be calculated fast and accurately, with relative fractional errors of \(\leq 10^{-10}-10^{-8}\) for the fundamental modes, \(\leq 10^{-7}-10^{-6}\) for their first overtones, and \(\leq 10^{-7}-10^{-4}\) for their second overtones.
We conclude by analyzing the robustness of our spectral method (Sec. VI). We first check that our QNM frequency calculations are independent of the order (\(m\)) of the associated Legendre polynomial basis, an important feature of gravitational perturbation of spherically-symmetric BHs. We then check that our QNM calculations are independent of the choice of radial scaling we choose in the ansatz for the radial function, further indicating the robustness of the spectral method. Finally, we check that the calculation of QNM frequencies is approximately insensitive to the set of 6 components of the linearized Einstein equations that we choose to solve for the 6 metric perturbation functions. This flexibility allows us to select the set of equations that is most convenient and to cross-check our results.
The work presented here is yet another avenue to calculate QNMs of perturbed BHs, but it is very promising and interesting for the following reasons. First, since we work with the metric perturbations directly, there is never a need to decouple the field equations and find master functions and equations. This is important because such a decoupling can be extremely complicated in modified theories of gravity, especially when the BH background is spinning and not of Petrov type D. Moreover, since we work with the metric perturbations directly, we automati
cally find solutions for all components of the metric itself without needing any further metric reconstruction. This could be useful when doing second-order BH perturbation theory [110; 111] and self-force calculations [112; 113], which typically require metric reconstruction. Finally, the method presented here is fast, computationally efficient, accurate, robust and able to obtain QNM frequencies of not just the fundamental model, but also of its overtones with similar speed, efficiency, accuracy and robustness. This is important because the calculation of QNM frequencies of higher overtones can sometimes be noise and not as accurate as that of the fundamental model using other methods, such as direct numerical integration. Section VII will further elaborate all of these features further and possible extensions of our work.
Henceforth, we assume the following conventions: \(x^{\mu}=(x^{0},x^{1},x^{2},x^{3})=(t,r,\chi,\phi)\), where \(\chi=\cos\theta\) and \(\theta\) is the azimuthal angle; the signature of the metric tensor is \((-,+,+,+)\); gravitational QNMs are labelled in the form of \(nlm\) or \((n,l,m)\), where \(n\) is the principal mode number, \(l\) is the azimuthal mode number and \(m\) is the magnetic mode number of the QNMs; Greek letters in index lists stand for spacetime coordinates; Greek letters in curly braces \(\{\mu\nu\}\) denote the collection of the \(\mu\nu\) components of the perturbed Einstein equations, \(G^{(1)}_{\mu\nu}=0\). For example, \(\{tr,t\chi,t\phi,rr,r\chi,r\phi\}\) stands for \(\{G^{(1)}_{tr}=0,G^{(1)}_{t\chi}=0,...,G^{(1)}_{r\phi}=0\}\). For the convenience of the reader, we have presented a list of all definitions and symbols in Appendix A.
## II Linearized Einstein field equations about a Schwarzschild black hole background
In this section, we discuss our representation of the background Schwarzschild spacetime, present the linearized Einstein field equations for a perturbed Schwarzschild BH, and then conclude with a quick description of the spectral decomposition of the metric perturbations.
### Background Spacetime, Metric Perturbation and the Linearized Einstein equations
The solution to the vacuum Einstein equation \(G_{\mu\nu}=0\) that represents a stationary and spherically symmetric (non-spinning) BH is the Schwarzschild metrics \(g^{(0)}_{\mu\nu}\). The line element associated with this metric can be written in Schwarzschild coordinates as
\[ds^{2}_{(0)} =g^{(0)}_{\mu\nu}dx^{\mu}dx^{\nu}\] \[=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+\frac{r^{2}}{1-\chi^{2}}d\chi^{2}\] \[\quad+r^{2}\left(1-\chi^{2}\right)d\phi^{2}\,, \tag{1}\]
where \(M\) is the BH mass, \(\chi\equiv\cos\theta\) with \(\theta\) the polar angle, \(\phi\) is the azimuthal angle and
\[f(r)=1-\frac{2M}{r} \tag{2}\]
is the so-called Schwarzschild factor. For a Schwarzschild BH in these coordinates, the event horizon is located at \(r_{\textsc{h}}=2M\).
We now consider linear perturbations of the metric tensor, such that
\[g_{\mu\nu}=g^{(0)}_{\mu\nu}+\epsilon\;h_{\mu\nu}\,, \tag{3}\]
where \(g^{(0)}_{\mu\nu}\) is the background metric of Eq. (II), \(h_{\mu\nu}\) is the metric perturbation, and \(\epsilon\) is a bookkeeping parameter for the perturbations. The metric perturbation is a function of spacetime coordinates and it can be decomposed into temporal, radial and angular components. Under a parity transformation (i.e., the simultaneous shifts \(\theta\rightarrow\pi-\theta\) and \(\phi\rightarrow\phi+\pi\)) these components can be classified into odd (or "axial") and even (or "polar") sectors, depending on whether they pick up a factor of \((-1)^{\ell+1}\) or \((-1)^{\ell}\) respectively. This allows us to decompose \(h_{\mu\nu}\) as [89; 90; 91; 114]
\[h_{\mu\nu}(t,r,\chi,\phi)=h^{\text{odd}}_{\mu\nu}(t,r,\chi,\phi)+h^{\text{ even}}_{\mu\nu}(t,r,\chi,\phi)\,, \tag{4}\]
where2
Footnote 2: Our choice of signs for \(h_{3}\) and \(h_{4}\) is different from that in some of the literature, such as [115].
\[h^{\text{odd}}_{\mu\nu}=e^{im\phi-i\omega t}\begin{pmatrix}0&0&-im(1-\chi^{ 2})^{-1}h_{5}(r,\chi)&(1-\chi^{2})\partial_{\chi}h_{5}(r,\chi)\\ *&0&-im(1-\chi^{2})^{-1}h_{6}(r,\chi)&(1-\chi^{2})\partial_{\chi}h_{6}(r, \chi)\\ *&*&0&0\\ *&*&*&0\end{pmatrix}\,,\] (5a) and \[h^{\text{even}}_{\mu\nu}=-e^{im\phi-i\omega t}\begin{pmatrix}f(r)h_{1}(r, \chi)&h_{2}(r,\chi)&0&0\\ *&\frac{1}{f(r)}h_{3}(r,\chi)&0&0\\ *&*&r^{2}(1-\chi^{2})^{-1}h_{4}(r,\chi)&0\\ *&*&*&r^{2}(1-\chi^{2})h_{4}(r,\chi)\end{pmatrix}\,, \tag{5b}\]
and where we have made use of the Regge-Wheeler gauge [89; 114]. We have also assumed that both sectors depend on the same QNM frequency because both the axial and polar perturbations that are purely ingoing at the event horizon and outgoing at spatial infinity depend on the same complex QNM frequencies in GR, a manifestation of _iso-spectrality_. If one were to generalize this method to beyond-GR theories that break iso-spectrality, then the above assumption may have to be relaxed.
With the ansatz defined, we can now find the system of equations that the metric perturbations \(h_{i}(r,\chi)\;\forall i\in(1,6)\) must satisfy. Unlike in the case of early studies in BH perturbations by Regge and Wheeler [89], Zerilli [90] and Moncrief [91], we do not treat the odd and even perturbations separately. Considering them simultaneously will allow us, in the future, to extend the spectral approach to QNMs of Kerr BHs, where these two parities are coupled. Substituting Eq. (5) into the vacuum Einstein equation, one finds a system of ten coupled, partial differential equations to solve for the six unknown functions \(h_{i}(r,\chi)\). Only six of these equations, however, are independent of each other, so the remaining four can be eliminated by the use of perturbed Bianchi identities. In this paper, we will mainly focus on solving the \(\{tr,t\chi,t\phi,rr,r\chi,r\phi\}\) components, because we found empirically that this system is the most convenient to work with. In Sec. VI.3 and Appendix. B, we will show that using a different set of components of the linearized Einstein equations also allows us to find the Schwarzschild QNMs.
Let us now massage the linearized Einstein equations. First, note that the components of the background metric tensor \(g^{(0)}_{\mu\nu}\) in Schwarzschild coordinates, whose line element is in Eq. (1), are rational functions of \(r\) and \(\chi\). Therefore, the coefficient functions multiplying the metric perturbations \(h_{i}\) in the linearized Einstein equations must also be rational functions of \(r\) and \(\chi\), since they can only depend on background quantities and their derivatives. With this understanding, we can always express the \(i\)-th linearized field equation3, after appropriate factorization and multiplying through the common denominator, as
Footnote 3: Throughout this work, when multiplied by \(m\) or \(\omega\), \(i\) stands for \(\sqrt{-1}\). Otherwise, \(i\) stands for one of the components of the linearized Einstein equations.
\[\sum_{j=1}^{6}\sum_{\alpha,\beta=0}^{\alpha+\beta\leq 3}\sum_{\gamma=0}^{ 2}\sum_{\delta=0}^{d_{r}}\sum_{\sigma=0}^{d_{\chi}}\mathcal{G}_{i,\gamma, \delta,\sigma,\alpha,\beta,j}\omega^{\gamma}r^{\delta}\chi^{\sigma}\partial_{ r}^{\alpha}\partial_{\chi}^{\beta}h_{j}=0\,, \tag{6}\]
where \(\sum_{\alpha,\beta=0}^{\alpha+\beta\leq 3}\) is a summation starting from \(\alpha=0\) and \(\beta=0\) up to \(\alpha+\beta=3\) for all non-negative \(\alpha\) and \(\beta\), while \(\mathcal{G}_{i,\gamma,\delta,\sigma,\alpha,\beta,j}\) is a complex function of \(M\) and \(m\) only. The constants \(d_{r}\) and \(d_{\chi}\) are the degree of \(r\) and \(\chi\) of the coefficient of a given term in the equations respectively, which depend on the specific equation we are looking at and can thus be thought of to be dependent on the summation indices \(\alpha\), \(\beta\), \(i\), \(j\). When factorizing each of the linearized Einstein equations to obtain the common denominator, there can be prefactors, such as some powers of \(1-\chi^{2}\), \(r\) and \(r-r_{\textsc{H}}\), which contain no metric perturbation functions and are non-zero except at \(r=r_{\textsc{H}}\), \(r=\infty\) and \(\chi=\pm 1\). Since these common factors are never zero in the computational domain (except at the boundaries), we will divide by them to simplify the equations and improve the numerical stability of the linearized Einstein equations. Equations (6) represent a system of coupled, two-dimensional, third-order partial differential equations. Notice that the perturbed field equations for the even perturbations are at most second order, whereas for odd perturbations, due to \(\partial_{\chi}h_{i}\) for \(h_{i}\in\{h_{5},h_{6}\}\), the system of equations is at most third order.
### Spectral decomposition of the metric perturbations
In this subsection, we present the spectral decomposition along the radial and angular coordinates of our metric perturbations, introduced in the previous subsection. The metric perturbation functions \(h_{i}(r,\chi)\) that enter the linearized Einstein equations are functions of \(r\) and \(\chi\). Using separation of variables, we can write these functions through the product decomposition
\[h_{i}(r,\chi)=y_{i}(r)\Theta_{i}(\chi)\,,\quad i=1,..,6\,, \tag{7}\]
with no summation over \(i\) implied, where \(y_{i}\) are new functions of \(r\) only and \(\Theta_{i}\) are functions of \(\chi\) only.
Let us now determine the angular dependence of the metric perturbation functions. We express the angular dependence as a linear combination of spectral function of \(\chi\). To determine the explicit spectral basis, we note that in general, the angular dependence of metric perturbations can be expressed in terms of scalar, vector and tensor spherical harmonics [114; 116; 117], whose \(\chi\)-part is the associate Legendre polynomials of \(\chi\). This is also the spectral function of \(\chi\) used in the original Regge-Wheeler [89] and Zerilli-Moncrief calculations [90; 91]. Taking all these into account, we represent the \(\chi\) dependence using associated Legendre polynomials \(P_{\ell}^{m}(\chi)\) of degree and order4\((\ell,m)\), namely
Footnote 4: Though \(l\), the azimuthal number that labels QNMs, and \(\ell\), the degree of the associated Legendre polynomials in the product decomposition of the metric perturbation functions, are the same for a Schwarzschild BH background, this is not necessarily the same in general, which is why we use different symbols for them here.
\[\Theta_{i}(\chi)=\sum_{\ell=|m|}^{\infty}a_{i,\ell}\;P_{\ell}^{|m|}(\chi)\,. \tag{8}\]
Absorbing the \(a_{i,\ell}\) coefficients into the \(y_{i}\) functions via \(y_{i}^{\ell}(r)=a_{i,\ell}y_{i}(r)\), we then have
\[h_{i}(r,\chi)=\sum_{\ell=|m|}^{\infty}y_{i}^{\ell}(r)P_{\ell}^{|m|}(\chi)\,, \quad i=1,..,6\,. \tag{9}\]
In practice, only a finite number of associated Legendre polynomials need to be included in our approximations, so let \(\mathcal{N}_{\chi}\) represent the maximum number of terms kept in these sums. In principle, different metric perturbation functions (i.e. different \(h_{i}\)) could be represented by a different number of terms in the sum (i.e. \(\mathcal{N}_{\chi}\) could be different for different \(h_{i}\) functions), but to maximize the symmetry of the spectral representation, we choose the same \(\mathcal{N}_{\chi}\) for all \(i\).
With the representation of the angular sector determined, let us now discuss the radial sector. Using the above product decomposition of Eq. (9) in the left-hand side of Eq. (6), we can rewrite any component of the linearized Einstein equation as
\[\mathcal{G}_{i,\gamma,\delta,\sigma,\alpha,\beta,j}\omega^{ \gamma}r^{\delta}\chi^{\sigma}\partial_{r}^{\alpha}\chi\left\{\sum_{\ell=|m|}^ {\mathcal{N}_{\chi}+|m|}y_{j}^{\ell}(r)P_{\ell}^{|m|}(\chi)\right\}\] \[=\sum_{\ell=|m|}^{\mathcal{N}_{\chi}+|m|}H_{i}^{\ell}(r)P_{\ell}^ {|m|}(\chi)\,, \tag{10}\]
where this equation defines the functions \(H_{i}^{\ell}(r)\), and the repeated indices in the left-hand side of Eq. (10) implicitly represent the summations used in Eq. (6). Since the linearized Einstein equations must be satisfied, Eq. (10) implies that
\[H_{i}^{\ell}(r)=0 \tag{11}\]
for \(\ell=|m|,|m|+1,...,\mathcal{N}_{\chi}+|m|\) and \(i=\{1,6\}\).
Let us now derive an expression for the \(H_{i}^{\ell}(r)\) expressions through the use of the orthogonality properties of the associated Legendre polynomials. Multiplying Eq. (10) by another associated Legendre polynomial of different degree and integrating over \(\chi\), we find
\[H_{i}^{\ell}(r)=\mathcal{G}_{i,\gamma,\delta,\sigma,\alpha,\beta,j}\omega^{\gamma}r^{\delta}\mathcal{I}_{j}^{\ell,\sigma,\alpha,\beta}\,,\] \[\mathcal{I}_{j}^{\ell,\sigma,\alpha,\beta}=\mathcal{N}_{\ell,m} \sum_{\ell^{\prime}=|m|}^{\mathcal{N}_{\chi}+|m|}\partial_{r}^{\alpha}y_{j}^{ \ell^{\prime}}\int_{-1}^{+1}d\chi P_{\ell}^{|m|}(\chi)\chi^{\sigma}\partial_{ \chi}^{\beta}P_{\ell^{\prime}}^{|m|}(\chi)\,, \tag{12}\]
with
\[\mathcal{N}_{\ell,m}=(2\ell+1)\frac{(\ell-m)!}{(\ell+m)!}\,. \tag{13}\]
Eq. (11) then becomes
\[\mathcal{G}_{i,\gamma,\delta,\sigma,\alpha,\beta,j}\omega^{\gamma}r^{\delta} \mathcal{I}_{j}^{\ell,\sigma,\alpha,\beta}=0\,, \tag{14}\]
which can be thought of as a coupled system of ordinary differential equations for the \(y_{i}^{\ell}\) radial functions.
Let us now convert this coupled system of ordinary differential equations into first-order form. First, we observe that the linearized Einstein equations can contain at most second-order radial derivatives of \(y_{i}^{\ell}\); although the \(\alpha\) sum in Eq. (6) ranges up to \(\alpha+\beta\leq 3\), in practice when \(\alpha+\beta=3\) then \((\alpha,\beta)=(0,3),(1,2)\) or \((2,1)\), so \(\alpha=2\) at most. To convert this system of ordinary differential equations to first-order form, we now introduce the following auxiliary fields
\[Y_{i}^{\ell}\equiv\frac{dy_{i}^{\ell}}{dr}\,, \tag{15}\]
where again \(\ell=|m|,|m|+1,....,\mathcal{N}_{\chi}+|m|\). Let us now promote these auxiliary fields to free fields and define the collection of all fields \(\mathbf{y}\) through the shortcut notation \(\mathbf{y}=\{y_{i}^{\ell}\}\cup\{Y_{i}^{\ell}\}\), or more explicitly,
\[\mathbf{y} =(y_{1}^{|m|},y_{1}^{|m|+1},...,y_{1}^{|m|+\mathcal{N}_{\chi}},\] \[...,\] \[y_{6}^{|m|},y_{6}^{|m|+1},...,y_{6}^{|m|+\mathcal{N}_{\chi}},\] \[Y_{1}^{|m|},Y_{1}^{|m|+1},...,Y_{1}^{|m|+\mathcal{N}_{\chi}},\] \[...,\] \[Y_{6}^{|m|},Y_{6}^{|m|+1},...,Y_{6}^{|m|+\mathcal{N}_{\chi}})^{ \mathrm{T}}\,.\]
Therefore, the resulting first-order system of ordinary differential equations of equations can then be written as
\[\mathbb{Q}(r)\frac{d\mathbf{y}}{dr}=\mathbb{R}(r)\mathbf{y}, \tag{17}\]
where \(\mathbb{Q}(r)\) and \(\mathbb{R}(r)\) are square matrices of order \(\mathcal{N}_{\chi}\cdot(6+6)\), whose elements are functions of the radial coordinate \(r\) only. The procedure to solve for the QNMs now reduces to solving the above equation. Before doing so, however, we will simplify this system by peeling off the asymptotic behavior of the solution near the event horizon and spatial infinity in the next section, and then absorbing it into the radial ansatz.
Eq. (17) depends on \(m\) only because of the metric-perturbation ansatz and the spectral basis of \(\chi\) that we used. The original calculations of Regge and Wheeler [89], and of Zerilli and Moncrief [90; 91], however, lead to master equations that do not explicitly depend on \(m\); this constant does appear in their metric ansatz but it is eliminated when they decouple the perturbed field equations and derive their master equations. This implies that the QNM frequencies of a perturbed Schwarzschild BH should be \(m\)-independent, which is physically reasonable for gravitational perturbations of a spherically-symmetric background spacetime. Our equations for the QNM frequencies [Eq. (17)], however, do depend on \(m\), and this is precisely because we are not decoupling the perturbed field equations to find master equations. Such \(m\) dependence, nonetheless, can be put to good use: if our numerical calculations are correct, the QNM frequencies we calculate numerically should be invariant under shifts
of \(m\) in Eq. (17), i.e. we should be able to compute QNM frequencies for any choice of \(m\) in this equation and find the same numerical answer. We apply this cross-check in Sec. VI.1 and find that our results for the QNM frequencies we calculate are indeed \(m\) independent.
## III Study of asymptotic behavior of linearized field equations
To perform a spectral decomposition of the metric perturbations defined in Sec. 5, we need to construct an ansatz for the \(y_{i}(r)\) functions that appear in Eq. (7). This ansatz must satisfy the appropriate boundary conditions at the BH event horizon and at spatial infinity. In order to simplify later analysis, we will construct a global ansatz for \(y_{i}(r)\) by pulling out the asymptotic behavior of the solution at the two boundaries, similar to what was done in [100; 102]. In this section, we present this asymptotic analysis. Readers familiar with this topic may wish to skip to Sec. III.2, where we summarize the results of this asymptotic analysis.
### Inversion of coefficient matrix
Let us begin by simplifying the first-order differential system of Eq. (17). Following [100; 102], we multiply this equation by \(\mathbb{Q}^{-1}(r)\) to recast it as
\[\frac{d\mathbf{y}}{dr}=\tilde{\mathbb{M}}(r)\mathbf{y}\,, \tag{18}\]
where \(\tilde{\mathbb{M}}(r)\) is another square matrix of order \(\mathcal{N}_{\chi}\cdot(6+6)\).
For a Schwarzschild or Kerr BH background, \(\mathbb{Q}(r)\) is singular because some \(y_{i}\) are _algebraic variables_. Such variables are defined as those whose radial derivative is not present in the selected ordinary differential equations. If this is the case, then some columns and rows in \(\mathbb{Q}(r)\) are null (see Fig. 1 for a graphical illustration), which renders \(\mathbb{Q}(r)\) non-invertible and singular. Algebraic variables can arise for two reasons. One reason is that the selected components of the linearized Einstein equations do not contain any explicit radial derivatives of some components of the metric perturbation functions. For example, the \(\{tr,t\theta,t\phi,rr,r\theta,r\phi\}\) equations do not contain \(\partial_{r}h_{3}\) and \(\partial_{r}^{2}h_{3}\), and therefore \(y_{3}\) and \(Y_{3}\) are algebraic. Another reason is that, although the selected components of the linearized Einstein equations do contain radial derivatives of the \(h_{i}\) functions, these can be eliminated by substituting in other components of the linearized Einstein equations. If the \(\mathrm{rank}(\mathbb{Q})<\mathcal{N}_{\chi}\cdot(6+6)\), then the system of ordinary differential equations contains \(\mathcal{N}_{\chi}\cdot(6+6)-\mathrm{rank}(\mathbb{Q})\) algebraic equations. All variables that are not algebraic (i.e. those whose radial derivatives are present and cannot be eliminated from the system of ordinary differential equations) will be called _differential variables_.
Though \(\mathbb{Q}(r)\) is singular, we can still write Eq. (17) in the form of Eq. (18) through the following procedure:
1. We first identify \(\mathcal{N}_{\chi}\cdot(6+6)-\mathrm{rank}(\mathbb{Q})\) algebraic equations through elementary row operations. This step gives \(\mathrm{rank}(\mathbb{Q})\) differential equations and some zero rows of \(\mathbb{Q}\).
2. We then identify the algebraic variable(s) of Eq. (17) by reading the column(s) of \(\mathbb{Q}(r)\) that is (are) null. For, say, \(N_{\mathrm{alg}}\) algebraic variables identified, we then select \(N_{\mathrm{alg}}\) differential equations. This allows us to solve for the \(N_{\mathrm{alg}}\) algebraic variables in terms of the differential variables and their first-order derivatives. These results can be verified to be independent of the choice of the differential equations made for these \(N_{\mathrm{alg}}\) algebraic variables. Substituting these solved algebraic variables into the remaining unsolved equations leaves us with a system of \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}\) differential variables. For convenience, we represent these \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}\) unsolved differential variables by \(\tilde{\mathbf{y}}\). Therefore, the remaining unsolved equations can then be written as \[\tilde{\mathbb{Q}}(r)\frac{d\tilde{\mathbf{y}}}{dr}=\tilde{\mathbb{R}}(r) \tilde{\mathbf{y}},\] (19) where \(\tilde{\mathbb{Q}}(r)\) and \(\tilde{\mathbb{R}}(r)\) are two square matrices of order \([\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}]\). Since \(N_{\mathrm{alg}}\) differential equations are eliminated, \(\mathrm{rank}(\tilde{\mathbb{Q}})=\mathrm{rank}(\mathbb{Q})-N_{\mathrm{alg}}\).
3. Some of the algebraic variables may contain \(r\)-derivatives, which upon substitution may convert some of the algebraic equations into differential
Figure 1: Schematic illustration of the structure of the system of ordinary differential equations obtained by spectral decomposition of the linearized Einstein field equations. The vector \(\mathbf{y}\) is related to the amplitude of metric perturbations at a given angular position. The matrix on the left-hand side is the coefficient matrix \(\mathbb{Q}(r)\) of the \(r\)-derivatives of the system of ordinary differential equations. The null column (red rectangle) indicates the existence of algebraic variables, which are those whose \(r\)-derivatives are not contained in the differential equations. The null row (blue rectangle) indicates the existence of algebraic equations in different components of \(\mathbf{y}\) (green rectangle).
equations. Using elementary row operations, we can then identify \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}-\mathrm{rank}(\tilde{\mathbb{Q}})\) algebraic equations. These algebraic equations allow us to express \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}-\mathrm{rank}(\tilde{\mathbb{Q}})\) differential variables in terms of the remaining \(\mathrm{rank}(\tilde{\mathbb{Q}})\) differential variables and possibly the algebraic variables. We can then eliminate another \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}-\mathrm{rank}(\tilde{\mathbb{Q}})\) equations from the system by differentiating \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}-\mathrm{rank}(\tilde{\mathbb{Q}})\) differential variables and expressing the first-order radial derivatives of \(\mathcal{N}_{\chi}\cdot(6+6)-N_{\mathrm{alg}}-\mathrm{rank}(\tilde{\mathbb{Q}})\) differential variables with the remaining differential variables and their first-order radial derivatives. This leaves us with a system of \(\mathrm{rank}(\tilde{\mathbb{Q}})\) ordinary differential equations of \(\mathrm{rank}(\tilde{\mathbb{Q}})\) differential variables. We then denote the \(\mathrm{rank}(\tilde{\mathbb{Q}})\) differential variables with a \(\mathrm{rank}(\tilde{\mathbb{Q}})\)-vector \(\mathbf{z}\) and the resulting system can then be expressed as
\[\frac{d\mathbf{z}}{dr}=\mathbb{M}(r)\mathbf{z}, \tag{20}\]
where \(\mathbb{M}(r)\) is a \(\mathrm{rank}(\tilde{\mathbb{Q}})\times\mathrm{rank}(\tilde{\mathbb{Q}})\) square matrix, such that \(\mathrm{rank}(\mathbb{M})=\mathrm{rank}(\tilde{\mathbb{Q}})=\mathrm{rank}( \mathbb{Q})-N_{\mathrm{alg}}\).
The procedure presented above allows us to construct a differential system without singular matrices, but in order to calculate the asymptotic behavior of the solution we must diagonalize it. We will do so through the algorithm presented in [102], whose essence involves asymptotically expanding \(\mathbb{M}(r)\) as a matrix-valued series in (positive or negative) powers of \(r\) at spatial infinity and \(r-r_{\mathrm{H}}\) at the event horizon, both of which are irregular singular points. Explicitly, at spatial infinity, we asymptotically expand \(\mathbb{M}(r)\) as
\[\mathbb{M}(r)=\sum_{k=-1}^{p_{\infty}}\mathbb{M}_{k}r^{k}+\mathcal{O}\left( \frac{1}{r^{2}}\right)\,. \tag{21}\]
Here \(p_{\infty}\) is the Poincare rank of \(\mathbb{M}(r)\) at spatial infinity, and \(\mathbb{M}_{k}\) are matrices independent of \(r\). We have also discarded terms that decay faster than \(r^{-1}\) at \(r=\infty\), as they have negligible effects at spatial infinity. The asymptotic behavior at the horizon can be studied similarly by a change of variable. Defining \(\epsilon=(r-r_{\mathrm{H}})^{-1}\), where recall that \(r_{\mathrm{H}}\) is the radial location of the event horizon, the differential system of Eq. (20) is correspondingly transformed to
\[\frac{d\mathbf{z}}{d\epsilon}=-\frac{1}{\epsilon^{2}}\mathbb{M}(\epsilon) \mathbf{z}, \tag{22}\]
where \(\mathbb{M}(\epsilon)=\mathbb{M}(r(\epsilon))\) is the asymptotic expansion of \(\mathbb{M}(r)\) near the event horizon. Since the leading-order term in an \(\epsilon\ll 1\) expansion of \(\mathbb{M}_{\epsilon}\) may be nilpotent, we discard the terms that decay faster than \(\epsilon^{-2}\)[102],
\[-\frac{1}{\epsilon^{2}}\mathbb{M}(\epsilon)=\sum_{k=-2}^{p_{H}}\mathbb{M}_{k} \epsilon^{k}+\mathcal{O}\left(\frac{1}{\epsilon^{3}}\right), \tag{23}\]
where \(p_{H}\) is the Poincare rank of \(\epsilon^{-2}M(\epsilon)\) at the horizon. The algorithm in [102] can reduce the Poincare rank and consecutively diagonalize every \(\mathbb{M}_{k}\) through successive transformations. Once every \(\mathbb{M}_{r}\) is diagonalized, we can immediately integrate the system of ordinary differential equations to give the asymptotic behavior of \(\mathbf{z}\). In Appendix B, we provide an explicit and concrete example of the implementation of the above procedure.
Although our spectral analysis formalism essentially requires the algorithm presented in [100; 102], unlike the previous works, our formalism does not require the decoupling between the \(r\)- and \(\chi\)-dependence of \(h_{i}(r,\chi)\), thereby enabling us to estimate the asymptotic behavior of the metric perturbations without explicitly separating \(r\) and \(\chi\), and rendering the spectral method more easily applicable to non-GR BH spacetimes.
### Summary of asymptotic behavior
Let us now summarize the results of applying the above procedure to determine the asymptotic behavior of the metric perturbation functions. Since we aim to study GW QNMs, we require purely ingoing boundary conditions at the horizon \(r_{\mathrm{H}}\) and purely outgoing boundary conditions at spatial infinity, such that
\[h_{i}\propto\left\{\begin{array}{ll}e^{-i\omega r_{*}}\,,&r\to r_{\mathrm{H }}\,,\\ e^{i\omega r_{*}}\,,&r\to\infty\,,\end{array}\right. \tag{24}\]
where \(r_{*}\) is the tortoise coordinate, and for a Schwarzschild BH in Schwarzschild coordinates is given by
\[r_{*}=r+2M\log\left(\frac{r}{2M}-1\right)\,. \tag{25}\]
Applying the above procedure (see App. B for a concrete example), the asymptotic behavior of \(y_{i}^{\ell}(r)\) that is consistent with these boundary conditions is
\[\lim_{r\to\infty}y_{i}^{\ell}(r)\sim e^{i\omega r}r^{i\omega r_{ \mathrm{H}}+\rho_{\infty}^{(i)}}\sum_{k=0}^{\infty}\frac{a_{\ell k}}{r^{k}}, \tag{26}\] \[\lim_{r\to r_{\mathrm{H}}}y_{i}^{\ell}(r)\sim(r-r_{\mathrm{H}})^{-i \omega r_{\mathrm{H}}-\rho_{\mu}^{(i)}}\sum_{k=0}^{\infty}b_{\ell k}(r-r_{ \mathrm{H}})^{k}, \tag{27}\]
where \(a_{\ell k}\) and \(b_{\ell k}\) are constants and
\[\rho_{H}^{(i)} =\begin{cases}1,&\text{for $i\neq 4$ and $5$},\\ 0,&\text{otherwise},\end{cases} \tag{28}\] \[\rho_{\infty}^{(i)} =\begin{cases}1,&\text{for $i\neq 4$},\\ 0,&\text{for $i=4$}.\end{cases}\]
Note that the controlling factors, the factors multiplying the series, do not depend on \(\ell\). Appendix B shows that this asymptotic behavior is consistent with that in the literature.
Let us conclude this section by stressing that Eq. (26) is the asymptotic expansion of the metric perturbations
at spatial infinity and the event horizon [118, 102], as we mentioned before. This is because these expansions are obtained by solving Eq. (20) with \(\mathbb{M}(r)\) replaced by its asymptotic expansion at \(r=\infty\) and \(r=r_{\text{n}}\). Both of these expansion points are irregular and singular. One can therefore show that the approximate solutions satisfy the criteria of an asymptotic series [119].
## IV Separation of the linearized Einstein equations through a spectral decomposition
In this section, we present a spectral decomposition of the linearized Einstein equations in Eq. (6) through the use of the product decomposition presented in Eq. (7), or equivalently Eq. (9). We begin with a refinement of the radial ansatz, which we then apply to the linearized Einstein equations to turn the differential system into a linear algebra problem.
### Refinements of the radial functions
Since the radial functions \(y_{i}^{\ell}(r)\) must satisfy the appropriate boundary conditions at the event horizon and at spatial infinity, it is convenient to pull out this asymptotic behavior in the radial ansatz. Let us then write
\[y_{i}^{\ell}(r)=A_{i}^{\ell}(r)u_{i}^{\ell}(r)\,, \tag{29}\]
where \(A_{i}^{\ell}(r)\) is the asymptotic controlling factor of the radial function \(y_{i}^{\ell}(r)\) and \(u_{i}^{\ell}(r)\) is a correction factor that is both bounded and has trivial boundary conditions. Using Eq. (26), we are motivated to construct \(A_{i}^{\ell}(r)\) as
\[A_{i}^{\ell}(r)=e^{i\omega r}r^{i\omega r_{\text{H}}+\rho_{\infty}^{(i)}} \left(\frac{r-r_{H}}{r}\right)^{-i\omega r_{\text{H}}-\rho_{H}^{(i)}}\,, \tag{30}\]
because then \(u_{i}^{\ell}(r)\) approaches to a constant both at the event horizon and spatial infinity.
Since the computational domain is finite, let us introduce one more refinement of our ansatz through compactification. More specifically, the radial coordinate \(r\) is semi-infinite, and thus, it is computationally inconvenient to perform spectral decompositions along this coordinate because the decomposition involves the evaluation of improper integrals. Let us then reduce the computational complexity by defining the compactified variable, \(z\)[100, 102], via
\[z=\frac{2r_{\text{n}}}{r}-1, \tag{31}\]
so that \(u_{i}\) is a bounded function in the finite domain \(z\in[-1,+1]\).
Finally, since \(u_{i}^{\ell}(z)\) is finite for \(z\in[-1,+1]\), we can express \(u_{i}^{\ell}(z)\) as a linear combination of a spectral function of \(z\). In this work, we choose to represent \(u_{i}^{\ell}(z)\) through a Chebyshev polynomials \(T_{n}(z)\) basis, which is uniformly convergent [120]. These functions are commonly used in numerical studies of gravitational physics [100, 102, 109, 110, 121, 122, 123, 124, 125, 126, 127, 128] for their computational advantages and accuracy when approximating certain functions.
Combining all of these refinements, Eq. (9) with Eqs. (29), (30) and a Chebyshev polynomial expansion takes the form
\[h_{i}(z,\chi)=A_{i}(z)\sum_{n=0}^{\infty}\sum_{\ell=|m|}^{\infty}v_{i}^{n\ell} T_{n}(z)P_{\ell}^{|m|}(\chi). \tag{32}\]
where \(v_{i}^{n\ell}\) are _constant_ coefficients, which one can think of as the component of \(u_{i}(r)\) along the basis of \(T_{n}(z)\) and \(P_{\ell}^{|m|}(\chi)\). Note that we have dropped the superscript \(\ell\) from \(A_{i}(r)\), as this quantity is the same for all \(\ell\), and we have factorized it out of the summation. Eq. (32) gives us the full spectral decomposition of the metric perturbation along the angular coordinate \(\chi\) and the compactified spatial coordinate \(z\).
In practice, however, we will only include a _finite_ number of spectral bases in our representation of the metric perturbation functions. More precisely, henceforth we will set
\[h_{i}(r,\chi)=A_{i}(r)\sum_{n=0}^{\mathcal{N}_{z}}\sum_{\ell=|m|}^{\mathcal{N}_ {\chi}+|m|}v_{i}^{n\ell}T_{n}(z)P_{\ell}^{|m|}(\chi)\,, \tag{33}\]
where \(\mathcal{N}_{z}\) and \(\mathcal{N}_{\chi}\) are respectively the number of Chebyshev polynomials and associated Legendre polynomials included. In the rest of this paper, we will investigate how our calculation of the QNM frequencies is affected by choice of \(\mathcal{N}_{z}\) and \(\mathcal{N}_{\chi}\).
Before we substitute Eq. (33) into the linearized Einstein equations, let us consider what type of series solution Eq. (33) is. Let us first consider this series expansion near spatial infinity. Since \(z=2r_{\text{n}}/r-1\), the Chebyshev polynomials of \(z\) are actually power series in \(r^{-1}\). Thus, as \(r\to\infty\), Eq. (33) is asymptotic to
\[\begin{split} h_{i}(z,\chi)\sim& e^{i\omega r}r^{i \omega r_{\text{H}}+\rho_{\infty}^{(i)}}\\ &\times\sum_{\ell}\left(\tilde{a}_{0\ell}+\frac{\tilde{a}_{1\ell} }{r}+\frac{\tilde{a}_{2\ell}}{r^{2}}+...\right)P_{\ell}^{|m|}(\chi),\end{split} \tag{34}\]
where \(\tilde{a}_{k\ell}\) (\(k=0,1,2,...\)) are constants. If Eq. (34) is to agree with Eq. (26), \(\tilde{a}_{i\ell}=a_{i\ell}\). Moreover, since Eq. (26) is an asymptotic expansion of the metric perturbation at spatial infinity, by the uniqueness of asymptotic expansions [119], Eq. (33) is also an asymptotic expansion of the metric perturbations as \(r\to\infty\).
Let us now study the behavior of the function near the horizon. As \(r\to r_{\text{n}}\), \(z=2r_{\text{n}}/r-1\sim(r-r_{\text{n}})/r_{\text{n}}\), the Chebyshev polynomials of \(z\) are asymptotic to power series of \(r-r_{\text{n}}\) as \(r\to r_{\text{n}}\). Thus, near the event horizon, Eq. (33) is asymptotic to
\[\begin{split} h_{i}(z,\chi)\sim& e^{i\omega r}r^{i \omega r_{\text{H}}+\rho_{\infty}^{(i)}}\\ &\times\sum_{\ell}\left[\tilde{b}_{0\ell}+\tilde{b}_{1\ell}(r-r_{ \text{n}})+...\right]P_{\ell}^{|m|}(\chi),\end{split} \tag{35}\]
where \(\tilde{b}_{i\ell}\) are constants. If Eq. (35) is to agree with Eq. (27), then \(\tilde{b}_{i\ell}=b_{i\ell}\). Therefore, applying the same uniqueness argument presented above, Eq. (33) is also an asymptotic expansion of the metric perturbations as \(r\to r_{\text{\tiny H}}\). In other words, even though, by itself, the series\(\sum_{n}v_{i}^{n\ell}T_{n}(z)\) represents a continuous function that can be approximated by the Chebyshev polynomials with polynomial convergence [120], as written in Eq. (33), the entire series behaves like an asymptotic one near the irregular singular points of the domain, due to the asymptotic nature of the controlling factor \(A_{i}(r)\).
### The linearized Einstein equations as a linear algebraic eigenvalue problem
Let us now use the spectral decomposition of the metric perturbation functions of Eq. (32) in the linearized Einstein equations to transform the latter into a system of linear algebraic equations. First, we note that the first or second radial derivatives of the asymptotic controlling factor are proportional to the product of a rational function of \(r\) and the controlling factor itself. Therefore, on substituting Eq. (33) into the linearized Einstein equations, we can factorize the partial differential equations as
\[\begin{split}\sum_{j=1}^{6}\sum_{\alpha,\beta=0}^{\alpha+\beta \leq 3}&\sum_{\gamma=0}^{2}\sum_{\delta=0}^{d_{\text{\tiny L}}} \sum_{\sigma=0}^{d_{\text{\tiny X}}}\mathcal{K}_{i,\gamma,\delta,\sigma, \alpha,\beta,j}\omega^{\gamma}z^{\delta}\chi^{\sigma}\\ &\times\partial_{z}^{\alpha}\partial_{\chi}^{\beta}\Bigg{\{}\sum _{n=0}^{\mathcal{N}_{x}}\sum_{\ell=|m|}^{\mathcal{N}_{x}+|m|}v_{j}^{n\ell}T_{ n}(z)P_{\ell}^{|m|}(\chi)\Bigg{\}}=0\,.\end{split} \tag{36}\]
Here \(d_{z}\) and \(d_{\chi}\) are the degree of \(z\) and \(\chi\) of the coefficient of the partial derivative \(\partial_{z}^{\alpha}\partial_{\chi}^{\beta}\{...\}\) in the equations respectively, while \(\mathcal{K}_{i,\alpha,\beta,\gamma,\delta,\sigma,j}\) is a complex number that depends on \(M\) and \(m\). As Eq. (36) now involves only ordinary derivatives of the spectral functions with respect to the respective coordinates, we make use of their defining equations to factor and simplify Eq. (36), namely
\[\begin{split}\frac{d^{2}T_{n}}{dz^{2}}&=\frac{1}{1- z^{2}}\left(z\frac{dT_{n}}{dz}-n^{2}T_{n}\right),\\ \frac{d^{2}P_{\ell}^{|m|}}{d\chi^{2}}&=\frac{1}{1- \chi^{2}}\Big{(}2\chi\frac{dP_{\ell}^{|m|}}{d\chi}-\ell(\ell+1)P_{\ell}^{|m|} \\ &\qquad\qquad\qquad-\frac{m^{2}}{1-\chi^{2}}P_{\ell}^{|m|}\Big{)}. \end{split} \tag{37}\]
These equations allow us to pull out more factors of \(1-\chi^{2},1-z\) or \(1+z\), further simplifying the Eq. (36).
To simplify our notation, we now rewrite the left-hand side of Eq. (36) in terms of the spectral functions as
\[\sum_{n=0}^{\mathcal{N}_{x}}\sum_{\ell=|m|}^{\mathcal{N}_{x}+|m|}w_{i}^{n\ell} T_{n}(z)P_{\ell}^{|m|}(\chi)=0\,, \tag{38}\]
where \(w_{i}^{n\ell}\) is hiding much of the complexity of Eq. (36). The orthogonality of \(T_{n}(z)P_{\ell}^{|m|}(\chi)\) implies that \(w_{i}^{n\ell}=0\) for every \(i,n\) and \(\ell\). Comparing Eq. (36) and Eq. (10), we can relate \(w_{i}^{n\ell}\) to \(v_{i}^{n\ell}\) by a linear combination,
\[w_{i}^{n\ell}=\sum_{j=1}^{6}\sum_{n^{\prime}=0}^{\mathcal{N}_{x}}\sum_{\ell^{ \prime}=|m|}^{\mathcal{N}_{x}+|m|}\left[\mathbb{D}_{n\ell,n^{\prime}\ell^{ \prime}}(\omega)\right]_{ij}v_{j}^{n^{\prime}\ell^{\prime}}=0, \tag{39}\]
where \(\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime}}(\omega)\) are quadratic matrix polynomials of \(\omega\),
\[\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime}}(\omega)=\sum_{\gamma=0}^{2} \mathbb{D}_{n\ell,n^{\prime}\ell^{\prime},\gamma}\omega^{\gamma}, \tag{40}\]
and \(\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime},\gamma}\) are constant \(6\times 6\) matrices, whose \(ij\)-th element is given by
\[\begin{split}&\left[\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime},0} \right]_{ij}\\ &=\mathcal{N}\int_{-1}^{+1}dz\int_{-1}^{+1}d\chi(1-z^{2})^{-\frac{1 }{2}}T_{n}(z)P_{\ell}^{|m|}(\chi)\\ &\qquad\qquad\times\mathcal{K}_{i,0,\delta,\sigma,\alpha,\beta,j }z^{\delta}\chi^{\sigma}\partial_{z}^{\alpha}\partial_{\chi}^{\beta}\left[T_{n^ {\prime}}(z)P_{\ell^{\prime}}^{|m|}(\chi)\right]\,,\\ &\left[\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime},1}\right]_{ij}\\ &=\mathcal{N}\int_{-1}^{+1}dz\int_{-1}^{+1}d\chi(1-z^{2})^{-\frac{ 1}{2}}T_{n}(z)P_{\ell}^{|m|}(\chi)\\ &\qquad\qquad\times\mathcal{K}_{i,1,\delta,\sigma,\alpha,\beta,j }z^{\delta}\chi^{\sigma}\partial_{z}^{\alpha}\partial_{\chi}^{\beta}\left[T_{n^ {\prime}}(z)P_{\ell^{\prime}}^{|m|}(\chi)\right]\,,\\ &\left[\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime},2}\right]_{ij}\\ &=\mathcal{N}\int_{-1}^{+1}dz\int_{-1}^{+1}d\chi(1-z^{2})^{-\frac{ 1}{2}}T_{n}(z)P_{\ell}^{|m|}(\chi)\\ &\qquad\qquad\times\mathcal{K}_{i,2,\delta,\sigma,\alpha,\beta,j }z^{\delta}\chi^{\sigma}\partial_{z}^{\alpha}\partial_{\chi}^{\beta}\left[T_{n^ {\prime}}(z)P_{\ell^{\prime}}^{|m|}(\chi)\right]\,.\end{split} \tag{41}\]
Here the repeated indices implicitly represent the summation defined in Eq. (39) (except for \(\gamma\)), and the prefactor \(\mathcal{N}\) is
\[\mathcal{N}=\begin{cases}\frac{2\ell+1}{\pi}\frac{(\ell-m)!}{(\ell+m)!}& \text{if}\quad n\neq 0\\ \frac{2\ell+1}{2\pi}\frac{(\ell-m)!}{(\ell+m)!}&\text{if}\quad n=0\,. \end{cases} \tag{42}\]
Eq. (39) can be cast into a quadratic eigenvalue problem with the QNM frequencies of the perturbed Schwarzschild BH being its generalized eigenvalues. To see this, we first introduce the following vector notation:
\[\begin{split}\mathbf{v}_{n\ell}&=\left(v_{1}^{n\ell},v_ {2}^{n\ell},v_{3}^{n\ell},v_{4}^{n\ell},v_{5}^{n\ell},v_{6}^{n\ell}\right)^{ \text{T}}\,,\\ \mathbf{w}_{n\ell}&=\left(w_{1}^{n\ell},w_{2}^{n\ell},w_ {3}^{n\ell},w_{4}^{n\ell},w_{5}^{n\ell},w_{6}^{n\ell}\right)^{\text{T}}\,. \end{split} \tag{43}\]
Then Eq. (39) can be written as
\[\mathbf{w}_{n\ell}=\sum_{n^{\prime}=0}^{\mathcal{N}_{x}}\sum_{\ell^{\prime}=|m|}^{ \mathcal{N}_{x}+|m|}\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime}}(\omega)\mathbf{v} _{n^{\prime}\ell^{\prime}}=0\,, \tag{44}\]
where the \(\mathbb{D}_{n\ell,n^{\prime}\ell^{\prime}}\) matrix is now dotted into our new vector \(\mathbf{v}_{n^{\prime}\ell^{\prime}}\). Furthermore, let us define a vector \(\mathbf{v}\) and \(\mathbf{w}\), which respectively store all \(\mathbf{v}_{n\ell}\) and \(\mathbf{w}_{n\ell}\),
\[\mathbf{v}=\left\{\mathbf{v}_{00}^{\mathrm{T}},\mathbf{v}_{01}^{ \mathrm{T}},...,\mathbf{v}_{0\mathcal{N}_{\mathrm{x}}}^{\mathrm{T}},..., \mathbf{v}_{1\mathcal{N}_{\mathrm{x}}}^{\mathrm{T}},...,\mathbf{v}_{\mathcal{N }_{\mathrm{x}}\mathcal{N}_{\mathrm{x}}}^{\mathrm{T}}\right\}^{\mathrm{T}}, \tag{45}\] \[\mathbf{v}_{n\ell}=\left(v_{1}^{n\ell},v_{2}^{n\ell},v_{3}^{n\ell },v_{4}^{n\ell},v_{5}^{n\ell},v_{6}^{n\ell}\right)^{\mathrm{T}}.\]
and the following block matrix,
\[\tilde{\mathbb{D}}(\omega)= \tag{46}\] \[\left(\begin{array}{cccccccc}\mathbb{D}_{0[m,0]m}&\mathbb{D}_{0 [m],0(1+|m|)}&...&\mathbb{D}_{0[m],0\mathcal{N}_{\mathrm{x}}}&...&\mathbb{D}_ {0[m],1\ell_{\max}}&...&\mathbb{D}_{0[m],\mathcal{N}_{\mathrm{x}}\ell_{\max}} \\ \mathbb{D}_{0(1+|m|),0|m|}&\mathbb{D}_{0(1+|m|),0(1+|m|)}&...&\mathbb{D}_{0(1+ |m|),0\mathcal{N}_{\mathrm{x}}}&...&\mathbb{D}_{0(1+|m|),1\ell_{\max}}&...& \mathbb{D}_{0(1+|m|),\mathcal{N}_{\mathrm{x}}\ell_{\max}}\\...&...&...&...&...&...&...&...\\ \mathbb{D}_{0(\mathcal{N}_{\mathrm{x}}+|m|),0|m|}&\mathbb{D}_{0(\mathcal{N}_{ \mathrm{x}}+|m|),0(1+|m|)}&...&\mathbb{D}_{0(\mathcal{N}_{\mathrm{x}}+|m|),0 \mathcal{N}_{\mathrm{x}}}&...&\mathbb{D}_{0(\mathcal{N}_{\mathrm{x}}+|m|),1 \ell_{\max}}&...&\mathbb{D}_{0(\mathcal{N}_{\mathrm{x}}+|m|),\mathcal{N}_{ \mathrm{x}}\ell_{\max}}\\...&...&...&...&...&...&...&...&...\\ \mathbb{D}_{1\ell_{\max,0[m|]}}&\mathbb{D}_{1\ell_{\max,0(1+|m|)}}&...&\mathbb{ D}_{1\ell_{\max,0\mathcal{N}_{\mathrm{x}}}}&...&\mathbb{D}_{1\ell_{\max,1\ell _{\max}}}&...&\mathbb{D}_{1\mathcal{N}_{\mathrm{x}}\ell_{\max}}\\...&...&...&...&...&...&...&...&...\\ \mathbb{D}_{\mathcal{N}_{\mathrm{x}}\ell_{\max,0[m|]}}&\mathbb{D}_{\mathcal{N}_ {\mathrm{x}}\ell_{\max,0(1+|m|)}}&...&\mathbb{D}_{\mathcal{N}_{\mathrm{x}}\ell _{\max,0\mathcal{N}_{\mathrm{x}}}}&...&\mathbb{D}_{\mathcal{N}_{\mathrm{x}} \ell_{\max,1\ell_{\max}}}&...&\mathbb{D}_{\mathcal{N}_{\mathrm{x}}\ell_{\max, \mathcal{N}_{\mathrm{x}}}\ell_{\max}}\end{array}\right).\]
Then, the system of linear vector equations (Eq. (39)) can be more compactly written as
\[\tilde{\mathbb{D}}(\omega)\mathbf{v}=\left[\tilde{\mathbb{D}}_{0}+\tilde{ \mathbb{D}}_{1}\omega+\tilde{\mathbb{D}}_{2}\omega^{2}\right]\mathbf{v}= \mathbf{0}\,, \tag{47}\]
which is a quadratic eigenvalue problem. Since \(\mathbf{v}\neq\mathbf{0}\) in the ringdown, \(\det[\tilde{\mathbb{D}}(\omega)]=0\) for QNM frequencies.
Numerically solving this quadratic eigenvalue equation, however, is computationally demanding. We can improve the numerical efficiency if we define,
\[\mathbf{x}=\left(\begin{array}{c}\mathbf{v}\\ \omega\mathbf{v}\end{array}\right), \tag{48}\]
so that the quadratic eigenvalue problem is transformed5 into a generalized eigenvalues problem that is linear in \(\omega\)[100, 102, 129, 130], namely
Footnote 5: In numerical linear algebra, such a transformation is more commonly known as “linearization” [129, 130]. However, through this paper, the name “linearization” has been reserved solely for the linearization of the Einstein equation. To avoid confusion, we call the process that casts a quadratic eigenvalue problem into a generalized eigenvalue problem a “transformation”.
\[M_{0}\mathbf{x}=-\omega M_{1}\mathbf{x}\,,\quad M_{0}=\begin{pmatrix}\tilde{ \mathbb{D}}_{0}&\tilde{\mathbb{D}}_{1}\\ 0&I\end{pmatrix},\;M_{1}=\begin{pmatrix}0&\tilde{\mathbb{D}}_{2}\\ -I&0\end{pmatrix}. \tag{49}\]
The QNM frequencies of the Schwarzschild BH are then the generalized eigenvalues of Eq. (49). The converse, however, is not true: not every generalized eigenvalue of Eq. (49) is a QNM frequency. As we will see in the next section, many surplus eigenvalues, which are not physically meaningful, will emerge, but we will develop a systematic method to identify the meaningful ones.
To explicitly illustrate how one can derive Eq. (47) from Eq. (44), let us consider an example with \(N_{z}=1\) and \(N_{\chi}=0\). In this example, the only components of \(\mathbf{w}_{n\ell}\) are
\[\mathbf{w}_{02}=\mathbb{D}_{02,02}(\omega)\mathbf{v}_{02}+ \mathbb{D}_{02,12}(\omega)\mathbf{v}_{12}=\mathbf{0}, \tag{50}\] \[\mathbf{w}_{12}=\mathbb{D}_{12,02}(\omega)\mathbf{v}_{02}+ \mathbb{D}_{12,12}(\omega)\mathbf{v}_{12}=\mathbf{0}.\]
Hence, as a block matrix, \(\tilde{\mathbb{D}}(\omega)\) can be written as
\[\tilde{\mathbb{D}}(\omega)=\begin{pmatrix}\mathbb{D}_{02,02}(\omega)& \mathbb{D}_{02,12}(\omega)\\ \mathbb{D}_{12,02}(\omega)&\mathbb{D}_{12,12}(\omega).\end{pmatrix} \tag{51}\]
Explicitly, the non-zero elements of \(\tilde{\mathbb{D}}(\omega)\) are
\[\tilde{\mathbb{D}}_{1,4}(\omega)=\frac{18}{35}\pi(7-2i\omega), \quad\tilde{\mathbb{D}}_{1,6}(\omega)=-\frac{9}{35}\pi(8\omega+3i),\quad\tilde{ \mathbb{D}}_{1,10}(\omega)=\frac{72}{35}(\pi+i\pi\omega),\quad\tilde{\mathbb{D }}_{1,12}(\omega)=\frac{18}{35}\pi(8\omega-i),\] \[\tilde{\mathbb{D}}_{2,1}(\omega)=\frac{216\pi}{35},\quad\tilde{ \mathbb{D}}_{2,3}(\omega)=-\frac{18}{35}\pi(4\omega-i),\quad\tilde{\mathbb{D }}_{2,4}(\omega)=-\frac{288}{35}i\pi\omega,\quad\tilde{\mathbb{D}}_{2,6}( \omega)=-\frac{288}{35}\quad\tilde{\mathbb{D}}_{2,7}(\omega)=\frac{108}{35},\] \[\tilde{\mathbb{D}}_{2,9}(\omega)=\frac{144\pi\omega}{35},\] \[\tilde{\mathbb{D}}_{3,1}(\omega)=\frac{288i\pi\omega}{35},\quad \tilde{\mathbb{D}}_{3,3}(\omega)=\frac{288\pi\omega}{35},,\quad\tilde{ \mathbb{D}}_{3,4}(\omega)=\frac{36\pi}{35},\quad\tilde{\mathbb{D}}_{3,6}(\omega)=- \frac{9i\pi}{7},\quad\tilde{\mathbb{D}}_{3,10}(\omega)=\frac{18\pi}{35},\]
\[\tilde{\mathbb{D}}_{3,12}(\omega) =-\frac{18i\pi}{35},\] \[\tilde{\mathbb{D}}_{4,1}(\omega) =\frac{9}{35}\pi\omega(8\omega+i),\quad\tilde{\mathbb{D}}_{4,3}( \omega)=\frac{18\pi\omega}{35},\quad\tilde{\mathbb{D}}_{4,4}(\omega)=-\frac{9} {560}\pi\left(256\omega^{2}-5i\omega-8\right),\quad\tilde{\mathbb{D}}_{4,6}( \omega)=-\frac{27\pi\omega}{140},\] \[\tilde{\mathbb{D}}_{4,7}(\omega) =-\frac{18}{35}\pi\omega(8\omega-i),\quad\tilde{\mathbb{D}}_{4,10 }(\omega)=\frac{9}{280}(\pi+8i\pi\omega),\quad\tilde{\mathbb{D}}_{4,12}(\omega )=\frac{9\pi\omega}{70},\] \[\tilde{\mathbb{D}}_{5,2}(\omega) =-\frac{9}{280}\pi\left(152\omega^{2}-54i\omega+19\right),\quad \tilde{\mathbb{D}}_{5,5}(\omega)=\frac{18}{35}\pi\omega(4\omega+i),\quad\tilde {\mathbb{D}}_{5,8}(\omega)=\frac{9}{70}\pi\left(8\omega^{2}+6i\omega-1\right),\] \[\tilde{\mathbb{D}}_{5,11}(\omega) =-\frac{144\pi\omega^{2}}{35},\] \[\tilde{\mathbb{D}}_{6,2}(\omega) =-\frac{72\pi\omega^{2}}{35},\quad\tilde{\mathbb{D}}_{6,5}( \omega)=\frac{18}{35}\pi\left(16\omega^{2}-1\right),\quad\tilde{\mathbb{D}}_{6,8}(\omega)=\frac{36}{35}\pi\omega(4\omega-i),\quad\tilde{\mathbb{D}}_{6,11}( \omega)=-\frac{9\pi}{70},\] \[\tilde{\mathbb{D}}_{7,4}(\omega) =\frac{9}{35}\pi(9+8i\omega),\quad\tilde{\mathbb{D}}_{7,6}( \omega)=\frac{36}{35}\pi(4\omega-i),\quad\tilde{\mathbb{D}}_{7,10}(\omega)= \frac{9}{140}\pi(31-4i\omega),\] \[\tilde{\mathbb{D}}_{7,12}(\omega) =-\frac{9}{140}\pi(8\omega+9i),\] \[\tilde{\mathbb{D}}_{8,1}(\omega) =\frac{108\pi}{35},\quad\tilde{\mathbb{D}}_{8,3}(\omega)=\frac{18 }{35}\pi(8\omega-i),\quad\tilde{\mathbb{D}}_{8,7}(\omega)=\frac{108\pi}{35}, \quad\tilde{\mathbb{D}}_{8,9}(\omega)=-\frac{18\pi\omega}{35},\quad\tilde{ \mathbb{D}}_{8,10}(\omega)=-\frac{144}{35}i\pi\omega,\] \[\tilde{\mathbb{D}}_{8,12}(\omega) =-\frac{144\pi\omega}{35},\] \[\tilde{\mathbb{D}}_{9,4}(\omega) =\frac{18\pi}{35},\quad\tilde{\mathbb{D}}_{9,6}(\omega)=-\frac{18 i\pi}{35},\quad\tilde{\mathbb{D}}_{9,7}(\omega)=\frac{144i\pi\omega}{35}, \quad\tilde{\mathbb{D}}_{9,9}(\omega)=\frac{144\pi\omega}{35},\quad\tilde{ \mathbb{D}}_{9,10}(\omega)=\frac{18\pi}{35},\] \[\tilde{\mathbb{D}}_{9,12}(\omega) =-\frac{81i\pi}{140},\] \[\tilde{\mathbb{D}}_{10,1}(\omega) =-\frac{36}{35}\pi\omega(4\omega-i),\quad\tilde{\mathbb{D}}_{10,4 }(\omega)=\frac{9\pi(21+64i\omega)}{2240},\quad\tilde{\mathbb{D}}_{10,6}( \omega)=\frac{9}{560}\pi(8\omega-3i),\] \[\tilde{\mathbb{D}}_{10,7}(\omega) =\frac{9}{140}\pi\omega(8\omega+7i),\tilde{\mathbb{D}}_{10,9}( \omega)=\frac{9\pi\omega}{70},\quad\tilde{\mathbb{D}}_{10,10}(\omega)=-\frac{9 \pi\left(1024\omega^{2}-76i\omega-25\right)}{4480},\] \[\tilde{\mathbb{D}}_{10,12}(\omega) =-\frac{9\pi(4\omega+i)}{1120},\] \[\tilde{\mathbb{D}}_{11,2}(\omega) =\frac{9}{70}\pi\left(8\omega^{2}+5i\omega-1\right),\quad\tilde{ \mathbb{D}}_{11,5}(\omega)=-\frac{18}{35}\pi\omega(8\omega-i),\] \[\tilde{\mathbb{D}}_{11,8}(\omega) =-\frac{9}{560}\pi\left(200\omega^{2}-70i\omega+9\right),\quad \tilde{\mathbb{D}}_{11,11}(\omega)=\frac{9}{35}\pi\omega(2\omega+i),\] \[\tilde{\mathbb{D}}_{12,2}(\omega) =\frac{18}{35}\pi\omega(8\omega-3i),\quad\tilde{\mathbb{D}}_{12,5 }(\omega)=-\frac{9\pi}{70},\quad\tilde{\mathbb{D}}_{12,8}(\omega)=-\frac{9}{70 }\pi\omega(4\omega+5i),\quad\tilde{\mathbb{D}}_{12,11}(\omega)=\frac{9}{70} \pi\left(32\omega^{2}-1\right).\]
By reading the coefficient of different terms, we can read \(\tilde{\mathbb{D}}_{0}\), \(\tilde{\mathbb{D}}_{1}\) and \(\tilde{\mathbb{D}}_{2}\), and we find
\[\tilde{\mathbb{D}}_{0}=\begin{pmatrix}0&0&0&\frac{18\pi}{5}&0&- \frac{27i\pi}{35}&0&0&0&\frac{72\pi}{35}&0&-\frac{18i\pi}{35}\\ \frac{216\pi}{35}&0&\frac{18i\pi}{35}&0&0&\frac{108\pi}{35}&0&0&0&0&0\\ 0&0&0&\frac{36\pi}{35}&0&-\frac{9i\pi}{7}&0&0&0&\frac{18\pi}{35}&0&-\frac{18i\pi} {35}\\ 0&0&0&\frac{98}{70}&0&0&0&0&0&\frac{98}{280}&0&0\\ 0&-\frac{171\pi}{280}&0&0&0&0&-\frac{9\pi}{70}&0&0&0\\ 0&0&0&0&-\frac{18\pi}{35}&0&0&0&0&-\frac{9\pi}{70}&0\\ 0&0&0&\frac{81\pi}{35}&0&-\frac{36i\pi}{35}&0&0&0&\frac{279\pi}{140}&0&-\frac{81 i\pi}{140}\\ \frac{108\pi}{35}&0&-\frac{18i\pi}{35}&0&0&0&\frac{108\pi}{35}&0&0&0&0\\ 0&0&0&\frac{18\pi}{35}&0&-\frac{18i\pi}{35}&0&0&0&0&0&0\\ 0&0&0&\frac{27\pi}{320}&0&-\frac{251\pi}{560}&0&0&\frac{45\pi}{806}&0&-\frac{34 i0}{1120}\\ 0&-\frac{9\pi}{70}&0&0&0&0&-\frac{81\pi}{560}&0&0&0&0\\ 0&0&0&0&-\frac{9\pi}{70}&0&0&0&0&0&-\frac{9\pi}{70}&0\end{pmatrix},\]
From this example, we see that \(\tilde{\mathbb{D}}_{0}\), \(\tilde{\mathbb{D}}_{1}\) and \(\tilde{\mathbb{D}}_{2}\) are sparse, singular and non-symmetric. With these matrices in hand, one can now straightforwardly calculate the generalized eigenvalues of Eq. (49), a subset of which will represent the QNMs of a Schwarzschild BH.
## V Extraction of the quasi-normal frequencies
In this section, we present our numerical analysis of the solutions to Eq. (49) for the QNM frequencies of a Schwarzschild BH. We begin with a description of the numerical set-up, followed by the distribution of eigenvalues and the presentation of a method to identify the modes obtained.
### Numerical set up
To simplify our discussion, hereon we assume \(\mathcal{N}_{z}=\mathcal{N}_{\chi}=N\) and denote the eigenvalues computed using \(N\times N\) spectral functions by \(\lambda(N)\). Therefore, \(\tilde{\mathbb{D}}(\omega)\) becomes a \(6(N+1)^{2}\) square matrix and \(M_{0}\) and \(M_{1}\) are \(12(N+1)^{2}\) square matrices. For a given \((m,\rho_{\infty}^{(i)},\rho_{H}^{(i)})\), we solve Eq. (49) for its generalized eigenvalues (from now just "eigenvalues") using the function Eigenvalues of Mathematica with double precision; this algorithm is sufficient for our purposes because the background spacetime is spherically symmetric and the modulus of the coefficients (\(\mathcal{K}_{i,\gamma,\delta,\sigma,\alpha,\beta,j}\) of Eq. (36)) are roughly of the same order of magnitude. We have checked that our results are not significantly affected by increasing the working precision in Mathematica beyond double. Since Schwarzschild BHs are stable, the imaginary part of their QNM frequencies is negative, so we only study the eigenvalues of the negative imaginary part and the positive real part.
Since we are working in spherical symmetry, the QNM frequencies should be independent of the \(m\) index of spherical harmonics. For concreteness, we hereafter set \(m=2\) (except in Sec. VI.1, in which we check whether our results are truly independent of \(m\)), with the understanding that any odd \(m\) QNM is not excited in spherical symmetry and the even \(m\) modes all have the same QNM frequencies (e.g. \(\omega_{040}=\omega_{042}=\omega_{044}\)).
### Possible sources of inaccuracies
Although the error in approximating a continuous function by a spectral function decreases with \(N\), one should not expect that the accuracy of the QNM frequencies computed using the spectral basis will always increase with \(N\). We have identified three possible sources of inaccuracies, which we list below:
1. _Asymptotic Nature_. As mentioned earlier, Eq. (32) is an asymptotic expansion with an asymptotic basis constructed from spectral functions. Typically, asymptotic expansions diverge if a large number of terms are included in the expansion [119]. Thus, the
accuracy of the QNM frequencies estimated using Eq. (33) cannot be improved indefinitely as \(N\) is increased.
2. _Numerical Precision_. Any numerical calculation is always an approximation to the exact answer that is limited by the precision with which we perform the calculation. Within a given precision, the accuracy of the eigenvalues computed using a spectral method can deteriorate with unsuitably many spectral functions included. Nonetheless, as mentioned before, we have checked that the results of our calculations are not affected by precision error (i.e. there are other sources of inaccuracies that dominate).
3. _Transformation Inaccuracies_. This is the error induced by transforming the quadratic eigenvalue problem (Eq. (47)) into a generalized eigenvalue problem (Eq. (49)). In fact, given a quadratic eigenvalue problem, there exist infinite transformations that cast the problem into a generalized eigenvalue problem. Each transformation has its own numerical sensitivity and stability issues [129, 130]. The specific transformation used in this work is chosen following [100, 102], where it was found to be accurate for computing BH QNM frequencies. But to improve the numerical condition of the matrices we work with, through this work, we scale \(\bar{\mathbb{D}}_{0}\) and \(\bar{\mathbb{D}}_{2}\) such that their 2-norm is one, as proposed and used in [131, 132], before calculating the generalized eigenvalues. We refer the reader to Appendix. D for the details of the scaling.
With all these three types of possible errors taken into account, one should expect the estimated QNM frequencies to be the most accurate at an _optimal_\(N\), with the accuracy deteriorating as \(N\) is increased further. In the subsequent sections, we will show that this deterioration of accuracy indeed emerges in our calculations, but, through the scheme we prescribe below, we can still accurately extract the QNM frequencies with a surprisingly high relative fractional precision.
### Distribution of the generalized eigenvalues
Let us now solve the \(\{tr,t\chi,t\phi,rr,r\chi,r\phi\}\) linearized Einstein equations and show how the eigenvalues emerge as we increase \(N\). Figure 2 shows the distribution of the eigenvalues in the complex plane from \(N=4\) to \(N=25\) in 4 panels. In general, the modulus of the eigenvalues ranges from \(\sim 0\) to \(10^{9}\). For QNM studies, we focus on eigenvalues in the range \(0.2\lesssim\mathrm{Re}\lambda\lesssim 0.6\) and \(-1\lesssim\mathrm{Im}\lambda\lesssim 0\), which is also the range of the complex plane covered by Fig. 2.
Figure 2 allows us to make several observations. As we begin to increase \(N\) starting at \(N=4\), groups of eigenvalues begin to cluster around certain areas in the complex plane. As \(N\) is increased further to \(\sim 20\), these clusters shrink to tiny areas, indicating that the eigenvalues are beginning to approach to certain values. Each tiny clustering area contains several slightly different eigenvalues, with relative differences in the real and imaginary parts of \(\sim 10^{-7}\). The distances between these slightly different eigenvalues are much smaller than the typical distances between the clustering areas. Once the eigenvalues begin to cluster inside some small areas, any surplus eigenvalue begins to disappear as \(N\) is increased, indicating that these surplus eigenvalues have no physical meaning.
As we further increase \(N\) above \(\sim 20\), surplus eigenvalues emerge again, indicating that the aforementioned sources of inaccuracies begin to affect the calculations. There is therefore an optimal \(N\) at which the eigenvalues have gotten as close as possible to the exact answer. These optimal eigenvalues coincide almost exactly with the Schwarzschild QNM frequencies computed by solving the Teukolsky equation, which we marked with crosses in Fig. 2. We will discuss later, in Sec. VI, what the relative fractional accuracy of the QNM frequencies computed with the spectral method is relative to other numerical solutions.
The above observations suggest a method for the identification of the QNM frequencies. In essence, the QNM frequencies can be identified by searching for repeatedly emerging eigenvalues of the matrix equation before the accuracy deteriorates. In the next section, we will explain this method in more detail and explain how it can be used to accurately identify different QNMs.
### Mode search
As shown in the previous subsection, not all eigenvalues represent actual QNM frequencies. For a Schwarzschild or Kerr background, we could determine which eigenvalues are correct by comparing them to known solutions found through other methods, such as Leaver's method [99]. In modified gravity, theories, however, such other solutions may not be known, and thus, it would be ideal to find a self-contained method to identify which eigenvalues correspond to physical QNM frequencies. In essence, this method must answer the following question: what complex number is a given cluster of eigenvalues approaching and does it correspond to an actual QNM frequency?
The answer to this question can be deduced from Fig. 2, which suggests that QNM frequencies can be identified by studying the cluster of eigenvalues that appear repeatedly in a small area in the complex plane for various choices of \(N\). More explicitly, we propose the following _search method_:
1. Since not every eigenvalue is physical, keep only the eigenvalues in a region in the complex plane where QNM frequencies are expected to reside. In this work, we keep eigenvalues whose real part is \(0.2\leq\mathrm{Re}\lambda\leq 0.6\) and imaginary part \(-1\leq\mathrm{Im}\leq 0\). In general, this region can be adjusted based on the BH spacetime that needs to be studied.
2. Compute the distance of the \(i\)-th eigenvalue obtained using \(N\times N\) spectral functions, \(\lambda_{i}(N)\), and the \(j\)-th eigenvalue using \((N+1)\times(N+1)\) functions, \(\lambda_{j}(N+1)\). If \(\lambda_{i}(N)\) and \(\lambda_{j}(N+1)\) are approaching a QNM frequency, their distance in the complex plane should be small. Thus, store all eigenvalues that satisfy \[|\lambda_{i}(N)-\lambda_{j}(N+1)|\leq\text{threshold},\] (52) where the threshold is a small number, which we choose here to be \(10^{-3}\). This number corresponds to an error much smaller than the current relative uncertainty in the QNM frequency measurement of the detected ringdown signals [7, 77, 78, 79, 10, 11, 84, 7, 10].
3. As pointed out in Sec. V.3, the stored eigenvalues may be slightly different from each other, and yet approach the same QNM frequency, because the separation between them in the complex plane is much smaller than the separation between different \(nlm\) QNM frequencies. We thus select the average of these slightly different eigenvalues as the QNM frequency of mode \(q=nlm\) and denote it \(\omega_{q}(N)\).
4. Finally, just before the accuracy deteriorates, the difference of a mode-frequency between successive basis numbers, \(|\omega_{q}(N+1)-\omega_{q}(N)|\), should reach its minimum. Thus, we select the optimally6-truncated QNM frequencies as Footnote 6: The optimal \(N\) discussed here concerns the calculations of \(\omega\), not the asymptotic expansion of the metric perturbations. \[\begin{split}\omega_{q}^{\text{opt}}&=\omega_{q}(N_ {\text{opt}})\,,\\ N_{\text{opt}}&=\arg\min_{N}|\omega_{q}(N+1)- \omega_{q}(N)|\,,\end{split}\] (53) where we note that \(N_{\text{opt}}\) depends on the mode \(q\).
Figure 2: The distribution of the eigenvalues in the complex plane for \(4\leq N\leq 25\), where \(N\) is the number of spectral functions used in the spectral decomposition (see Sec. IV.2). Observe that the eigenvalues of the system of equations start to group together at certain points in the complex plane as N is increased. For comparison, we have also shown the corresponding QNM frequency calculated with Leaver’s method [99], using black crosses. The labels near the crosses follow a \((n,l,m)\) notation, where \(n\) is the principal mode number, \(l\) is the azimuthal mode number and \(m\) is the magnetic mode number. Since the QNM frequencies of the Schwarzschild BH do not depend on \(m\), we have left this quantity unspecified in the labels. An animated version of these plots is available in the Supplemental Material.
Let us give an example of this search method in action by focusing on the \(q=02\) mode. For any given \(N>4\), we find various eigenvalues clustered around \(M\omega_{q}\sim 0.37-0.1i\). For example, at \(N=4\) we find a cluster with the following eigenvalues
\[\begin{split}\lambda(4)=\{& 0.3737202242-0.0886296139i,\\ & 0.3729295055-0.0899641035i\},\end{split} \tag{54}\]
whose average is
\[\omega_{02}^{N=4}=0.37332486485-0.0892968587i. \tag{55}\]
Similarly, at \(N=5\) we find a cluster with the eigenvalues
\[\begin{split}\lambda(5)=\{& 0.3740968407-0.0888069404i,\\ & 0.3737492335-0.088943824i\}\end{split} \tag{56}\]
whose average is
\[\omega_{02}^{N=5}=0.3739230371-0.0888753822i. \tag{57}\]
As we increase \(N\), we find that the difference between the values of \(\omega_{02}\) for adjacent values of \(N\) first decreases, until \(N\sim 20\), after which point the difference between adjacent averaged eigenvalues begins to increase. More concretely, we find that
\[\begin{split}\ldots,\\ |\omega_{02}^{21}-\omega_{02}^{20}|=& 3.19\times 10^{-8},\\ |\omega_{02}^{22}-\omega_{02}^{21}|=& 9.04\times 10^{-9},\\ |\omega_{02}^{23}-\omega_{02}^{22}|=& 1.67\times 10^{-8},\\ |\omega_{02}^{24}-\omega_{02}^{23}|=& 3.47\times 10^{-8},\\ \ldots.\end{split} \tag{58}\]
From this sequence, we see that the optimal truncation is at \(N=21\), and the optimal eigenvalue is
\[\omega_{02}^{\rm opt}=0.3736716790-0.0889623151i, \tag{59}\]
which demonstrates concretely how our search method works.
### Mode identification
Once the QNM frequencies have been found through the search method of the previous subsection, we must now figure out which \(nlm\) mode has been found. Again, for QNMs of a Schwarzschild or Kerr BH, this identification is easy, since we can compute the QNM frequencies through other robust methods. In modified gravity, however, such methods are typically not available, so one must create a robust procedure that answers the following question: which QNMs (i.e. which \(nlm\)?) do the optimally truncated frequencies correspond to?
Before we can establish an identification procedure, we need to first understand some general properties of the QNMs we are studying. To determine \(n\) and \(l\), we notice the following. For a fixed \(n\), the real part of the QNM frequencies is much more sensitive to \(l\) than the imaginary part. Similarly, for a fixed \(l\), the imaginary part of the QNM frequencies is much more sensitive to \(n\) than the real part [133]. Although these trends hold strictly in GR, we expect them to also hold in effective-field-theory-like modified theories in which BH solutions can be treated as small deformations of Schwarzschild and Kerr BHs with a continuous GR limit [68, 69, 63, 64, 65, 66, 67, 68, 69, 70].
We can understand this dependence from the eikonal approximation [134, 135, 136, 137, 137] (valid when \(l\gg 1\)) and the geodesic analogy. In this approximation, the real part of the QNM frequency is roughly proportional to \(l\Omega_{ph}\), where \(\Omega_{ph}\) is the orbital frequency of the photon ring around the BH. Similarly, the imaginary part of the QNM frequency is roughly proportional to the Lyapunov exponent of photon ring, which does not sensitively depend on \(l\)[117].
With this understanding, let us now answer the question above by proposing the following _identification procedure_:
1. We divide the optimally truncated frequencies into groups of similar imaginary parts.
2. The group with the least negative imaginary parts takes \(n=0\), and the group with the second least negative imaginary parts takes \(n=1\). We repeat this assignment of \(n\) until we exhaust all the groups.
3. In a given group, the frequency with the smallest real part takes \(l=2\), and the frequency with the second-smallest real part takes \(l=3\). We repeat this assignment of \(l\) until we exhaust all the frequencies in the same group.
Let us provide a concrete example of this procedure. When \(N=24\), we have the following frequencies
\[\begin{split}\omega^{\rm opt}=\{& 0.3736716813-0.0889623387i,\\ & 0.5994432887-0.0927030486i,\\ & 0.3467101908-0.2739044520i,\\ & 0.5826436957-0.2812978402i,\\ & 0.3010607141-0.4783191864i,\\ & 0.5517068087-0.4790929296i\}.\end{split} \tag{60}\]
We immediately see that this list of optimally truncated frequencies can be divided into 3 groups of similar imaginary parts, namely, the first group consisting of the first and second frequencies, the second group of the third and fourth and the third of the fifth and sixth. Since the first group has the least negative imaginary parts, it takes \(n=0\), corresponding to the fundamental modes. Amongst the first group, the frequency with the smallest real part takes the smallest azimuthal mode number, i.e. \(l=2\), hence
\[\omega_{02}=0.3736716813-0.0889623387i, \tag{61}\]
and the frequency with a larger real part takes the next azimuthal mode number, i.e. \(l=3\),
\[\omega_{03}=0.5994432887-0.0927030486i. \tag{62}\]
Then, we move on to the second group with more negative imaginary parts, which takes the next principal mode number, i.e. \(n=1\), and the last group takes \(n=2\). The azimuthal number of the frequencies in these groups can be labelled as we did for the first group. Explicitly, the frequencies of the second and third groups are labelled as
\[\begin{split}\omega_{12}&=0.3467101908-0.2739044520i,\\ \omega_{13}&=0.5826436957-0.2812978402i,\\ \omega_{22}&=0.3010607141-0.4783191864i,\\ \omega_{23}&=0.5517068087-0.4790929296i.\end{split} \tag{63}\]
Following this procedure, we can confidently identify 6 QNMs (\(q=\{02,03,12,13,22,23\}\)), which is a smaller number than what was shown in Fig. 2. The reason that we cannot confidently identify the remaining modes (although they seem to clearly correspond to \(q=\{32,33,43,42\}\)) is that the absolute difference in any one of these clusters of eigenvalues is not yet smaller than the threshold defined in Eq. (52). If we had gone to higher \(N\), then this difference would continue to decrease and we would have been able to confidently make the remaining identifications.
### Accuracy quantification
Let us now assess the accuracy of the QNMs we have just calculated. To do so, let us define the following 4 accuracy measures:
\[\delta^{\text{Re/Im}}=\begin{cases}&\frac{\max\left(|\omega(N_{\text{opt}}+ 1)-\omega(N_{\text{opt}})|,|\omega(N_{\text{opt}})-\omega(N_{\text{opt}}-1)| \right)}{|\omega^{\text{Re/Im}}(N_{\text{opt}})|},\qquad\text{if }N_{\text{opt}}<N_{\text{ max}}\\ &\\ &\frac{|\omega(N_{\text{opt}})-\omega(N_{\text{opt}}-1)|}{|\omega^{\text{ Re/Im}}(N_{\text{opt}})|},\qquad\text{if }N_{\text{opt}}=N_{\text{ max}}\.\end{cases} \tag{67}\]
This quantity gauges how the accuracy of the spectral method is limited by the possible sources of inaccuracies mentioned in Sec. V.2. This measure will be useful to estimate the performance of the spectral method when applied to different systems of equations, as we do in Sec. VI.
To compute the above measures, we solve the Teukolsky equation in the zero-spin limit using Leaver's method of continued fractions [99] to find \(\omega(\text{L})\). Specifically, \(\omega(\text{L})\) is computed using Leaver's method with 1000 terms in the continued fractions. We find that \(\sim 200\) terms are already enough to converge to 14 digits of accuracy for the fundamental mode frequencies. Using 1000 terms, the first 16 digits of the real and imaginary parts of the QNM frequencies also converge for all modes studied here. For the convenience of reader, we list the QNM frequencies obtained through this method below:
\[\begin{split}\omega_{02}(\text{L})&=0.37367168441804 -0.08896231568894\,i,\\ \omega_{03}(\text{L})&=0.59944328843749-0.0927030479449 5\,i,\\ \omega_{12}(\text{L})&=0.34671099687916-0.27391487529 123\,i,\\ \omega_{13}(\text{L})&=0.58264380303330-0.2812981134 3504\,i,\\ \omega_{22}(\text{L})&=0.30105345461237-0.47827698322307\,i,\\ \omega_{23}(\text{L})&=0.55168490077845-0.47909275096696\,i. \end{split} \tag{68}\]
We note that the above frequencies are identical to the frequencies published in [138], except for differences in rounding off of the last digits.
The first three measures defined above are presented in Fig. 3 as a function of \(N\). The top left, right and bottom panels respectively show the base-10 logarithms of \(\mathcal{D}(N)\), \(\mathcal{E}(N)\), \(\Delta^{\text{Re}}\) (bottom left) and \(\Delta^{\text{Im}}\) (bottom
right) of the QNM frequencies as a function of \(N\). In general, all three measures first decrease as \(N\) increases from \(N=10\) to a QNM-dependent \(N\). This indicates that our QNM frequency calculations become increasingly accurate as \(N\) increases. Beyond the QNM-dependent \(N\), all three measures begin to increase, indicating the emergence of effects due to possible sources of numerical inaccuracies, consistent with our observations of Fig. 2. The optimal truncation order, \(N_{\rm opt}\), minimizes \(\mathcal{D}(N)\) and also approximately minimizes \(\mathcal{E}(N)\) and \(\delta^{\rm Re/Im}\), as we show with a circle in the figure. Observe that the relative fractional error of the optimal truncation is very small for all 6 QNM frequencies computed. Observe also that the higher the mode number, the fewer the errors we can present and the less accurate the QNM frequencies are. This is because the higher the mode number, the more the number of basis terms that are required for the eigenvalues to be within the threshold tolerance we selected.
## VI Robustness of quasinormal frequency extraction
In this section, we study the robustness of the calculations presented in the previous section. In particular, we first focus on the \(m\) independence of the QNM frequencies, which ought to hold for perturbations of a Schwarzschild background. We then study the effects of our choice of boundary conditions for the \(\rho\) function on the QNM calculation. Finally, we consider the use of other combinations of linearized Einstein equations.
### \(m\) independence of the quasinormal-spectrum
One important feature of gravitational perturbations of spherically-symmetric BHs is the independence of the QNM spectra on \(m\). Our matrix equations, however, explicitly depend on \(m\) because we have not decoupled
Figure 3: The top, left panel shows the absolute difference between the QNM frequencies computed with adjacent \(N\)s, \(\mathcal{D}(N)=|\omega(N+1)-\omega(N)|\), with the threshold \(10^{-4}\) denoted by the horizontal solid black line. The \(N\) that minimizes \(\mathcal{D}(N)\), \(N_{\rm opt}\) corresponds to the optimal truncation order and selects the optimal approximation \(\omega(N_{\rm opt})\) to the QNM frequency (circled symbol). To gauge the accuracy of the spectral method, we compare the QNM frequencies computed using this spectral method at various \(N\) (\(\omega\)(spectral)) to those computed through Leaver’s method [99] (\(\omega\)(L)). The top, right panel shows the absolute error \(\mathcal{E}(N)=|\omega(\rm spectral)-\omega(L)|\) as a function of \(N\), while the bottom panels show the relative fractional error in the real (\(\Delta^{\rm Re}=\left|1-\omega^{\rm Re}(\rm spectral)/\omega^{\rm Re}(L)\right|\)) and imaginary (\(\Delta^{\rm Im}=\left|1-\omega^{\rm Im}(\rm spectral)/\omega^{\rm Im}(L)\right|\) right panel) parts. Observe that the QNM frequencies calculated with the spectral method are highly accurate for the fundamental mode and its overtones.
the linearized Einstein equations to find master equations. Therefore, validating the \(m\) independence of our QNM frequency calculations constitutes a non-trivial test of the robustness of our spectral method.
Before comparing the QNMs computed by setting \(m\) to different values, let us comment on the structure of the linearized Einstein equations when \(m=0\). We have derived the linearized Einstein equations for general \(m\), so when we take the \(m=0\) limit, we find that each linearized EFE can be factorized with an additional term that is a power of \((1-\chi^{2})\). Following Sec. II, it is usually desirable to divide such pre-factors out (since they are never zero for a BH) to simplify the equations and potentially improve the accuracy and stability of the numerical calculations. Doing so then yields a somewhat simpler \(\hat{\mathbb{D}}(\omega)\) matrix, whose generalized eigenvalues contain the QNM frequencies of a Schwarzschild BH.
With that in hand, let us now compute the QNM frequencies by solving the linearized Einstein equations setting \(m=0\) and \(m=1\) and compare them to the results we obtained above when we set \(m=2\). We find that these two sets of QNM frequencies are very close to each other. Figure 4 shows the relative fractional difference between the real (left) and imaginary (right) parts of the \(m=2\) frequencies and the \(m=0\) (blue inverse triangles) and \(m=1\) frequencies (red triangles) for different \((n,l)\). Observe that this relative fractional difference ranges from \(10^{-10}\) to \(10^{-4}\). Comparing the relative differences with the numerical uncertainty of the \(m=2\) frequencies (green squares), we see that the relative fractional differences are smaller or approximately equal to the numerical uncertainty, which suggests that the differences between the \(m=0\) or the \(m=1\) frequencies and the \(m=2\) frequencies are due to numerical uncertainty. Thus, effectively, the spectral method obtains the same QNM frequency for a given \(n\) and \(l\) regardless of the value of \(m\) we choose in our calculations.
### Effects of \(\rho_{H}\) and \(\rho_{\infty}\)
The asymptotic behavior of the metric perturbation functions obtained in Sec. III.2 depends on the component of metric perturbations. We find that the extracted QNMs are not affected if we assume \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\) to be a same number for all \(i\), provided that the assumed \(\rho_{H}^{(i)}\geq\underset{1\leq i\leq 6}{\max}\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\geq\underset{1\leq i\leq 6}{\max}\rho_{\infty}^{(i)}\). To illustrate this property, Fig. 5 shows \(\delta^{\text{Re/Im}}\) of the 6 previously identified QNMs, obtained by numerically solving the linearized Einstein equations, using \(25\times 25\) spectral functions and assuming that for all \(i\)
* \(\rho_{H}^{(i)}=\rho_{\infty}^{(i)}=1\) (inverted blue triangles),
* \(\rho_{H}^{(i)}=1,\rho_{\infty}^{(i)}=2\) (red triangles),
* \(\rho_{H}^{(i)}=2,\rho_{\infty}^{(i)}=1\) (green squares), and
* \(\rho_{H}^{(i)}=2,\rho_{\infty}^{(i)}=2\) (black circles).
Figure 5 shows that if we assume a \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\) that is larger than the exponents obtained by our asymptotic analysis in Sec. III, we can still accurately extract the QNMs of the Schwarzschild BH. As the figure shows the minimal \(\delta^{\text{Re/Im}}\) of different QNM frequencies depends on \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\), but we leave further analysis of this relation to future work.
The extremely mild dependence of the QNM frequencies on \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\) is actually reasonable and can be understood as follows. Let us focus first on the \(\rho_{H}^{(i)}=\rho_{\infty}^{(i)}=1\) case. Even if we assume these boundary conditions, the
Figure 4: Base-10 logarithm of the relative fractional differences between the real (left) and imaginary parts (right) of the QNM frequencies when setting \(m=0\) and \(m=2\), both computed using our spectral method of at most \(25\times 25\) spectral functions. The relative fractional difference for different QNMs is between \(10^{-10}\) and \(10^{-4}\), which is smaller, or at worst approximately equal, to the numerical uncertainty of the \(m=2\) frequencies (green squares). Thus, effectively, the QNM frequencies computed by setting \(m\) to different values in our spectral method are the same. Such \(m\)-independence of our results is a non-trivial verification of the correctness and robustness of the spectral method.
boundary conditions obtained in Sec. III for all \(h_{i}\) are still satisfied, except when \(i=4\). When \(i=4\), Sec. III.2 tells us that the "correct" ansatz for \(y_{4}\) is
\[y_{4}(r)=e^{i\omega r}r^{i\omega r_{\rm H}}\left(\frac{r-r_{H}}{r}\right)^{-i \omega r_{\rm H}}u_{4}^{\rm(corr)}(r), \tag{69}\]
where \(u_{4}^{\rm(corr)}(r)\) is the finite part of \(y_{4}\) that we must calculate numerically. If we assume \(\rho_{H}^{(i)}=\rho_{\infty}^{(i)}=1\) instead, we are actually imposing the ansatz
\[y_{4}(r)=e^{i\omega r}r^{i\omega r_{\rm H}+1}\left(\frac{r-r_{H}}{r}\right)^{ -i\omega r_{\rm H}-1}u_{4}^{\rm(asum)}(r), \tag{70}\]
where \(u_{4}^{\rm(asum)}(r)\) is now the finite part of \(y_{4}\). If these two ansatzes are to agree, we must have that
\[u_{4}^{\rm(asum)}(r)=\frac{r-r_{\rm H}}{r}\frac{1}{r}u_{4}^{\rm(corr)}(r). \tag{71}\]
Hence, \(r\in[r_{\rm H},\infty)\), \(u_{4}^{\rm(asum)}(r)\) is bounded if \(u_{4}^{\rm(corr)}(r)\) is also bounded because \((r-r_{H})/r^{2}\) is finite. Thus, the spectral decomposition can still be applied regardless of our assumptions on the boundary conditions for \(\rho\). This argument also applies for an even larger \(\rho_{H}\) and \(\rho_{\infty}\).
This independence of our calculations on the choice of \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\) has three advantages. First, it can simplify the prescriptions of the boundary conditions for numerical computations because we can simply use the same \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\). Second, we can cross-check our results by repeating our calculations for different values of \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\). If the QNM frequencies are properly extracted, the same complex numbers should emerge regardless of the choice of \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\). Third, this property may allow us to bypass the estimate of the asymptotic behavior when studying the boundary conditions. This simplification could be welcomed when dealing with more sophisticated BHs, for which the estimation of the asymptotic behavior of the solution may be much more difficult.
### Other combination of the linearized equations
We have thus focused on the \(\{tr,t\chi,t\phi,rr,r\chi,r\phi\}\) set of linearized Einstein equations, but what if we had chosen a different set? We find that if we select a different set of linearized equations, we can still accurately estimate the QNM frequencies. Figure 6 compares the \(\delta^{\rm Re/Im}\) of the QNM frequencies computed by solving the following systems7:
Footnote 7: This list of linearized Einstein equations is by no means exhaustive. We also calculated various Schwarzschild QNM frequencies by solving other sets, but we found that a larger number of basis functions would then be required to obtain an accurate result.
* \(\{tr,t\chi,rr,r\chi,\chi\chi,\chi\phi\}\) (red triangles),
* \(\{tr,t\chi,t\phi,rr,\chi\chi,\chi\phi\}\) (green circles),
and \(\{tr,t\chi,t\phi,rr,r\chi,\chi\phi\}\) (inverted blue triangles, the system we have been focusing on) solved using at most \(25\times 25\) spectral functions. Observe that the choice of the components of the linearized Einstein equations one works with does not affect our ability to solve for the QNM frequencies. This flexibility allows us to cross-check our results by computing the QNM frequencies using different sets of linearized Einstein equations.
This flexibility is also an interesting result in its own right. Previous calculations of Schwarzschild QNM frequencies relied on solving certain master equations, which are computed by simplifying and eliminating various components of the Einstein tensor [114, 115, 89, 139]. To
Figure 5: Numerical uncertainty of the real (left) and imaginary parts (right) of different QNM frequencies (\(\delta^{\rm Re/Im}\), see Eq. (67)) computed using the spectral method with at most \(25\times 25\) spectral functions and assuming different \(\rho_{H}^{(i)}\) and \(\rho_{\infty}^{(i)}\) boundary conditions. Observe that the accuracy of our series solution, computed with different boundary conditions, is approximately the same, indicating the robustness of the spectral method.
keep the calculations tractable, those derivations naturally make use of the simplest linearized Einstein tensor components. Here, we show that different choices of the components of the linearized Einstein equations that one solves also lead to the accurate computation of Schwarzschild QNM frequencies.
## VII Concluding Remarks
In this paper, we have developed a spectral method to systematically study gravitational perturbations of a non-rotating BH. We first apply spectral decompositions to study the asymptotic behavior of gravitational perturbations at spatial infinity and at the BH event horizon. Using this asymptotic behavior, we then construct an ansatz for the metric perturbations. The ansatz allows us to spectrally decompose the linearized Einstein field equations along both the radial and polar coordinates, thereby transforming the linearized field equations into a linear eigenvalue problem. By solving the matrix equation for the generalized eigenvalues, and through a procedure to identify the QNMs these eigenvalues correspond to, we can calculate the frequency of many QNMs with excellent accuracy. For example, using our numerical scheme, we can simultaneously compute 6 QNM frequencies of the Schwarzschild BH with a relative error always better than (and sometimes much better than) \(\leq 10^{-4}\).
The spectral method contains several advantages over the existing approaches to studying gravitational perturbations of a BH. First, the spectral method can, in principle, be applied to any BH spacetimes irrespective of their classification under the Petrov scheme [140]. Unlike the derivation of the Teukolsky equation, our method does not require the background spacetime to be vacuum (i.e., no matter) and Petrov-type D [94]. This advantage enables us to apply the spectral method to other more complicated and generic BH spacetimes that cannot be easily studied through the Newman-Penrose formalism.
Second, the spectral method does not require simplifications of the linearized field equations into master equations through special master functions. The derivation of the Regee-Wheeler, the Zerilli-Moncrief, or the Teukolsky equation requires the simplification of the perturbed (metric or curvature) equations into several decoupled master equations, obtained through various transformations or redefinitions of perturbation variables. These transformations and redefinitions usually need to be modified for non-Schwarzschild or Kerr BHs, and precisely how to do so can be quite difficult [141; 142]. By applying the spectral method, we have a unified framework to accurately estimate the QNM frequencies without such simplifications or decouplings, bypassing the difficulties of deriving the necessary transformations or redefinition.
Third, the spectral method is computationally straightforward. When computing the QNM frequencies by solving the Teukolsky equation, one also needs to solve for the angular separation constants. The spectral method focuses on calculations of only the QNM frequencies, avoiding the need to compute these separation constants. Moreover, previous work had found that more than 100 spectral functions in the radial and angular coordinates are needed to compute higher-mode frequencies by spectrally decomposing the Teukolsky equation, even for the case of the Schwarzschild BH (\(a=0\)) [109]8. In contrast, the spectral method presented here requires a much smaller set of basis functions (\(\sim 25\)) for the accurate estimation of 6 QNM frequencies. These features aid in making the numerical computations more straight-forward and convenient.
Footnote 8: Though this number can be reduced by using a new-sparse spectral method [143].
Figure 6: Numerical uncertainty in the real (left) and imaginary parts (right) of different QNM frequencies by spectrally decomposing different sets of components of the linearized Einstein equations. In all cases, we use at most \(25\times 25\) spectral functions when computed the QNM frequencies. Observe that the accuracy of the QNM frequencies calculated is approximately independent of the choice of components of the linearized Einstein equations that we choose to solve.
Finally, the spectral method does not involve the calculation of the Weyl scalars, making the studies of gravitational perturbations more direct, and perhaps, more physically intuitive. The Teukolsky equation expresses all gravitational perturbations in terms of curvature perturbations that are encoded in perturbed Weyl scalars. Therefore, if one wishes to find the gravitational metric perturbations using solutions to the Teukolsky equation, one needs to reconstruct the metric from the Weyl scalars through a lengthy procedure [144; 145; 146]. The spectral method we presented here avoids all of these complications because it works directly with metric perturbations.
To fully realize the potential of the spectral method we presented here, we need to further develop it so that it can be applied to more sophisticated BH spacetimes. Our immediate next step is to apply the spectral method to spinning BH backgrounds, and more concretely to the Kerr background metric. When doing so, it may be beneficial to consider other basis functions for the spectral decomposition, instead of the associated Legendre polynomials for the angular sector and the Chebyshev polynomials for the radial sector that we used here. One option would be to use spheroidal harmonics or spin-weighted spherical harmonics for the angular sector, while one could use a rational polynomial basis for the radial sector. We have started this exploration already and have found some encouraging results, but their detailed presentation will be shown elsewhere. Moreover, thus far we have focused on the Regge-Wheeler gauge, which should be applicable to a wide range of modified BHs. But to make the spectral method more generally applicable, we also need to explore different gauges. One could also further investigate how exactly the sources of numerical inaccuracies, mentioned in Sec. V.2, affect the quasinormal frequencies, and how to improve their precision.
Other than rotating BHs, one still needs to explore the application of our spectral method to beyond-GR BHs whose metric is irrational (e.g. [147]) or numerical (e.g. [148; 149; 150]). For irrational BH solutions, a change of variables may rationalize the metric, which allows straightforward applications of our spectral method. Numerical BH solutions are commonly expressed in terms of spectral functions when the solutions are being calculated, and thus, our spectral method directly applies. Alternatively, we can also fit numerical BH solutions using spectral functions or by numerically evaluating their derivatives to derive the linearized field equations. Once the linearized field equation are obtained, even via numerical means, our spectral method still applies. In the future, we plan to explore various modifications to adapt our spectral method to irrational or numerical BHs.
Once the spectral method has been generalized and developed further, it could be applied to a plethora of problems. The most obvious one is perhaps the calculation of QNM frequencies in modified gravity theories, such as in dynamical Chern-Simons gravity [63; 64; 65; 66] or scalar-Gauss-Bonnet gravity [68; 69; 70]. In such theories, and in almost all other theories known to date, QNM frequencies are only known in the slow-rotation limit, a limitation that could be lifted with the spectral method. Another possible application of our spectral method is the study of BH spectral instabilities. Typically, the criterion of spectral instability is characterized by modifications to an effective potential [151; 152; 153; 154]. In the spectral method, however, the notion of the effective potential is not manifest, as the method does not need master equations governing the gravitational perturbations. To apply the spectral method to study spectral instabilities, we would need to reconcile it with the notion of an effective potential.
## Acknowledgement
The authors acknowledge the support from the Simons Foundation through Award No. 896696 and the NSF through award PHY-2207650. The authors would like to acknowledge Emanuele Berti, Mark H.Y. Cheung, Pedro Ferreira, Thomas Helfer, Justin Ripley for insightful discussion, and Vitor Cardoso and Leo Stein for comments on the initial manuscript. A.K.W.C would like to thank Alan Tsz Lok Lam and Lap Ming Lin for useful advice offered at the beginning of this work. The calculations and results reported in this paper were jointly produced using the computational resources of the department of physics at King's College London and the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with NCSA, and is supported by funds from the University of Illinois at Urbana-Champaign.
## Appendix A Symbols
The calculations presented in this paper involved numerous symbols. For convenience of the reader, we provide a list of the symbols and their definitions in this appendix.
* \(A_{i}^{\ell}(r)\) is the asymptotic prefactor of the \(i\)-th perturbation variable, first defined in Eq. (30).
* \(d_{r}\) is the degree of \(r\) of the coefficient of the partial derivative of the linearized Einstein equations, first defined in Eq. (6)
* \(d_{\chi}\) is the degree of \(\chi\) of the coefficient of the partial derivative of the linearized Einstein equations, first defined in Eq. (6).
* \(d_{z}\) is the degree of \(z\) of the coefficient of the partial derivative of the compactified linearized Einstein equations, first defined in Eq. (36).
* \(\mathcal{D}(N)\) is the modulus difference of the optimally truncated quasinormal-mode frequency over successive iterations, first defined in Eq. (64).
* \(\mathbb{D}(\omega)\) is the coefficient matrix of spectral decomposition, from one particular basis to another, first defined in Eq. (39).
* \(\tilde{\mathbb{D}}(\omega)\) is the augmented matrix of the coefficients of spectral decomposition, first defined in Eq. (46).
* \(\delta^{\mathrm{Re/Im}}\) is the numerical uncertainty of the real and imaginary parts of the QNM frequencies computed using the spectral method, first defined in Eq. (67).
* \(\Delta^{\mathrm{Re/Im}}\) is the relative fractional error in the real and imaginary parts of the QNM frequencies computed using the spectral method and the Teukolsky equations, first defined in Eq. (66).
* \(\mathcal{E}(N)\) is the absolute error between the QNM frequencies computed using the spectral method, \(\omega(\mathrm{spectral})\), and Leaver's method to solve for the QNM modes \(\omega(\mathrm{L})\), first defined in Eq. (65).
* \(\mathcal{G}_{i,\gamma,\delta,\sigma,\alpha,\beta,j}\) is the coefficient of \(\omega^{\gamma}r^{\delta}\chi^{\sigma}\partial_{r}^{\alpha}\partial_{\chi}^{ \beta}h_{j}\) of the linearized Einstein equations of \(h_{j}\), first defined in Eqs. (6).
* \(h_{i}(r,\chi)\) is the functions of metric perturbations, first defined in Eqs. (5a) and (5b).
* \(i\) the subscript is the component of the metric perturbation functions and \(i=1,...,6\), first defined in Eqs. (5a) and (5b).
* \(\mathcal{K}_{i,\alpha,\beta,\gamma,\delta,\sigma,j}\) is the coefficient of \(\omega^{\gamma}z^{\delta}\chi^{\sigma}\partial_{z}^{\alpha}\partial_{\chi}^{ \beta}(...)\) of the linearized Einstein equations in \(z\) and \(\chi\), first defined in Eq. (36).
* \(l\) is the azimuthal mode number of the gravitational QNMs, first defined in Eq. (1).
* \(\ell\) is the degree of associate Legendre polynomial used in spectral decomposition, first defined in Eq. (8).
* \(\lambda(N)\) is the generalized eigenvalue of the linear matrix equation Eq. (49) obtained using \(N\) Chebyshev and associated Legendre polynomials, first defined in Eq. (52).
* \(M\) is the BH mass, which is taken to be \(M=1\) throughout this work, first defined in Eq. (1).
* \(\mathbb{M}(r)\) is the coefficient matrix of the system of ordinary differential equations, first defined in Eq. (20).
* \(\mathbb{M}_{k}\) is the coefficient matrix of \(r^{k}\) term of the asymptotic expansion of \(\mathbb{M}(r)\), first defined in Eq. (21).
* \(m\) is the azimuthal number of the metric perturbations, first defined in Eqs. (5a) and (5b).
* \(N\) is the number of the Chebyshev and associated Legendre polynomials used in the full spectral decomposition, first defined in Sec. V.1.
* \(N_{\mathrm{opt}}\) is the optimal truncation order for the frequency computation, first defined in Eq. (53).
* \(\mathcal{N}\) is the normalization factor of spectral decomposition, first defined in Eq. (42).
* \(\mathcal{N}_{\chi}\) is the number of the associated Legendre polynomials included in the spectral decomposition, first defined in Eq. (33).
* \(r_{\mathrm{r}}=2M\) is the radial coordinate of the position of the event horizon of the Schwarzschild BH, first defined below Eq. (2).
* \(r_{\ast}\) is the tortoise coordinate, first defined in Eq. (25).
* \(p_{H}\) is the Poincare rank of \(-\epsilon^{-2}\mathbb{M}(\epsilon)\) at \(r=r_{\mathrm{u}}\), first defined in Eq. (22).
* \(p_{\infty}\) is the Poincare rank of \(\mathbb{M}(r)\) at \(r=\infty\), first defined in Eq. (20).
* \(\mathbb{Q}\) is the coefficient matrix of \(d\mathbf{y}/dr\) of the system of ordinary differential equations, first defined in Eq. (17).
* \(\tilde{\mathbb{Q}}\) is the coefficient matrix of \(d\mathbf{y}/dr\) of the system of ordinary differential equations, after algebraic variables have been removed, first defined in Eq. (19).
* \(\mathbb{R}\) is the coefficient matrix of \(\mathbf{y}\) of the system of ordinary differential equations, first defined in Eq. (17).
* \(\tilde{\mathbb{R}}\) is the coefficient matrix of \(\mathbf{y}\) of the system of ordinary differential equations, after algebraic variables have been removed, first defined in Eq. (19).
* \(\rho_{\infty}^{(i)}\) and \(\rho_{H}^{(i)}\) are the parameters that characterize the boundary conditions of \(h_{i}\) in spatial infinity and at the horizon, first defined in Eq. (26) and (27).
* \(\omega_{q}(\mathrm{L})\) is the frequency of the QNM \(q\) computed using the Leaver method, first defined in Eq. (68).
* \(\omega_{q}^{\mathrm{opt}}\) is the optimally truncated frequency of the QNM \(q\), first defined in Eq. (53).
* \(y_{i}^{\ell}\) is the component of \(h_{i}(r,\chi)\) projected along \(P_{\ell}^{|m|}\), first defined in Eq. (9).
* \(z=\frac{2r_{\mathrm{H}}}{r}-1\) is the variable that maps \(r\) into a finite domain, first defined in Eq. (31).
## Appendix B An explicit example of the asymptotic behavior at the event horizon and spatial infinity
In this appendix, we explicitly apply the procedures described in Sec. III to obtain the asymptotic behaviour of the metric variables for a Schwarzschild BH, setting its mass \(M=1\) and \(m=2\) for simplicity.
To estimate the asymptotic behaviour, we need to specifically study 6 equations out of the 10 linearized Einstein equations. In this example, we focus on
\(\{tr,t\chi,t\phi,rr,r\chi,r\phi\}\) because these 6 equations contain the second-order \(r\)-derivative of only one perturbation function, \(h_{5}(r,\chi)\). Thus, we have the \(Y_{5}\) element but no other \(Y_{i\neq 5}\) elements in \(\mathbf{y}\). Other choices of 6 equations contain the second-order \(r\)-derivatives of more perturbation functions, making the calculations less convenient. To limit the length of this example, we only include two associate Legendre polynomials (\(\mathcal{N}_{\chi}=2\)),
\[h_{i}(r,\chi)=\sum_{\ell=2,3}y_{i}^{\ell}(r)P_{\ell}^{|m|}(\chi). \tag{101}\]
The resulting system of ordinary differential equations contains
\[\frac{d^{2}y_{5}^{2}}{dr^{2}},\quad\text{and}\quad\frac{d^{2}y_{5}^{3}}{dr^{2}}.\]
To keep the system of ordinary differential equations first order, we write
\[\frac{dy_{5}^{2}}{dr}=Y_{5}^{2}\ \ \text{and}\ \ \frac{dy_{5}^{3}}{dr}=Y_{5}^{3}.\]
Hence, \(\mathbf{y}\) is a 14-vector (\(14=2\times(6+1)\)),
\[\begin{split}\mathbf{y}&=(y_{1}^{2},y_{1}^{3},y_{ 2}^{2},y_{2}^{3},y_{3}^{2},y_{3}^{3},y_{4}^{2},\\ &\ \
\[\mathbb{R}_{13}(r) =\frac{36(r-2)}{7}, \mathbb{R}_{6~{}12}(r) =\frac{20}{7}\left(r^{3}\omega^{2}-10r+20\right),\] \[\mathbb{R}_{15}(r) =\frac{12}{7}i(r-2)r\omega, \mathbb{R}_{7~{}13}(r) =1,\] \[\mathbb{R}_{84}(r) =8(r-2),\] \[\mathbb{R}_{86}(r) =\frac{4}{3}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}9}(r) =\frac{32}{15}i(r-2)r\omega, \mathbb{R}_{10~{}11}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{11~{}9}(r) =\frac{4}{3}\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)\] \[\mathbb{R}_{12~{}11}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}11}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}11}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}11}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}11}(r) =-\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}11}(r) =-\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{10~{}11}(r) =-\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{14~{}12}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{14~{}12}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{11~{}2}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{11~{}2}(r) =\frac{4}{3}\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)\] \[\mathbb{R}_{11~{}9}(r) =-\frac{8}{5}i(r-2)r\omega,\] \[\mathbb{R}_{11~{}11}(r) =\frac{4}{3}\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)\] \[\mathbb{R}_{11~{}13}(r) =\frac{4}{3}\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)\] \[\mathbb{R}_{12~{}11}(r) =\frac{16}{15}r(2r+3),\] \[\mathbb{R}_{12~{}3}(r) =-\frac{16i\left(r^{3}\omega^{2}-3\right)}{15\omega},\] \[\mathbb{R}_{12~{}7}(r) =-\frac{16r\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)}{15(r-2)},\] \[\mathbb{R}_{13~{}9}(r) =-\frac{32}{15}ir^{2}\omega,\] \[\mathbb{R}_{13~{}11}(r) =-\frac{16}{15}\left(r^{3}\omega^{2}-4r+8\right),\] \[\mathbb{R}_{13~{}13}(r) =\frac{16}{15}ir^{3}\omega,\] \[\mathbb{R}_{14~{}11}(r) =-\frac{6r+9}{5r^{2}\omega},\] \[\mathbb{R}_{14~{}3}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{14~{}3}(r) =\frac{32}{15}i(r-2)r\omega,\] \[\mathbb{R}_{11~{}2}(r) =\frac{4}{3}\left(-5r+\frac{6}{r}+7\right),\] \[\mathbb{R}_{11~{}4}(r) =\frac{4i(r-2)\left(r^{3}\omega^{2}-6\right)}{3r^{2}\omega},\] \[\mathbb{R}_{11~{}4}(r) =\frac{4}{3}\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)\] \[\mathbb{R}_{11~{}9}(r) =-\frac{8}{5}(r-2)\omega,\] \[\mathbb{R}_{11~{}11}(r) =\frac{4i(r-2)\left(r^{3}\omega^{2}-4r+8\right)}{5r^{2}},\] \[\mathbb{R}_{11~{}13}(r) =\frac{4}{5}(r-2)r\omega,\] \[\mathbb{R}_{12~{}1}(r) =\frac{16}{15}r(2r+3),\] \[\mathbb{R}_{12~{}3}(r) =-\frac{16i\left(r^{3}\omega^{2}-3\right)}{15\omega},\] \[\mathbb{R}_{12~{}7}(r) =-\frac{16r\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)}{15(r-2)},\] \[\mathbb{R}_{13~{}9}(r) =-\frac{32}{15}ir^{2}\omega,\] \[\mathbb{R}_{13~{}11}(r) =-\frac{16}{15}\left(r^{3}\omega^{2}-4r+8\right),\] \[\mathbb{R}_{13~{}13}(r) =\frac{16}{15}ir^{3}\omega,\] \[\mathbb{R}_{14~{}1}(r) =-\frac{6r+9}{5r^{2}\omega},\] \[\mathbb{R}_{14~{}3}(r) =\frac{3i}{5}-\frac{9i}{5r^{3}\omega^{2}},\] \[\mathbb{R}_{14~{}7}(r) =\frac{3\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)}{5(r-2)r^{2}\omega},\] \[\mathbb{R}_{14~{}10}(r) =-\frac{2}{r},\] \[\mathbb{R}_{14~{}12}(r) =\frac{i\left(r^{3}\omega^{2}-10r+20\right)}{r^{3}\omega},\] \[\mathbb{R}_{14~{}14}(r) =1.\]
By reading the 5th and 6th column of \(\mathbb{Q}(r)\), we identify two algebraic variables, \(y_{3}^{2}\) and \(y_{3}^{3}\). By solving the ODEs represented by the 1st and 2nd row of \(\mathbb{Q}(r)\) and \(\mathbb{R}(r)\) for \(y_{3}^{2}\) and \(y_{3}^{3}\), we have
\[y_{3}^{2} =\frac{3i}{r\omega}y_{2}^{2}+\frac{(r-3)}{r-2}y_{4}^{2}+r\frac{dy _{4}^{2}}{dr}, \tag{103}\] \[y_{3}^{3} =-\frac{1}{5r^{3}\omega}\left(5r^{3}\omega y_{4}^{3}+3r^{2}(r-2) \frac{dY_{5}^{5}}{dr}+ir\left((r-2)\left(3r\omega\frac{dy_{6}^{2}}{dr}+6\omega \frac{dy_{6}^{2}}{dr}+5r\frac{dy_{2}^{3}}{dr}\right)+10y_{2}^{3}\right)+(12-18r )y_{5}^{2}\right).\]
As all algebraic variables have been expressed in terms of the differential variables and at most their first-order \(r\) derivative, the system of ordinary differential equations remains first order if we substitute the algebraic variables back into the system.
We substitute \(y_{3}^{2}\) and \(y_{3}^{3}\) back to the system of ordinary differential equations. Now \(\tilde{\mathbf{y}}\) is a 12 vector (\(2=14-2\)), and \(\tilde{\mathbb{Q}}(r)\) and \(\tilde{\mathbb{R}}(r)\) are \(12\times 12\) matrices. After some elementary row operations to simplify \(\tilde{\mathbb{Q}}(r)\), we have
\[\tilde{\mathbb{Q}}(r)=\begin{pmatrix}0&0&\frac{12}{7}iA&0&\frac{12r^{4}\omega} {7}&0&0&0&\frac{20}{7}iA\omega&0&\frac{20}{7}A\\ -\frac{12}{7}B&0&0&0&\frac{12(7-2)}{7}&0&0&0&0&0&0\\ 0&\frac{20}{7}A&0&\frac{20iC}{7\omega}&0&\frac{20}{7}A&-\frac{12D}{7}&0&\frac {12}{7}iC&0&\frac{12C}{7\omega}&0\\ 0&0&0&0&0&0&0&\frac{20}{7}iD&0&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0\\ 0&0&0&-\frac{4}{3}B&0&\frac{4}{3}iA\omega&0&0&-\frac{4}{5}B\omega&0&\frac{4}{ 5}iB&0\\ 0&0&0&0&0&0&0&0&0&-\frac{16}{9}A\omega&0&\frac{16}{9}iA\\ 0&0&0&0&0&0&0&0&-\frac{16}{15}iA\omega&0&-\frac{16}{15}A&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0\\ \end{pmatrix}, \tag{104}\]
and the corresponding \(\tilde{\mathbb{R}}\) has the following non-zero elements,
\[\tilde{\mathbb{R}}_{13}(r) =-\frac{12}{7}ir(3r+2), \tilde{\mathbb{R}}_{41}(r) =\frac{12}{7}ir(2r+3),\] \[\tilde{\mathbb{R}}_{15}(r) =-\frac{12r^{3}(2r-5)\omega}{7(r-2)}, \tilde{\mathbb{R}}_{43}(r) =\frac{12}{7}\frac{\left(r^{3}\omega^{2}-3\right)}{7\omega},\] \[\tilde{\mathbb{R}}_{18}(r) =\frac{80}{7}(3r-1), \tilde{\mathbb{R}}_{45}(r) =-\frac{12ir\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)}{7(r-2)},\] \[\tilde{\mathbb{R}}_{11}\;_{10}(r) =\frac{1}{7}(-40)i(r-2)r\omega, \tilde{\mathbb{R}}_{48}(r) =\frac{40}{7}ir^{2}\omega,\] \[\tilde{\mathbb{R}}_{21}(r) =-\frac{36}{7}(r-2), \tilde{\mathbb{R}}_{4}\;_{10}(r) =\frac{20}{7}\left(r^{3}\omega^{2}-10r+20\right),\] \[\tilde{\mathbb{R}}_{23}(r) =\frac{12i(r-2)\left(2r^{2}\omega^{2}-3\right)}{7r\omega}, \tilde{\mathbb{R}}_{5\;\;11}(r) =1,\] \[\tilde{\mathbb{R}}_{25}(r) =\frac{12}{7}\left(r^{3}\omega^{2}-3r+7\right), \tilde{\mathbb{R}}_{32}(r) =\frac{20}{7}(r-3)r, \tilde{\mathbb{R}}_{44}(r) =\frac{8}{3}\left(3r-\frac{2}{r}-5\right),\] \[\tilde{\mathbb{R}}_{32}(r) =\frac{20}{7}(r-3)r, \tilde{\mathbb{R}}_{44}(r) =-\frac{4}{3}ir(2r-5)\omega,\] \[\tilde{\mathbb{R}}_{34}(r) =-\frac{20i\left(r^{4}\omega^{2}+2r-2\right)}{7r\omega}, \tilde{\mathbb{R}}_{36}(r) =\frac{8i\left(3r^{2}-8r+4\right)}{5r^{2}},\] \[\tilde{\mathbb{R}}_{36}(r) =-\frac{20}{7}(r-1)r, \tilde{\mathbb{R}}_{37}(r) =\frac{8r-2)^{2}\omega}{5r},\] \[\tilde{\mathbb{R}}_{37}(r) =-\frac{24\left(r^{4}\omega^{2}-3r^{2}+5r-2\right)}{7r^{2}\omega}, \tilde{\mathbb{R}}_{45}(r) =\frac{64}{9}i(3r-1),\] \[\tilde{\mathbb{R}}_{39}(r) =\frac{12i\left(r^{4}\omega^{2}-6r^{2}+14r-4\right)}{7r}, \tilde{\mathbb{R}}_{7\;\;10}(r) =\frac{32}{9}(r-2)r\omega,\] \[\tilde{\mathbb{R}}_{87}(r) =-\frac{32}{15}(3r-2),\]
\[\begin{split}\tilde{\mathbb{R}}_{88}(r)&=\frac{32}{15}i(r-2 )r\omega,\\ \tilde{\mathbb{R}}_{92}(r)&=\frac{4}{3}\left(-5r+ \frac{6}{r}+7\right),\\ \tilde{\mathbb{R}}_{94}(r)&=\frac{4i(r-2)\left(r^{3} \omega^{2}-6\right)}{3r^{2}\omega},\\ \tilde{\mathbb{R}}_{96}(r)&=\frac{4\left(r^{4} \omega^{2}-5r^{2}+9r+3\right)}{3r},\\ \tilde{\mathbb{R}}_{97}(r)&=-\frac{8}{5}(r-2)\omega, \\ \tilde{\mathbb{R}}_{99}(r)&=\frac{4i(r-2)\left(r^{3} \omega^{2}-4r+8\right)}{5r^{2}},\\ \tilde{\mathbb{R}}_{9~{}11}(r)&=\frac{4}{5}(r-2)r \omega,\\ \tilde{\mathbb{R}}_{10~{}1}(r)&=\frac{16}{15}r(2r+3),\\ \tilde{\mathbb{R}}_{10~{}3}(r)&=-\frac{16i\left(r^{ 3}\omega^{2}-3\right)}{15\omega},\\ \tilde{\mathbb{R}}_{10~{}5}(r)&=-\frac{16r\left(r^{ 4}\omega^{2}-2r^{2}+3r+3\right)}{15(r-2)},\\ \tilde{\mathbb{R}}_{11~{}7}(r)&=-\frac{32}{15}ir^{2 }\omega,\\ \tilde{\mathbb{R}}_{11~{}9}(r)&=-\frac{16}{15}\left(r ^{3}\omega^{2}-4r+8\right),\\ \tilde{\mathbb{R}}_{11~{}11}(r)&=\frac{16}{15}ir^{3} \omega,\\ \tilde{\mathbb{R}}_{12~{}1}(r)&=-\frac{6r+9}{5r^{2} \omega},\\ \tilde{\mathbb{R}}_{12~{}3}(r)&=\frac{3i}{5}-\frac{9 i}{5r^{3}\omega^{2}},\\ \tilde{\mathbb{R}}_{12~{}5}(r)&=\frac{3\left(r^{4} \omega^{2}-2r^{2}+3r+3\right)}{5(r-2)r^{2}\omega},\\ \tilde{\mathbb{R}}_{12~{}8}(r)&=-\frac{2}{r},\\ \tilde{\mathbb{R}}_{12~{}10}(r)&=\frac{i\left(r^{3} \omega^{2}-10r+20\right)}{r^{3}\omega},\\ \tilde{\mathbb{R}}_{12~{}12}(r)&=1\end{split} \tag{105}\]
Now the 9th to 12th row of \(\tilde{\mathbb{Q}}(r)\) are all zeros. By reading the corresponding rows of \(\tilde{\mathbb{R}}(r)\), we obtain the following 4 algebraic equations,
\[\begin{split}& 4\big{(}5i(r-2)\left(r^{3}\omega^{2}-6\right)y_{2}^{ 3}-5r\left(5r^{2}-7r-6\right)\omega y_{1}^{3}+5r\omega\left(r^{4}\omega^{2}-5r ^{2}+9r+3\right)y_{4}^{3}\\ &\qquad\qquad+3(r-2)\omega\left(i\left(r^{3}\omega^{2}-4r+8 \right)y_{6}^{2}+r^{3}\omega Y_{5}^{2}-2r^{2}\omega y_{5}^{2}\right)\big{)}=0, \\ & 16\left(i(r-2)\left(r^{3}\omega^{2}-3\right)y_{2}^{2}+r\left( -2r^{2}+r+6\right)\omega y_{1}^{2}+r\omega\left(r^{4}\omega^{2}-2r^{2}+3r+3 \right)y_{4}^{2}\right)=0,\\ &\quad\left(r^{3}\omega^{2}-4r+8\right)y_{6}^{2}-ir^{3}\omega Y _{5}^{2}+2ir^{2}\omega y_{5}^{2}=0\\ &\quad 3i(r-2)\left(r^{3}\omega^{2}-3\right)y_{2}^{2}-3r\left(2r^{2}- r-6\right)\omega y_{1}^{2}+3r\omega\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)y_{4}^{2} \\ &\qquad\qquad+5i(r-2)\omega\left(\left(r^{3}\omega^{2}-10r+20 \right)y_{6}^{3}-ir^{3}\omega Y_{5}^{3}+2ir^{2}\omega Y_{5}^{3}\right)=0\,. \end{split} \tag{106}\]
These algebraic equations allow us to express 4 differential variables in terms of the remaining 8 (\(=12-4\)) differential variables in 81 different combinations. Each of these 81 combinations leads to a \(\mathbb{M}(r)\) of \(0\leq p_{\infty}\leq 2\). Eventually, we solve the algebraic equations for \(y_{4}^{2},y_{4}^{3},y_{7}^{2}\) and \(y_{7}^{3}\),
\[\begin{split} y_{4}^{2}&=\frac{(r-2)\left(r(2r+3) \omega y_{1}^{2}-i\left(r^{3}\omega^{2}-3\right)y_{2}^{2}\right)}{r\omega\left( r\left(r^{3}\omega^{2}-2r+3\right)+3\right)},\\ y_{4}^{3}&=\frac{(r-2)\left(r(5r+3)\omega y_{1}^{3}- i\left(r^{3}\omega^{2}-6\right)y_{2}^{3}\right)}{r\omega\left(r\left(r^{3}\omega^{2}-5r+9 \right)+3\right)},\\ Y_{5}^{2}&=\frac{2y_{5}^{2}}{r}-\frac{i\left(r^{3} \omega^{2}-4r+8\right)y_{6}^{2}}{r^{3}\omega},\\ Y_{5}^{3}&=\frac{2y_{5}^{3}}{r}-\frac{i\left(r^{3} \omega^{2}-10r+20\right)y_{6}^{3}}{r^{3}\omega},\end{split} \tag{107}\]
for two advantages. Firstly, eliminating these 4 variables leads to a \(\mathbb{M}(r)\) of \(p_{\infty}=0\), with both \(\mathbb{M}_{0}\) and \(\mathbb{M}_{-1}\) diagonalizable. This is a crucial advantage because it drastically reduces the difficulty to diagonalize \(\mathbb{M}(r)\) and study the asymptotic behaviour of \(\mathbf{y}\) for larger \(p_{\infty}\). Secondly, this combination eliminates all the differential variables concerning the second-order \(r\)-derivative of the metric perturbation functions, which are less relevant to our studies as no metric perturbations are expressed as the \(r\)-derivatives of \(h_{i}\).
We now have a system of ordinary differential equations of the form of Eq. (20) concerning \(\text{rank}(\mathbb{Q})-N_{\text{alg}}=10-2=8\) differential variables, with
\[\mathbf{z}=(y_{1}^{2},y_{1}^{3},y_{2}^{2},y_{2}^{3},y_{5}^{2},y_{5}^{3},y_{6}^{ 2}.y_{6}^{3})^{\text{T}}. \tag{108}\]
The non-zero elements of \(\mathbb{M}(r)\) are
\[\begin{split}\mathbb{M}_{11}(r)&=\frac{r^{5}\omega^{2}-4r ^{4}\omega^{2}+4r^{2}-12r+6}{(r-2)r\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)}, \\ \mathbb{M}_{13}(r)&=-\frac{i\left(r^{6}\omega^{4}-4r ^{4}\omega^{2}+4r^{3}\omega^{2}+r^{2}\left(9\omega^{2}+6\right)-24r+24\right) }{(r-2)r\omega\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)},\end{split} \tag{111}\]
\[\begin{split}\mathbb{M}_{22}(r)&=\frac{r^{5}\omega^{2} -4r^{4}\omega^{2}+7r^{2}-18r+6}{(r-2)r\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)},\\ \mathbb{M}_{24}(r)&=-\frac{i\left(r^{6}\omega^{4}- 10r^{4}\omega^{2}+16r^{3}\omega^{2}+r^{2}\left(9\omega^{2}+30\right)-120r+120 \right)}{(r-2)r\omega\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)},\end{split} \tag{112}\]
\[\begin{split}\mathbb{M}_{31}(r)&=-\frac{ir\omega \left(r^{4}\omega^{2}-4r^{2}+4r+9\right)}{(r-2)\left(r^{4}\omega^{2}-2r^{2}+3r +3\right)},\\ \mathbb{M}_{33}(r)&=\frac{r^{5}\omega^{2}-4r^{4} \omega^{2}+r^{2}-6}{(r-2)r\left(r^{4}\omega^{2}-2r^{2}+3r+3\right)},\\ \mathbb{M}_{42}(r)&=-\frac{ir\omega\left(r^{4}\omega^ {2}-10r^{2}+16r+9\right)}{(r-2)\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)},\\ \mathbb{M}_{44}(r)&=\frac{r^{5}\omega^{2}-4r^{4} \omega^{2}+4r^{2}-6r-6}{(r-2)r\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)},\\ \mathbb{M}_{55}(r)&=\frac{2}{r},\\ \mathbb{M}_{57}(r)&=-\frac{i\left(r^{3}\omega^{2}-4r+ 8\right)}{r^{3}\omega},\end{split}\qquad\qquad\qquad\qquad\qquad\qquad \mathbb{M}_{66}(r)=\frac{2}{r},\\ \mathbb{M}_{68}(r)&=-\frac{i\left(r^{3}\omega^{2}-10r+ 20\right)}{r^{3}\omega},\\ \mathbb{M}_{69}(r)&=-\frac{i\left(r^{3}\omega^{2}-10r+ 20\right)}{r^{3}\omega},\end{split}\qquad\qquad\qquad\qquad\qquad\mathbb{ M}_{68}(r)=-\frac{i\left(r^{3}\omega^{2}-10r+20\right)}{r^{3}\omega}, \tag{113}\]
\[\begin{split}\mathbb{M}_{44}(r)&=\frac{r^{5}\omega^{2} -4r^{4}\omega^{2}+4r^{2}-6r-6}{(r-2)r\left(r^{4}\omega^{2}-5r^{2}+9r+3\right)},\\ \mathbb{M}_{55}(r)&=\frac{2}{r},\\ \mathbb{M}_{57}(r)&=-\frac{i\left(r^{3}\omega^{2}-4r+ 8\right)}{r^{3}\omega},\end{split}\qquad\qquad\qquad\qquad\qquad\mathbb{ M}_{88}(r)=-\frac{2}{(r-2)r}.\end{split}\]
At \(r=\infty\), we express \(\mathbb{M}(r)\) as a power series of \(r\) and discard terms that drop faster than \(\mathcal{O}(r^{-2})\),
\[\mathbb{M}(r)\approx\mathbb{M}_{0}+\frac{\mathbb{M}_{-1}}{r}, \tag{114}\]
Both \(\mathbb{M}_{0}\) and \(\mathbb{M}_{-1}\) are diagonalizable. We first diagonalize \(\mathbb{M}_{0}\) by writing \(\mathbb{M}_{0}=\mathbb{P}_{1}\mathbb{M}_{0}^{(1)}\mathbb{P}_{1}^{-1}\) such that
\[\mathbb{M}_{0}^{(1)}=\begin{pmatrix}-i\omega&0&0&0&0&0&0\\ 0&-i\omega&0&0&0&0&0\\ 0&0&-i\omega&0&0&0&0\\ 0&0&0&-i\omega&0&0&0&0\\ 0&0&0&i\omega&0&0&0\\ 0&0&0&0&i\omega&0&0\\ 0&0&0&0&0&i\omega&0&0\\ 0&0&0&0&0&i\omega&0\\ 0&0&0&0&0&0&i\omega\\ \end{pmatrix}\quad\text{and}\quad\mathbb{P}_{1}=\begin{pmatrix}0&0&0&1&0&0&0&-1\\ 0&0&1&0&0&0&-1&0\\ 0&0&0&1&0&0&0&1\\ 0&1&0&0&0&-1&0&0\\ 1&0&0&0&-1&0&0&0\\ 0&1&0&0&0&1&0&0\\ 1&0&0&0&1&0&0&0\\ \end{pmatrix}. \tag{115}\]
We change \(\mathbf{z}\) into \(\mathbf{z}^{(1)}=\mathbb{P}_{1}\mathbf{z}\), which satisfies another system of ordinary differential equations,
\[\frac{d\mathbf{z}^{(1)}}{dr} =\mathbb{M}^{(1)}(r)\mathbf{z}^{(1)}, \tag{108}\] \[\mathbb{M}^{(1)}(r) =\mathbb{P}_{1}^{-1}\mathbb{M}(r)\mathbb{P}_{1}-\mathbb{P}_{1}^{ -1}\frac{d\mathbb{P}_{1}}{dr}=\mathbb{M}_{0}^{(1)}+\frac{\mathbb{M}_{-1}^{(1)} }{r},\] (109) \[\mathbb{M}_{-1}^{(1)} =\begin{pmatrix}1-2i\omega&0&0&0&2i\omega-1&0&0&0\\ 0&1-2i\omega&0&0&0&2i\omega-1&0&0\\ 0&0&1-2i\omega&0&0&0&0\\ 0&0&0&1-2i\omega&0&0&0&0\\ -2i\omega-1&0&0&0&2i\omega+1&0&0\\ 0&-2i\omega-1&0&0&0&2i\omega+1&0&0\\ 0&0&0&0&0&0&2i\omega+1&0\\ 0&0&0&0&0&0&2i\omega+1\end{pmatrix}.\]
We can diagonalize \(\mathbb{M}_{-1}^{(1)}\) while keeping \(\mathbb{M}_{0}^{(1)}\) unchanged by further changing \(\mathbf{z}^{(1)}\) into \(\mathbf{z}^{(2)}=\mathbb{P}_{2}\mathbf{z}^{(1)}\), where \(\mathbb{P}_{2}=1+\frac{\Sigma}{r}\), provided that \(\Sigma\) satisfies the matrix equation
\[D_{-1}=\mathbb{M}_{-1}^{(1)}+\left[D_{0},\Sigma\right]. \tag{110}\]
Here \(D_{0}\) and \(D_{-1}\) are respectively the diagonal part of \(\mathbb{M}_{0}^{(1)}\) and \(\mathbb{M}_{-1}^{(1)}\). The matrix equation gives
\[\Sigma=\frac{1}{2\omega}\begin{pmatrix}0&0&0&0&-i(2i\omega-1)&0&0&0\\ 0&0&0&0&0&-i(2i\omega-1)&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ i(-2i\omega-1)&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\end{pmatrix}. \tag{111}\]
With this \(\mathbb{P}_{2}\), \(\mathbf{z}^{(2)}\) satisfies the system of ordinary differential equations whose coefficient matrix \(\mathbb{M}^{(2)}(r)\) is given by
\[\mathbb{M}^{(2)}(r)=\begin{pmatrix}\frac{1-2i\omega}{r}-i\omega&0&0&0&0&0&0&0 \\ 0&\frac{1-2i\omega}{r}-i\omega&0&0&0&0&0&0\\ 0&0&\frac{1-2i\omega}{r}-i\omega&0&0&0&0&0\\ 0&0&0&\frac{1-2i\omega}{r}-i\omega&0&0&0&0\\ 0&0&0&0&i\omega+\frac{2i\omega+1}{r}&0&0&0\\ 0&0&0&0&0&i\omega+\frac{2i\omega+1}{r}&0&0\\ 0&0&0&0&0&0&i\omega+\frac{2i\omega+1}{r}&0\\ 0&0&0&0&0&0&i\omega+\frac{2i\omega+1}{r}\\ 0&0&0&0&0&0&0&i\omega+\frac{2i\omega+1}{r}\end{pmatrix}. \tag{112}\]
Since \(\mathbb{M}^{(2)}(r)\) is now diagonal, we can readily solve the system of ordinary differential equations for \(\mathbf{z}^{(2)}\),
\[\mathbf{z}^{(2)}=\begin{pmatrix}c_{1}r^{1-2i\omega}e^{-i\omega r}\\ c_{2}r^{1-2i\omega}e^{-i\omega r}\\ c_{3}r^{1-2i\omega}e^{-i\omega r}\\ c_{4}r^{1-2i\omega}e^{-i\omega r}\\ c_{5}r^{1+2i\omega}e^{+i\omega r}\\ c_{6}r^{1+2i\omega}e^{+i\omega r}\\ c_{7}r^{1+2i\omega}e^{+i\omega r}\\ c_{8}r^{1+2i\omega}e^{+i\omega r}\end{pmatrix}, \tag{113}\]
where \(c_{1},c_{2},...,c_{8}\) are constants. The asymptotic behaviour of \(\mathbf{z}\) can be obtained by the inverse transformations
\[\mathbf{z}=\mathbb{P}_{1}\mathbb{P}_{2}\mathbf{z}^{(2)}. \tag{114}\]
As QNMs correspond to GWs that are purely outgoing at spatial infinity, we can just set \(c_{1}=c_{2}=c_{3}=c_{4}=0\). By setting these constants to be zero, using the algebraic equations and the relations between the algebraic variables and differential variables, we find
\[y_{i}^{\ell=2,3}(r\rightarrow+\infty)\propto\begin{cases}r^{1+2i\omega}e^{i \omega r}\text{ for }i\neq 4,\\ r^{2i\omega}e^{i\omega r}\text{ for }i=4.\end{cases} \tag{115}\]
Eq. (113) makes good physical sense. First, we simultaneously obtain the ingoing and outgoing asymptotic
behaviour at spatial infinity. This is consistent with the wave nature of the metric perturbations, since these can be ingoing and outgoing at spatial infinity. Second, we recognize that \(r^{\pm 2i\omega}e^{\pm i\omega r-i\omega t}\approx e^{\pm i\omega r_{*}-i\omega t}\) at spatial infinity, which implies that the waves are propagating at the speed of light relative to observers at spatial infinity. Finally, we observe that Eq. (107) does not depend on \(\ell\). We confirm this observation by extending our calculations to \(\mathcal{N}_{\chi}=19\) (thus 20 associated Legendre polynomials are included) and we obtain the same asymptotic behaviour. The independence on \(\ell\) is consistent with the existing calculations of the asymptotic behaviour of the gravitational perturbations around a Schwarzschild BH
The asymptotic behaviour of \(y^{\ell}\) at the event horizon can be similarly obtained. We shall omit the details of the calculations at the horizon as they are completely analogous to the above, and simply report the asymptotic behaviour, which is purely ingoing at the horizon,
\[y_{i}^{\ell=2,3}(r\to r_{n})\propto\begin{cases}(r-r_{n})^{-1-2i\omega}&\text{ for $i\neq 4$ and $5$},\\ (r-r_{n})^{-2i\omega}&\text{ for $i=4$ and $5$}.\end{cases} \tag{108}\]
We would like to point out the flexibility of our estimates of the asymptotic behaviour of the metric perturbation functions. We can eliminate the algebraic variables by solving 2 differential equations, such as those corresponding to the 1st and 5th row. We can also eliminate different differential variables using the obtained algebraic equations. One can show that eliminating other differential variables will not affect the QNM frequencies. To see this, we go back to step 4 and eliminate \(y_{2}^{\ell=2,3}\) instead, which leads to another vector,
\[\bar{\mathbf{z}}=(y_{1}^{2},y_{1}^{3},y_{4}^{2},y_{4}^{3},y_{5}^{2},y_{5}^{3}, y_{6}^{2}.y_{6}^{3})^{\text{T}}. \tag{109}\]
According to Eq. (107), \(\bar{\mathbf{z}}\) and \(\mathbf{z}\) are related by a transformation matrix,
\[\bar{\mathbf{z}}=\bar{\mathbb{P}}(r,\omega)\mathbf{z}, \tag{110}\]
where
\[\bar{\mathbb{P}}(r,\omega)=\begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ \frac{(r-2)(r(2r+3)\omega}{r\omega(r(r^{3}\omega^{2}-2r+3)+3)}&0&-\frac{(r-2) \left(r^{3}\omega^{2}-3\right)}{r\omega(r(r^{3}\omega^{2}-2r+3)+3)}&0&0&0&0\\ 0&\frac{(r-2)(r(5r+3)\omega)}{r\omega(r(r^{3}\omega^{2}-5r+9)+3)}&0&-i\frac{(r -2)\left(i\left(r^{3}\omega^{2}-6\right)\right)}{r\omega(r(r^{3}\omega^{2}-5r +9)+3)}&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\end{pmatrix}. \tag{111}\]
Thus, the system of ordinary differential equations satisfied by \(\bar{\mathbf{z}}\) and that by \(\mathbf{z}\),
\[\begin{split}\frac{d\bar{\mathbf{z}}}{dr}&=\bar{\mathbb{M}}(r, \omega)\bar{\mathbf{z}},\\ \frac{d\mathbf{z}}{dr}&=\mathbb{M}(r,\omega)\mathbf{z},\end{split} \tag{112}\]
is related by
\[\mathbb{M}=\bar{\mathbb{P}}^{-1}\bar{\mathbb{M}}\bar{\mathbb{P}}-\bar{\mathbb{ P}}^{-1}\frac{d\bar{\mathbb{P}}}{dr}. \tag{113}\]
In other words, the two systems of ordinary differential equations s are equivalent, even though the \(p_{\infty}\) and \(p_{H}\) of \(\mathbb{M}\) and \(\bar{\mathbb{M}}\) may be different. Moreover, both systems of ordinary differential equations admit the same QNM frequencies, even though \(\mathbb{M}\) and \(\bar{\mathbb{M}}\) are seemingly different, because \(\omega\) is not altered through the transformation of \(\bar{\mathbb{P}}\). We have checked that all these changes eventually lead to the same asymptotic behaviour of the perturbation functions, despite the calculations in the middle being slightly different. This flexibility allows us to adjust the details of the calculations to make them the most convenient.
Finally, using the above calculations, we can derive the ODE satisfied by every \(y_{i=1,2,\ldots,6}^{\ell=2,3}\). The explicit equations are contained in a Mathematica notebook which is available upon request. A key feature of these ODEs is that those governing \(y_{i=1,2,\ldots,4}^{\ell=2,3}\) contain only \(y_{i=1,2,\ldots,4}^{\ell=2,3}\), and those governing \(y_{i=5,6}^{\ell=2,3}\) contain only \(y_{i=5,6}^{\ell=2,3}\). This property is consistent with the fact that, for perturbations of a Schwarzschild BH, the odd- and even-parity modes decouple.
## Appendix C Comparison with the existing calculations
In this appendix, we check the validity of our calculations by comparing their details to those in the existing literature [114; 115; 89; 155]. We find that the first equa
tion in Eqs. (101) is equivalent to
\[\frac{dK(r)}{dr}-\frac{1}{r}H_{0}(r)-\frac{i(\lambda+1)}{\omega r^{2 }}H_{1}(r)+\frac{1}{r}\frac{2r-3r_{s}}{2\left(r-r_{s}\right)}K(r)\] \[=0, \tag{102}\]
in the literature, where \(\lambda=\ell(\ell+1)-2\), and the definition of \(K\), \(H_{0}\) and \(H_{1}\) is given by [155, 90, 102, 90]. The second equation in Eqs. (101) is seemingly different from Eq. (102), but substituting the ordinary differential equations into Eq. (101), both equations are simplified to
\[y_{3}^{\ell=2,3}=-y_{1}^{\ell=2,3}, \tag{103}\]
which is equivalent to
\[H_{0}(r)=H_{2}(r) \tag{104}\]
in the existing literature. We note also that the first two lines of Eq. (100) correspond to
\[\left(\frac{3r_{\text{\tiny H}}}{r}+2\lambda\right)H_{0}(r)+ \left(\frac{ir_{\text{\tiny H}}(\lambda+1)}{\omega r^{2}}-2i\omega r\right)H_ {1}(r)\] \[+\frac{3r_{\text{\tiny H}}^{2}+2r_{\text{\tiny H}}(2\lambda-1)r- 4\lambda r^{2}+4\omega^{2}r^{4}}{2r\left(r-r_{\text{\tiny H}}\right)}K(r)=0. \tag{105}\]
This relation is consistent with previous calculations of even-parity perturbations of Schwarzschild BHs [115]. We have checked that our calculations are consistent with Eq. (102), Eq. (104), and Eq. (105) as we expand our calculations to \(\mathcal{N}_{\chi}=20\). Finally, we compare the asymptotic behaviour obtained in this paper with those in the existing literature. Our calculations of the asymptotic behaviour are clearly consistent with that of previous calculations.
## Appendix D Normwise scaling of quadratic eigenvalue problem
For the completeness of this paper, we briefly summerize the procedures of normwise scaling of quadratic eigenvalue problem. We refer the reader for the details of this scaling to [131, 132].
Consider a quadratic eigenvalue problem
\[\left[\tilde{\mathbb{D}}_{0}+\tilde{\mathbb{D}}_{1}\omega+\tilde{\mathbb{D}}_ {2}\omega^{2}\right]\mathbf{v}=\mathbf{0}. \tag{106}\]
This quadratic eigenvalue problem is equivalent to
\[\left[\tilde{\mathbb{M}}_{0}+\tilde{\mathbb{M}}_{1}\omega+\tilde{\mathbb{M}}_{ 2}\omega^{2}\right]\mathbf{v}=\mathbf{0}, \tag{107}\]
where
\[\tilde{\mathbb{M}}_{2} =\sqrt{\frac{\|\tilde{\mathbb{D}}_{0}\|_{2}}{\|\tilde{\mathbb{D} }_{2}\|}}\,\tilde{\mathbb{D}}_{2}\] \[\tilde{\mathbb{M}}_{1} =\frac{1}{\sqrt{\|\tilde{\mathbb{D}}_{0}\|_{2}\|\tilde{\mathbb{D} }_{2}\|}}\tilde{\mathbb{D}}_{1} \tag{108}\] \[\tilde{\mathbb{M}}_{0} =\frac{\tilde{\mathbb{D}}_{0}}{\|\tilde{\mathbb{D}}_{0}\|_{2}},\]
where \(\|\mathbb{A}\|_{2}\) is the 2-norm of the matrix \(\mathbb{A}\), defined as
\[\|\mathbb{A}\|_{2}=\sup_{\mathbf{x}\neq 0}\frac{\|\mathbb{A}\mathbf{x}\|_{2}}{ \|\mathbf{x}\|_{2}}, \tag{109}\]
and \(\|\mathbf{x}\|_{2}\) is the 2-norm of the vector \(\mathbf{x}\). It is shown that this definition is equivalent to [156]
\[\|\mathbb{A}\|_{2}=\sqrt{\max|\lambda(\mathbb{A}^{\dagger}\mathbb{A})|}, \tag{110}\]
where \(\max|\lambda(\mathbb{A}^{\dagger}\mathbb{A})|\) stands for the maximum modulus of the eigenvalue of the Hermitian matrix \(\mathbb{A}^{\dagger}\mathbb{A}\). Eq. (109) is also how we computed the 2-norm of \(\tilde{\mathbb{D}}_{0}\), \(\tilde{\mathbb{D}}_{1}\) and \(\tilde{\mathbb{D}}_{3}\) for the scaling before computing the generalized eigenvalues.
|
2308.04514 | Heuristics for Supporting Cooperative Dashboard Design | Dashboards are no longer mere static displays of metrics; through
functionality such as interaction and storytelling, they have evolved to
support analytic and communicative goals like monitoring and reporting.
Existing dashboard design guidelines, however, are often unable to account for
this expanded scope as they largely focus on best practices for visual design.
In contrast, we frame dashboard design as facilitating an analytical
conversation: a cooperative, interactive experience where a user may interact
with, reason about, or freely query the underlying data. By drawing on
established principles of conversational flow and communication, we define the
concept of a cooperative dashboard as one that enables a fruitful and
productive analytical conversation, and derive a set of 39 dashboard design
heuristics to support effective analytical conversations. To assess the utility
of this framing, we asked 52 computer science and engineering graduate students
to apply our heuristics to critique and design dashboards as part of an
ungraded, opt-in homework assignment. Feedback from participants demonstrates
that our heuristics surface new reasons dashboards may fail, and encourage a
more fluid, supportive, and responsive style of dashboard design. Our approach
suggests several compelling directions for future work, including dashboard
authoring tools that better anticipate conversational turn-taking, repair, and
refinement and extending cooperative principles to other analytical workflows. | Vidya Setlur, Michael Correll, Arvind Satyanarayan, Melanie Tory | 2023-08-08T18:23:04Z | http://arxiv.org/abs/2308.04514v1 | # Heuristics for Supporting Cooperative Dashboard Design
###### Abstract
Dashboards are no longer mere static displays of metrics; through functionality such as interaction and storytelling, they have evolved to support analytic and communicative goals like monitoring and reporting. Existing dashboard design guidelines, however, are often unable to account for this expanded scope as they largely focus on best practices for visual design. In contrast, we frame dashboard design as facilitating an _analytical conversation_: a cooperative, interactive experience where a user may interact with, reason about, or freely query the underlying data. By drawing on established principles of conversational flow and communication, we define the concept of a _cooperative dashboard_ as one that enables a fruitful and productive analytical conversation, and derive a set of \(39\) dashboard design heuristics to support effective analytical conversations. To assess the utility of this framing, we asked \(52\) computer science and engineering graduate students to apply our heuristics to critique and design dashboards as part of an ungraded, opt-in homework assignment. Feedbacks from participants demonstrates that our heuristics surface new reasons dashboards may fail, and encourage a more fluid, supportive, and responsive style of dashboard design. Our approach suggests several compelling directions for future work, including dashboard authoring tools that better anticipate conversational turn-taking, repair, and refinement and extending cooperative principles to other analytical workflows.
Gricean maxims, interactive visualization, conversation initiation, grounding, turn-taking, repair and refinement.
## 1 Introduction
Dashboards have become ubiquitous for analyzing and communicating data because their expressive designs allow them to address a diverse range of purposes and contexts [66]. However, existing guidelines for dashboard design -- whether in research or popular press [27, 95] -- largely focus on issues of visual representation, perception, and graphic design. While important, this focus ignores the central role that interactivity and storytelling increasingly play in enabling users to explore, analyze, monitor, and track various data metrics [66]. What design guidance can we provide for these interactive capabilities?
Prior work has described interaction as _"engaging the data in dialogue"_[19, 85, 87] -- an analogy to human-human conversation that we find productive for thinking about what it means for a dashboard's interaction to be designed effectively. Just as a human conversationalist can be circumlocutory, obscuration, or rude, so too can interactions in a dashboard be repetitive, unclear, or user-unfriendly. Moreover, cleaning insights from data is most productive and enjoyable when users can focus on answering the questions they have about their data rather than the mechanics of doing so [88]. But, what makes a dashboard an effective conversational partner?
We operationalize the conversation analogy by studying the _pragmatics_ of language use, or how language shapes meaning [67]. We
define a dashboard as _cooperative_ if it facilitates an interactive loop that follows the _Gricean Maxims_[34] -- influential work in pragatics that assesses the quality of a cooperative, communicative interaction based on the quantity, quality, relation, and manner of information communicated. Moreover, we draw on work by Beebe et al. [7] to model a cooperative analytical conversation as one where participants (in our case, the dashboard and the analyst) move between states of initiation, grounding, turn-taking, repair & refinement, and close.
Guided by these two frameworks, we enumerate a set of heuristics focused on interactive, cooperative communication between a dashboard and a user. Through an iterative process with 16 visualization practitioners, we distill down to a set of 39 design heuristics for promoting the design of cooperative dashboard conversations. To evaluate the utility of these heuristics in practice, we conduct two exercises with 52 computer science and engineering graduate students as part of optional, ungraded homework assignments. First, students were asked to use the heuristics to reflect on the efficacy of existing dashboard designs. Next, the students were asked to create a new dashboard or update the design of an existing dashboard based on the heuristics to better support cooperative conversational behavior with their target users.
Results of the classroom exercises indicate that our heuristics afford a new perspective for thinking about dashboard design. While dashboards tend to be effective at initiation and grounding a conversation, they are often weaker with respect to turn-taking, repair & refinement, and close. For instance, students noted how interactive results updating in place without any accompanying cues or messaging hinders turn-taking, as it can be difficult for a user to assess when an interaction is complete so they can resume their dialogue. When applying these heuristics to improve existing dashboard designs, students relied on textual annotation to provide contextual information and deliberately traded off visual aesthetics for clearer communication. Our results suggest opportunities for future work to study the impact of cooperative vs. uncooperative dashboard designs and to extend principles of cooperative conversation to analytical workflows beyond the dashboard.
## 1 Related Work
Our work builds on three lines of research: understanding dashboard design and usage as representational media, conversational interactions with data, and design heuristics in HCI and visualization.
### Understanding Dashboard Design and Usage
Dashboards are pervasive. They operate as the primary portal to data for many people in work and daily life. Yet until recently, dashboards were given little attention by the visualization research community. A survey of dashboards in the wild [66] offered a classification of dashboards and highlighted their criticality as a means of circulating data within organizations. An extension by Bach et al. [5] identified six distinct dashboard genres and characterized content and composition design patterns. Dimara et al. discussed the role dashboards play in supporting data-driven decision making [20], Zhang et al. described the work practices and challenges of dashboard creators [99], Lee-Robins and Adar [49] characterized affective intents in visualizations and dashboards, and Tory et al. [86] discussed the work practices of dashboard users. Research into dashboard design and construction includes approaches to enable layout and view consistency [63] and semantic snapping [45]. Research into multiple coordinated views and composite visualizations is also relevant to dashboard design (for a survey see Roberts [64] or Deng et al. [17]). Dashboard design is typically a manual process that can be aided by design heuristics such as those introduced in this work. However, design heuristics may also be codified into systems that automatically generate dashboards (e.g. [41]) or provide mixed-initiative support for dashboard creation [12, 59, 97]. Our research focuses on better understanding the dialogue around dashboards by introducing a set of heuristics to support dashboard design and evaluation for analytical conversation.
### Conversational Interaction with Data
The novelty in our heuristics stems from framing people's interaction with dashboards as a conversation. Designers have long recognized the power of interacting with computers in ways that emulate our conversational interactions with people. A long history of research on chatbots and other conversational interfaces is summarized in several surveys [2, 3, 11, 52]. In recent years, this research theme has extended into interactions with data. A survey of natural language interfaces (NLIs) for data visualization was introduced by Shen et al. [75]. This body of work has led to an understanding of principles for cooperative communication design in conversational bots [11, 73], including behaviors such as communicability, conscientiousness, conniseness, manner, proactivity, and turn-taking (drawing on the Gricean Maxims [34]).
Recent work recognizes that these cooperative principles apply beyond the scope of interfaces that employ spoken or written language. Most relevant are papers that characterize interactions with data and/or dashboards as data _conversations_[58, 28, 55]. Muller et al. [55] described how data science workers engage in back-and-forth interactions with data, especially for data wrangling. Tory et al. [86] described how dashboards serve as a portal to data and a jumping-off point to further data activities. Their observation that dashboards alone are often ineffective in supporting these conversations, resulting in data being exported for use in spreadsheets, presentation tools, and reports, suggests a strong need for dashboards to evolve in ways that support more conversational forms of interaction. _BOLT_ explores the use of NLIs for dashboard authoring, wherein NL utterances are mapped to prevalent dashboard objectives to generate dashboard recommendations [80].Our work contributes a set of heuristics that can support designers in creating such cooperative, conversational dashboards and the systems that generate them.
### Heuristics in HCI and Visualization
Our dashboard heuristics build upon a long history of design heuristics for interfaces and visualizations. In interface design and evaluation, perhaps the most well-known are Nielsen's [56, 57] ten usability guidelines and Shneiderman et al.'s [77] eight golden rules. More specific heuristics have been developed for topics such as human-AI interaction [4], augmented reality [24, 31], and mobile computing [8], among many others. Researchers have proposed numerous heuristics specific to conversational interaction with chatbots and voice assistants [30, 36, 47, 53, 58, 82, 94].
Tory & Moller [87] explored usability heuristics as a way to evaluate visualizations. Subsequently, there have been numerous efforts to develop and evaluate visualization-specific heuristics [15, 21, 89, 83, 89, 100]. The numerous high-level books, guidelines, and principles around dashboard design are also relevant (e.g., [98, 27, 95]), as are frameworks of user goals or intents that may help to guide visualization design (e.g., [46, 49]), and design tools considered to support cognition [93]. More recently, Lin et al. [51] introduced a data-driven approach for identifying a set of dashboard design rules from dashboards mined from the web. The rules describe view-wise relationships in terms of data, encoding, layout, and interactions and subsequently develop a recommender for dashboard design.
However, heuristics and guidelines for dashboards tend to focus on layout, structure, data and its visual representation, and usability. Our work augments these guidelines based on principles of cooperative conversation. The conversational framing offers a different perspective that aligns with an evolution of dashboards away from autocratic information artifacts and towards cooperative conversational partners.
## 2 Analytic Conversation States
The motivation for this work stems from exploring how cooperative conversation guidelines for human-computer interfaces could inform the design and evaluation of interactive dashboards. Conversation is highly structured and organized according to set principles. Sacks et al. [65] initiated the modern literature on conversational behavior by outlining a system of social interactions with specific properties. This interaction is characterized by a mechanism of exchange based on alternating dialogues of information.
Beebe et al. [7] break conversation down into five states (i.e., initiation, grounding, turn-taking, repair & refinement, and close) that we adapt here for our discussion around interactive dashboards. While
Gricean Maxims [34] provide guidelines for assessing the overall quality of a conversation, the conversation states specifically help define how an analytical conversation progresses through different interaction states; they also help organize the heuristics. We maintain, as per Tony et al. [86], that the users of dashboards are similarly engaged in "data conversations", so conversational structures (and pitfalls) can apply to dashboards and to considerations for their design. In this section, we introduce and apply these conversational states to dashboard interaction for supporting analytical conversation with the user (Figure 1).
### Initiation
Initiation is the first stage of conversation and requires one to be open to interacting with the other conversational participant(s). Greetings such as, "Hello!" and "How nice to see you!" are common ways to set the tone to welcome further dialogue. Conversations can also be initiated without any preliminaries using utterances such as, "when will it stop raining?" or including vocative or attention-seeking utterances such as, "excuse me" or "hey!"
With respect to dashboard design, initiation can be thought of as both the state of the dashboard when the user first interacts with it, as well as any tutorials, explanations, or other tools for orienting the user to the dashboard's contents. Dhanoa et al. [18] suggest an "onboarding model" for new users of dashboards. A successful onboarding process, per this model, ismidt of the target user, the dashboard components that will need likely explanation, how these explanations will be serviced, and how this onboarding process connects to later patterns of usage. The means and goals of onboarding are then connected with an "onboarding narrative". For instance, a "depth-first narrative" might involve a serial explanation of every dashboard component (and their subcomponents) in detail. As in Figure 1, a successful initiation in dashboard design provides the user with **information and explanations** of components, but also clear options for where to begin to understand their data. A failure can occur either through the lack of appropriate onboarding (e.g., an insufficient quantity of onboarding for the user, insufficient relevance to their task, or missing context) or even by presenting a "data deluge" of too many unconnected or unstructured views without a clear reading order or spatial organization.
Other strategies for successful initiation are to provide users with **curated information and metadata**. For instance, as in Srinivasan et al. [79], dashboards can be augmented with "data facts" of potentially important relationships or patterns in the data. Or, as Gebru et al. [33], a "datasheet" or important context and metadata could be provided to a user prior to any analysis.
### Grounding
Grounding refers to establishing the time, location, or actuality of a situation according to some reference point in the conversation [13]. Two people in a conversation need to coordinate not only the content of what they say but also how that message is delivered. For example, if Mary wants to get Clara to join her for lunch at a particular restaurant, she cannot simply email her with - "Let's meet at Sol at noon." After sending her invitation, Mary awaits evidence that Clara has received, understood, and committed to the lunch invitation. Meanwhile, Clara does not find a taxi as soon as she gets Mary's message but sends an email response. To be at common ground, if Mary and Clara need to further clarify or modify their plans, they may exchange additional emails before they consider their plan to meet at the restaurant.
With respect to dashboards, while onboarding (as discussed above) can assist in moving users through the grounding stage, there are other actions designers can take to build shared understanding of expectations and terms. The first might be the direct solicitation of priors and predictions from users. Hullman & Gelman [38] suggest that existing (ungrounded) "model free" visualizations are inherently limited for visual analytics, and point to examples where either asking the user to predict data [42] or, alternatively, showing users the predictions of others [43], can not only result in improved recall and retention of information, but also avoid drawing spurious conclusions. Shi et al. [76] similarly point to cases where data stories solicit information from users in order to ensure that the resulting information is **relevant, interesting, or contextualized** for users, and Lin et al. [50] call for incorporating users' "data hunches" into charts.
### Turn-taking
Turn-taking is a fundamental aspect of dialogue and occurs in a conversation when one person listens while the other person speaks [65]. As the conversation progresses, the listener and speaker roles are exchanged back and forth. Participants need to coordinate who is currently speaking and when the next person can start to speak. Humans are very good at this coordination and typically achieve fluent turn-taking with very small gaps and little overlap. A conversationalist who does not allow others a turn, or speaks over others, may be considered rude. An example of turn-taking in conversation is:
speaker a: "Lovey weather this week." speaker b: "Isn't it? I hope it's nice on the weekend." speaker a: "Me too. I have plans to go for a hike." speaker b: "That's fun! Which trail are you going on?"
As dashboards move from static displays to more complex and interactive forms [66], there are an increasing number of examples of **bi-directional communication** between the user and a visualization system. Examples of this communication can be as simple as providing tooltips or annotations on a user's request, supporting filtering or aggregation options, to more complex forms such as soliciting personal information from the user [76] or even incorporating "analytical chatbots" [73] that respond to natural language queries. Failure to allow the user to perform follow-up actions (as in Figure 1) can result in frustrating analytical experiences where a user has a question or concern that the dashboard is not equipped to address. For example, [86] a sales dashboard that only allows a user to see a snapshot of the data at a single point in time can be frustrating if the user's next step is to try to understand the data in the context of the last month or year.
Dashboards systems can also take conversational initiative, and there are potential analytical benefits for such "proactive design." [88] An example is the Frontier system [48], where the user can select recommended views based on a set of analytical intents. Other forms of bidirectional interaction can be more subtle: for instance, the autocompletion metaphor in visual analytics [71] represents an attempt to match a user's utterance or intended action with the system's understanding of valid or popular alternatives. One consideration with turn-taking in dashboards is to allow bi-directional communication and useful division of labor between the person and the system, while respecting the user's agency and autonomy [35]. Systems that steal focus, override user choices, and lead to **dead-ends** in the communication flow, are "impolite" [96] and produce friction and user enmity.
### Repair and Refinement
Conversational repair and refinement is the process conversation participants use to detect and resolve problems of speaking, hearing, and understanding [68]. If dialogue is to proceed smoothly, it is vital that there are opportunities for checking to understand and provide clarification when misunderstanding does occur. Everyday interaction is full of such checks and repairs, though these may be so automatic as to be almost seamless, rarely disturbing the flow of the interaction. In human conversation, there are continual implicit acknowledgments that communication is proceeding smoothly. The speaker monitors the participants in the conversation in different ways to see if they understand (e.g., using checking moves such as "Do you know what I mean?") and the other participants are often giving verbal acknowledgments to the speaker (e.g., "yes", "uh huh"). However, if the utterance is not understood, repair may be initiated. Through repair, participants display how they establish and maintain communication and mutual understanding during the turn-taking process.
Repair and refinement are both critical components of interactive dashboard design. NLIs for data provide a model for this sort of interaction, as natural language utterances (and the intents behind them) are often vague [72], under-specified [74], or misinterpreted by the natural language system. Some systems **afford follow-up conversations** for repairing or re-specifying intents. Perhaps more relevant to dashboard
design are systems like DataTone's [32] "ambiguity widgets" that explicitly afford the resolution of ambiguous queries. The inability to update a dashboard when information is stale, irrelevant, incorrect, or misaligned with the user's goal can lead to frustration, as in Figure 1.
Another way to support repair in analytic conversations with dashboards is to support fluid switching of tools and contexts if the existing dashboard is insufficient for a particular analytical task. Both Tory et al. [86] and Bartram et al. [6], in their interviews with "data workers": reveal a recurring need to move data between tools (for instance, into a spreadsheet tool for manual data cleaning or inspection, or into a presentation tool for curated storytelling), and frustration with existing dashboard software that makes this process difficult.
A last intriguing potential for repair in dashboard design is to **employ summaries or recommendations to prevent or ameliorate cognitive biases** on the part of the users [90, 92]. That is, a belief that the user is making a potential analytical error or oversight and intervening. For instance, Wall et al. [91] propose the incorporation of a user's interaction records to provide a summary report explaining whether they are interacting with biased subsamples of the whole dataset, or whether they have considered representative facets of the data.
### Close
Close is the process by which two partners end a conversation by offering and accepting each other's final bids to close the conversation. Politeness strategies can avoid miscommunication when terminating the conversation. Coppock [14] proposed several strategies used to close the conversation: positive comment, excuse, and the imperative (e.g., "it looks like our time is up"). A positive comment implies that the conversation is pleasant, but the other does not want to continue. Excuse expresses an intent to end the conversation by providing an alternative motivation (e.g., "I better get back to work"). The imperative strategy explicitly employs an imperative tone to end the conversation (e.g., "It was nice talking to you").
While the end of a specific analytical _session_ may be clear cut (say, navigating away from a website or closing a piece of software), a user's analytical _conversation_ does not end when they stop looking at a dashboard; the notion of a final close is more fraught. As users of dashboards are commonly impacted by _reliance_ on others [86] (either for data, stakeholder buy-in, or discussion of goals), there is often a step of sharing the insights gained from an analytical conversation with various levels of formality and practice [9]. **Providing useful summaries of information or insights in a dashboard**, and particularly summaries that can "travel" across different modalities, is, therefore, a critical (but often overlooked) component of dashboard design. Of particular interest to us is how summaries can concisely present not only the insights gained by the user over the course of an analytical conversation but also the supporting evidence for these insights (and the strength of this evidence).
Beyond post hoc summaries, we point to two potential examples of visualizations making good use of the _end_ of analytical conversations. The first involves systems where past users can provide important context for future users, as with Feng et al.'s [26] Hindsight system where the interaction history of other users can be used to suggest potential starting places for new users, or in Kim et al. [43] where other viewers' predictions can help situate one's own expectations of the relationships between data values. The second example embraces the multiplicity of potential methods and the potential fragility of conclusions, as in Dragicevic et al.'s [22] multiverse analysis reports, where the goal is to produce a report (with included conclusion and discussion sections) that is robust across a variety of different analytical choices or even natural data variability.
## 4 Iterative Development of Conversation Heuristics
We apply the notion of cooperative conversation and its maxims by examining the conversational properties that are specifically relevant to interactive dashboards, drawing from the following sources:
* **Natural language interfaces for visual analysis**: We explore how language pragmatics in the context of natural language interfaces can help support analytical conversation. A review of previous academic prior art and software systems that implement techniques for supporting analytical conversation in the context of NLIs for visual analysis [32, 37, 69, 62, 73, 81, 84] provided guidelines for informing the various heuristics for supporting the various conversational states when interacting with data.
* **Cooperative conversation behaviors in human-computer interfaces**: The design of such interfaces often draws inspiration from human-to-human conversation and mechanisms that facilitate the exchange of information between speaker and listener. There exists an expectation that the information shared is relevant and that intentions are clearly conveyed to support a cooperative conversation that is truthful, relevant, concise, and clear. A review of the various applications of Gricean Maxims and cooperative conversation guidelines in interactive interfaces and experiences between humans and computers, ranging from human-but interaction, chatbots, smart assistants, and embodied agents [23, 60, 10] helped define the various heuristics that satisfy the maxims.
* **Practioner examples of dashboard design**: Interactive design guidelines for authoring functionally useful interactive dashboards as described in practitioner literature [70, 44], provided examples to help inform the creation of the initial set of heuristics.
However, as indicated in Section 2, many of the guidelines from the
\begin{table}
\begin{tabular}{l|l} \hline \hline \multicolumn{1}{c|}{**Metric**} & **Metric** \\ \hline \(\text{ML}\) (The advanced supports specific new/output) & **Q12: There is no further potential information that (abortions or units).** & **Q13: There is no further potential information that (abortions or units).** & **Q14:** describes what the dashboard is about.** \\ \hline \(\text{ML}\) (The new thought be able to explore the data using the shadow) & **Q15: There can this the dashboard data sense as an (small) possible approach to the platform of the container (see, \\ \hline \(\text{ML}\) (The text in the user where there need to start) & **Q16:** Like our first choice they can start) & **Q17:** Like our first choice they can start, \\ \(\text{ML}\) (The charts in the dashboard input table that) & **Q18:** The actual purpose of what the wants in the dashboard (except recorded intended in the user). & **Q19:** Like our first choice they can start (see, \\ \(\text{MS}\) (The base a clear reading movie on the dashboard) & **Q18:** The combined communications a certain style or method for the user. & **Q19:** Like the dashboard input table, two (bottom, bottom, bottom).** & **Q19:** Like the dashboard input table, two (bottom, bottom, bottom).** & **Q19:** Like the dashboard input table, two (bottom, bottom, bottom).** & **Q19:** Like the dashboard input table and supports the target table, two (bottom, bottom, bottom).** & **Q19:** Like the dashboard input table to capture a new analytical task or starting a (small) possible approach to the platform of the container (see, \\ \(\text{MS}\) (The base a clear reading movie) & **Q19:** Like the dashboard input table that) & **Q19:** Like the dashboard input table the user with the next table. & **Q19:** Like the dashboard input table, two (bottom, bottom, bottom).** & **Q19:** Like the dashboard input table, two (bottom, bottom).** & **Q19:** Like the dashboard input table table, two (bottom, bottom).** & **Q19:** Like the dashboard input table, two (bottom, bottom).
visualization literature tend to focus on recommendations and best practices for layout, visual composition, data encodings, and chart types, as well as for natural language interfaces and systems. We instead focus on Grice's Cooperative Principle and its associated maxims as a way to identify themes to support analytical conversations in interactive dashboards. In particular, we apply the notion of _conversational implicature_ as a way to systematize the properties of interactive dashboards. Conversational implicature, as introduced by Grice, is an indirect or implicit act within a conversation, determined by the conversational context that supports the primary dialogue [16, 34]. Implicature serves a variety of conversation goals towards effective communication, supporting pragmatics, maintaining good social relations, and overall efficiency in conveying the intended message. To come up with an initial set of heuristics, the co-authors adapted guidelines and heuristics developed for natural language interfaces to interactive dashboards (e.g., "Does the dashboard freeze, crash, display errors, or otherwise unexpectedly interrupt the user?") and drew inspiration from example dashboards authored by visualization experts (e.g., "Is there a clear reading order and is it logical (e.g., top-down, bottom-up)?".
All the co-authors iteratively developed a set of heuristics, organized into themes, that support conversational implicature through _both_ the presentation and interaction of dashboards with a human. Each co-author picked one of three dashboard examples of their choice that they encountered recently - a Tabacian Public dashboard showing the best states to retire in the US [25], a COVID-19 Dashboard [1], and a Tableau World Indicators Business Dashboard [78] and independently reviewed the current heuristics to assess if they were relevant (including whether they were supported or violated) to the corresponding dashboard example. Subsequently, the co-authors collectively discussed and compared insights on what it meant for a dashboard to be cooperative.
We initially collected 95 potential heuristics. Note that we chose the term 'heuristic' defined as "serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods" and "relating to exploratory problem-solving techniques that utilize self-educating techniques to improve performance" [54] as a means to help guide a dashboard author. Through our experience in using the heuristics, we clustered them into related themes and iteratively reworded and clarified them to minimize unfamiliar jargon or other vague terms. This process resulted in 56 heuristics.
### Phase 1: Pilot Review
We tested the modified set of heuristics with two pilot participants. The instructions asked each participant to pick an interactive dashboard that they recently authored, run the heuristic checklist by the dashboard, and respond with detail about whether the dashboard supported the given heuristic or not. Lastly, they were asked to indicate if any of the heuristics were confusing to understand or apply. Based on feedback from this exercise, we updated the instructions to include an example along with a screenshot of a sample dashboard for a heuristic, refined and consolidated the heuristics further, resulting in 53 heuristics under 13 themes (analytical conversation support, multi-modal conversation support, use of semiotics, clarification of vague concepts, communication goal, summaries and takeaways, exposition, integrating text with visual information, composition and layout, visual scaffolding, level of detail, trust and transparency, register).
### Phase 2: Expert Feedback
We then conducted a self-reflection exercise with 16 expert visualization researchers and practitioners. Based on self-reporting, experts comprised six business intelligence analysts, five data visualization consultants, and five Ph.D. visualization students with at least three years of experience authoring visualizations and dashboards. One participant did not complete the exercise, leading to a total of 15 completed exercises. The goals of Phase 2 were to 1) understand how the heuristics are applied when critiguing the design and interaction of a dashboard and 2) get feedback about the clarity and usefulness of the heuristics.
#### 4.2.1 Expert Reflection Exercise
The user study was designed as a self-reflection exercise where participants were asked to evaluate each heuristic against a dashboard example that did not contain confidential or proprietary data. We asked them to include a link and a screenshot of the dashboard the picked with an explanation for their choice. We included a link to a spreadsheet of heuristics, and for each heuristic, the spreadsheet asked participants to first determine if the heuristic applied to their chosen dashboard and, if not, to explain the reason. They were also asked to rate the extent of the application or violation on a 5-point semantic differential scale ranging from "Strong violation" to "Strong application." The spreadsheet also requested participants to provide visual examples of applications and violations of the chosen dashboard for each heuristic wherever possible. To help the participants understand the expectations for the exercise, we provided an example response to one of the heuristics.
After the participants completed the heuristics spreadsheet, they were requested to answer a set of questions:
* Were the heuristics and/or themes useful? How? Which ones in particular? Explain in detail.
* Were any of the heuristics not helpful or confusing to you? If so, please elaborate.
* Did any of the heuristics make you think of dashboard design in a new way?
* Were there any heuristics that you thought were missing?
* What changes would you make to your dashboard based on this assessment? Please describe in detail.
* Do you plan on updating your dashboard in response to these heuristics? If so, would you be willing to send us an update?
We estimated the study would take approximately 45 minutes to complete. Participants were given three days to complete the study on their own time and were compensated with a $30 Amazon gift card. We recruited the expert participants (indicate by the notation \([E\#]\)) through a screening survey (included in supplementary material) posted on social media channels and distribution lists at a large software company. Participants were required to have experience (at least five years) designing or evaluating interactive dashboards using software like Tableau or PowerBI, notebook environments like Jupyter or Observable, or libraries like D3 or matplotlib. We also required participants to have a dashboard they were working on and that they were willing to share with us in some form (as a web link or a screenshot). We collected background information of the survey respondents that included a description of their current job role, years of experience designing dashboards, and a description of the topic and the audience of the interactive dashboard that they were designed for.
#### 4.2.2 Assessing the Utility of the Heuristics
To assess the utility and comprehensibility of the heuristics, we reviewed participant responses for the following scenarios:
* **Heuristics indicated as 'does not apply'**. Instances where participants indicated that the heuristic was not relevant to the dashboard they were evaluating.
* **Misinterpreted or hard to understand heuristics**. Instances where participants misinterpreted a heuristic for another or simply did not understand them.
* **Heuristics marked as'strong violation' / 'weak violation'**. Instances where participants indicated that their dashboard violated a given heuristic.
* **Heuristics marked as'strong application' / 'weak application'**. Instances where participants indicated that their dashboard satisfied a given heuristic.
* **Duplicate or similar heuristics**. Instances where participants marked two or more heuristics as either duplicates or very similar.
#### 4.2.3 Expert Responses
All co-authors inspected the 15 expert responses. The expert participants chose dashboards that they had authored for an audience that included either a client, a data visualization class, or sharing on Tableau Public. The themes of dashboards ranged from health monitoring, crime and violence, visual eye tracking analytics, sports, and finance. Figure 2 shows an example dashboard assessed by the heuristics. Here is an overall summary of how the heuristics were labeled:
* **Heuristics indicated as 'does not apply'**. 23 of the 53 heuristics were labeled as "does not apply" to their dashboards by at least one participant. For example, several participants marked the heuristic, _"Does iconography support or potentially replace repetitive text directives? If not, are there opportunities to do so?"_ to not applied to the dashboards they were assessing.
* **Misinterpreted heuristics**. 28 of the heuristics were marked as difficult to interpret by at least one participant. For example, participants reported having trouble understanding heuristics that were rather vague: _"Does the dashboard support open-ended data exploration? If not, why?"_ or contained jargon: _"Does the visualization disclose the provenance of the data?"_
* **Heuristics marked as'strong violation' / 'weak violation'**. On average, 11 out of 53 heuristics were marked as being either strongly violated or weakly violated (min: 0, max: 27). For example, for the heuristic, _"Are vague concepts clarified if they exist within the data? (e.g., tail or high-performing) If no, which vague concepts should be clarified?"_ was commonly marked as a'strong violation'. _ES_ commented, _"This dashboard is meant for the public but uses many difficult terms like 'Case trajectory' instead of "number of people with covid"_ and _'wastewater concentration.'_ We should put the text through a plain language _garder_ and improve the language for ease of understanding."_
* **Heuristics marked as'strong application' / 'weak application'**. On average, 39 out of 53 heuristics were marked as being either strongly applicable or weakly applicable (min: 21, max: 49). We hypothesize that given that the dashboards are authored by experts, a high number of heuristics were labeled as applicable to the dashboards. For instance, most participants (13 out of 15) stated that _"Is the dashboard interactive to support the user in completing a new analytical task or starting a new line of inquiry? Are there interactions that could be added to enhance the experience?"_ strongly applied to their dashboards. \(E\)11 marked, _"If there is interaction, does the dashboard update as expected?"_ as a 'weak application' and commented _"Filtering and hovering interactions update the story as expected. But the lack of instructions makes the user perceive filters as labels."_
* **Duplicate or similar heuristics**. Six sets of heuristics were marked as either being duplicates of one or more other heuristics or very similar. For example, under the theme, "Composition, layout, space, and sequencing", heuristics such as _"The layout, placement of charts, and the flow in the visualization should be easy to follow"_, _"There is a clear reading order within the dashboard and is it logical (e.g., top-down, bottom-up)"_, and _"The charts, text, and any other visuals are laid out in a way that is helpful for understanding the structure of the information being presented in the dashboard"_ were identified to be similar.
Generally, participants found the dashboard reflection exercise to be helpful. \(E\)14 said, _" The heuristics and themes are very helpful in understanding many of the considerations that need to be made while designing a dashboard such as (1) Multi-modal conversational support, (2) Integrating text with visual information for communication, and (3) Visual scaffolding for helping with conversation clarity."_ Participants also found that the reflection inspired them to consider dashboard design in new ways. \(E\)05 said, _" Multimodal interactivity and NLI provide a new way of thinking. It would be exciting to integrate his in an eye-tracking analysis tool for improving the sense-making loop."_
After reviewing the experts' reflections, we clarified heuristics that were unclear and ambiguous as well as consolidated redundant ones, resulting in a total of 46 heuristics. For example, we removed redundant heuristics such as _"The quantitative units are clearly defined or specified."_ as we already included the heuristic, _"Concepts or metrics are either easily understandable or clearly defined in the dashboard."_
Fig. 2: Example of four heuristics used to evaluate a visual eye tracking dashboard by an expert. The table indicates the heuristic, its violation or applicability to the dashboard, and the states of the dashboard before and after the interaction, respectively.
and added heuristics suggested by the participants, such as _"Is there adequate evidence that the dashboard is truthful? Is the dashboard able to convince the key takeaway through credibility and trustworthiness?"_
### Phase 3: Author Reflections and Final Iterations
We further reflected on the set of heuristics, given that the goal was to evaluate them with a student population to assess how the students would apply and critique the conversational nature of dashboards. We reformulated the remaining heuristics to follow a clear and consistent format and to clarify issues identified by the expert evaluators. Specifically, we further iterated on the heuristics based on the criteria:
* Rewarded heuristics posed as questions to be imperative guidelines of what the dashboard ought to support. For example, heuristic _"Is the text in the dashboard legible, easy to read, and useful? Are the different parts of the chart (e.g., titles, captions, or nartration) well-described?"_ was rephrased as _"There are text and visual elements to frame or guide salient information."_
* Ensured that the heuristics were understandable without technical jargon where heuristics such as _"Do starting points for interactivity align with user experience and expectations?"_ were reworded as _"The dashboard is interactive and supports the user in completing a new analytical task or starting a new line of inquiry."_
* Made sure that each heuristic could be clearly validated for whether it was applied or violated in an interactive dashboard. To that end, any conjunctions, if present, were removed to prevent the inclusion of multiple guidelines within a single heuristic.
Finally, after winnowing down the heuristics to 39, we found that they could be reorganized thematically into the 5 basic conversational states: initiation, grounding, turn-taking, repair & refinement, and close. While the final set of heuristics provides an initial framework for assessing cooperative conversation in interactive dashboards, we do not guarantee completeness; rather, we sought to assess their utility and identify opportunities to further improve and refine them. The next section describes how the heuristics were applied by students in a visualization education setting. The final table of conversational dashboard heuristics is shown in Table I, and its various iterations leading to the final set are included in the supplementary material.
## 5 Use of Heuristics in Education Practice
To evaluate the utility of the heuristics, we provided two opt-in homework exercises with visualization learners in a post-graduate data visualization class at a university. Part A was a heuristics reflection exercise on pre-authored interactive dashboards, while Part B was an exercise to apply the heuristics to improve the conversational nature of an existing dashboard. Both exercises were not graded to mitigate any biases when students provided feedback. The university review board granted formal approval to conduct the exercises. We include class exercise material and evaluations as supplementary material.
### Part A: Heuristics Reflection Exercise
The goals of the heuristics reflection exercise were to 1) assess the heuristics' value in supporting visualization learners and 2) gain feedback on the heuristics for iterative improvement. Since our development phases involved visualization experts, we focused on learners to ensure the heuristics were understandable by a less experienced population.
The 52 participants were master's students (with backgrounds in Computer Science or Engineering). We use the notation [P#] when referring to participants in this heuristics evaluation. We refer to particular heuristics from our final list as [H#].
The homework exercise was conducted similarly to the reflection exercise described in section 4.2.1 but with the updated heuristics table (Table I), organized by the five conversational states). The exercise was introduced during class by the class instructor and then completed as a homework assignment over a week. To ensure that students remained engaged when applying the heuristics to evaluate dashboards, we provided a list of 18 dashboards and asked students to describe which dashboard they chose and why. Four dashboards were not picked from the list, with the highest number of students (six) choosing a renewable energy consumption dashboard. The complete list of dashboards and the frequency of choices is included in the supplementary material. The actual reflection exercise was the same as in section 4.2.1; it involved assigning the dashboard a rating ('strong application,' 'weak application,' 'weak violation,' or'strong violation') for each heuristic with written commentary and screenshots to justify the ratings and then answering the reflection questions. The students additionally gave an in-class presentation of their findings from the homework exercise.
We conducted a thematic analysis of the heuristic reflection responses and survey answers. We looked for feedback on the heuristics, interesting examples of how the heuristics were applied to the dashboards, and insights that were revealed. We also examined frequency data on how dashboards were ranked across the different heuristics.
#### 5.1.1 Rating Frequencies
Relative frequencies of heuristic applications and violations, as rated by participants for their chosen dashboard, are summarized in Table II. Because the conversational state categories contain different numbers of heuristics, we used a normalized metric rather than raw counts. To compute these scores, we first combined strong and weak application ratings, and similarly combined strong and weak violation ratings. We averaged the number of ratings across participants and normalized the result by the number of heuristics in each state on a \(0-100\) scale. Note that these are not exactly percentages because a participant could identify multiple applications and/or violations of a single heuristic.
Table II shows that the rate of violations increased for later conversation phases, and the rate of applications decreased. This observation was consistent across participants. It suggests that today's dashboards offer reasonable support for initiation and grounding, but are progressively less supportive as human-data conversations get into turn-taking, repair, and close activities. For instance, turn-taking repeatedly showed up as a challenge, where dashboard inflexibility or awkward interactions made it difficult for users to complete analytical workflows.
#### 5.1.2 Use of Heuristics Across Conversational States
Next, we examine themes and interesting examples of how the heuristics were used across the conversational states.
**Initiation**. Heuristics in the initiation state (\(H1-H14\)) were often marked as either strong or weak application (normalized application frequency of 74 in Table II). Participants noted that dashboards initiated the conversation by including instructions on how to use the dashboard, making it easier to explore the data. The reading order, encodings, and formatting conventions used were often easy to understand and follow. _P13_ stated _"the color combination used by the chart maker keeps the reader attentive and focuses the attention at the right regions."_ However, dashboards did have violations in revealing the provenance of their data. _P8_ stated, _"strong violation as the dataset source hyperlink they tried to give doesn't work and the data preparation is not mentioned."_
**Grounding**. Similar to the initiation state, heuristics in this conversation state (\(H15-H24\)) were often marked as either strong or weak application (normalized application score of 76). Many of the dashboards (38 out of 52) were described as having a clear presentation of context and level of detail. P38 commented, _"Yes ordering is logical, It's sorted in highest to lowest expense. First row shows line chart and next row shows details of breakdown."_
\begin{table}
\end{table}
Table II: Part A heatmap showing the frequency of dashboards applying or violating the heuristics, grouped by conversational state. Frequencies are averaged across participants and normalized by the number of heuristics per conversational state on a 0 to 100 scale.
**Turn-taking**. For this conversational state, there was a lower frequency of application ratings (41) and a higher frequency of violations (68), indicating the limited interactivity (\(H25\)) that the dashboards provided. _P20_ stated, _"The dashboard should update its view based on what is selected, highlighted, or filtered by the user. As there are no filters and update options available in the dashboard."_ Participants also noticed some friction when interacting with the dashboard (\(H26\)). _P3_: _"The process of zooming in is clundy and disrupts continuity."_ and guiding to the next step. _P3_: _"Very little visual warning/cueing to accompany changes to graphs, particularly in the side panel. Some changes are initially off screen and have to be scrolled to."_
**Repair & Refinement**. Participants (46 out of the 52 students) often found that the dashboards violated the functional and navigational deadends (\(H32\) and \(H33\)) (63 violations per 100 cases). _P28_ commented, _"The dashboard doesn't provide interactivity at all. It has no filters or searches. Just a basic static visual. Just looking at the graph doesn't make any sense unless we hover over it."_ Further multi-modal support (\(H34\)) was violated in many cases as the interactions were limited to selecting filters in the drop-down, for example. _P4_ said, _"There are no filters. Filters could have helped a lot when analyzing certain time periods but are given as only two values between year ranges."_
**Close**. This category had the highest frequency of violations (95 violations per 100 cases). For several dashboards, it was not apparent what the key takeaway was to close the conversation (\(H36-H38\)). _P36_ commented, _"Weak Violation. Just by looking at this dashboard, one cannot conclude something; the user has to gather data from each hexagon, then analyse it and only then something can be concluded."_ Other violations concerned around trust (\(H39\)). P5 said, _"Strong violation: though there's no reason the believe the dashboard is lying, without key context a user with no additional information could easily come away with a confused message, or even the wrong idea entirely."_ Similarly _P45_ said, _"Weak Violation. The source of the data is nowhere mentioned, which would have increased the credibility of the dashboard."_
In summary, we found that while dashboards tend to be effective at initiation and grounding of the conversation, they struggle with other aspects of conversation that include turn-taking, repair & refinement.
### Part B: Update or Create Dashboards Using Heuristics
The 52 students self-organized into groups of three or four, forming a total of 15 groups where they applied the heuristics to update an existing dashboard from Tableau Public (4 out of 15 groups) or create new dashboards from a Kaggle dataset [40] (11 out of 15 groups). Students completed the exercise over a week and rated the dashboard with the same set of heuristics (Table 1), providing commentary and screenshots. For exercises involving updating an existing dashboard, students rated the dashboard before and after the update.
We conducted a thematic analysis of the heuristic reflection responses. We looked for feedback on the heuristics, interesting examples of how the heuristics were applied to the dashboards, and insights that were revealed. We also examined frequency data on how dashboards were ranked across the different heuristics.
#### 5.2.1 Application of Heuristics Across Conversational States
Similar to Part A (Section 5.1.2), we computed frequencies of heuristic applications and violations, as reported by students (Table 3). In cases where students modified an existing dashboard, students reported an overall increase in the rate of applications and a decrease in the rate of violations of heuristics across all conversation states. In particular, we saw a higher rate of decrease in violations for 'turn taking,' repair & refinement,' and 'close'; states that had fewer application rates in general during Part A's exercise. However, note that students performed these self-evaluations, which could contribute to a higher rate of applications. While the class instructor reviewed the self-reflection ratings, future work should consider an external reviewer to validate these ratings. Figure 3 shows an example of an original dashboard on hospital admittances (left) with a corresponding modified version (right). Updates include supporting better turn-taking by adding interactivity and multi-view coordination, along with a search bar to navigate to a specific medical department specialty. Iconography was added to better convey the meaning of the information being presented to the user, along with additional descriptive text to ground the conversation.
We found that having done Part A, students were familiar with the heuristics and focused specifically on addressing heuristics for turn-taking, repair & refinement, and close. _P1_8 stated, _"I was more observant of how the dashboard behaved when I interacted with it. I
Figure 3: Dashboard used in Part B classroom exercise. Left: Original dashboard. Right: Modified dashboard. Updated dashboard annotated in yellow based on heuristics from the conversational states, indicated by \(H\#\): (a) Initiation: The text in the dashboard is updated to provide more context (\(H6\)). (b) Grounding: Dashboard contains iconography to add meaning to the data being presented (\(H17\)). (c) Turn-taking: The dashboard is updated to add filtering across views (\(H28\)). (d) Close: The “Unknown” category was removed from the original dashboard to further clarify the takeaway of the dashboard (\(H37\)). Note that students made aesthetic changes (not always an improvement), such as modifying the dashboard’s background in addition to applying the heuristics. (Permission granted to use original and modified dashboards).
\begin{table}
\end{table}
Table 3: Part B heatmaps showing the frequency of applying or violating the conversational heuristics, grouped by conversational state. Frequencies are averaged across participants and normalized by the number of heuristics per state on a 0 to 100 scale.
focused on making sure there were no dead-ends when I clicked on the widgets and all the views updated appropriately._" In both the updated and newly created dashboards, we observed a greater prevalence of text to help ground contextual information alongside the visualizations (applying heuristics, Initiation - H6, Grounding - H16, H21, H23, Turntaking - H29, and Close - H36). \(P\)4 remarked, _"For each category, the heuristics reminded me that text plays a vital role with the charts for communicating the key ideas._" Some students reported that they sacrificed visual style for clearer communication with the user. \(P\)37 stated, _"Although visual style get [sic] little disturbed in the color part, it looks necessary to make dashboard more easy to understand."_ Future work should further explore how these heuristics, alongside visual design guidelines can support the dashboard authoring process.
### _Feedback on the Heuristics_
Now, we summarize the various qualitative themes of feedback on the heuristics across both exercises.
**Heuristics were helpful and understandable.** Participants found the heuristics to be useful for understanding the structure and flow of dashboards as part of an analytical conversation, as well as for authoring new ones. \(P\)8 commented, _"The dashboard communicates a certain style or mood to the user, and there are clear strategies employed in the dashboard to mark charts or marks more prominently to encourage a user to interact with them, as well as de-emphasize items not relevant to the conversation.". \(P\)4 stated, _"all heuristics were explained clearly, and I did not encounter any confusion while completing the form."_
**Unique and unexpected heuristics.** Others found some heuristics to be rather unique and unexpected when considering dashboard design. For example, participants found heuristics, \(H\)11_, \(H\)17 - 19 on visual symbols and iconography to be helpful - _"The use of semiotics for symbolic communication as well as the exposition sections stood out to me in particular [P4]. \(P\)15 found \(H\)26 to be useful when thinking of evaluating friction in dashboards: _"I was not able to find or look at all the cities at once, and it was difficult to click on the small bubbles."_\(P\)13 was intrigued by the heuristic on navigational dead-ends (_H_33) and said, _"This made me think to make visualization work in every case, whenever the user selects or searches anything on the dashboard, to navigate easily."_\(P\)23 found \(H\)5 concerning logical reading order to be insightful - _"Before this assignment, I never thought that the placement of charts should have a logical sequence. It makes perfect sense, and I will apply this in my future dashboards."_
**Confusing and missing heuristics.** Participants found the heuristics on bias (_H_13) and mood (_H_24) to be vague and not very actionable. \(P\)11 stated, _"This [the bias] heuristic was confusing as I did not understand the biases in the dashboard very well. Mood communication in terms of the dashboard was somewhat confusing to me."_ There were suggestions for considering adding animation as part of the analytical conversation (_P_24, \(P\)27, \(P\)36) and further helping users recognize, diagnose, and recover from errors when interacting with the dashboards (_P_47).
## 5 Discussion and Future Work
While existing dashboard guidelines capture visual design issues such as legibility and complexity, our development and evaluation of heuristics from the lens of analytical conversations suggest that there are ways that dashboard design can succeed (or fail), which are not captured by existing recommendations or pedagogy, and so are often overlooked.
**Dashboards struggle with turn-taking, repair & refinement.** Participants pointed out many violations of heuristics in the turn-taking and repair & refinement phases, suggesting that today's dashboards may be weak in these aspects. We strongly encourage future work that makes dashboards more flexible, cooperative conversational partners. Future dashboards could enable users to more easily pivot between analytical goals (e.g., via flexible construction so the end user can change dashboard metrics, field ordering, chart type, etc.) and could employ predictive analytics to anticipate a user's upcoming information needs.
**Interpreting heuristics for guidance and mitigation strategies.** By their nature, heuristics offer guidance rather than prescriptive solutions. They should be considered in the context of the designer's expert knowledge of the domain, design goals, and audience. Heuristics may, at times, contradict each other or suggest design directions that are counter to specific communication goals or domain conventions. For example, in Sarikaya et al.'s [66] framework of dashboard types, dashboards for learning may need greater emphasis on grounding (e.g., contextual information) than dashboards for ongoing awareness of well-understood metrics. We envision that designers will use the heuristics to inspire ideas and identify potential gaps and flaws while thoughtfully discarding less suitable suggestions. Utilizing the heuristics to provide in-situ mitigation strategies in dashboard authoring tools is an area of future research. For example, tools could flag warnings if the interactions have errors or there are no graceful fallbacks for preventing functional dead-ends. Other guidelines can support authors with progressive disclosure of content through interaction and templates for adding text to prevent cognitive overload during conversational initiation.
**Developing heuristics for conversations around data.** We encourage the revision and extension of the heuristics themselves, as academics and practitioners use and adopt them. For example, we introduced heuristics to guide dashboard design and evaluation, with the lens of dashboards as a medium to enable conversations _with_ data. Dashboards also support the important role of human-human communication _around_ data [86], including discussing and circulating the data within an organization. A future extension to the heuristics could focus on dashboard characteristics to support circulation or persuasion.
**Assessing utility of heuristics during dashboard design.** Our evaluation focuses on applying heuristics to critique or improve existing dashboards. An acid test of heuristics' utility is their ability to productively shape the design process: we would ideally see how mindfulness of our heuristics impacts the final design of dashboards or the iterative process of choosing design alternatives. While we do note examples of participants saying that they would, as per \(P\)25, _"apply this [heuristic] in my future dashboards,"_ we leave this longitudinal assessment to future work. Additional future work is the connection of our heuristics to other forms of evaluation. For instance, does a (re-) designed cooperative dashboard result in benefits to user performance or satisfaction?
**Extending the cooperative principles to other analytical workflows.** We believe that _cooperative dashboards_ represent a new perspective on visual analytics and potentially an emerging genre of visualization design. While it has been long understood that analytics is a multi-stage process (e.g., the Pirolli/Card sensemaking loop [61]), there has been less work on visual analytics tools that operate _across_ stages. We consider dashboards to be useful testbeds for learning about the structure of analytical conversations, and for testing novel designs to support users. Cooperative dashboards allow a wide range of potential design or technique work for topics like mixed-initiative systems, NLIs, and rhetoric. Beyond dashboards, we also wish to apply these cooperative principles to other related forms (such as data stories) and media (such as designing visualizations for mobile or wearable devices).
## 6 Conclusion
In this paper, we explore the design of interactive dashboards as artifacts that support analytical conversations with their users. In particular, we explore how the role of language pragmatics and cooperative conversation can support data exploration, interaction, and reasoning. Inspired by existing models of conversational implicature and its states, we proposed and evaluated 39 heuristics for helping guide the design of analytical conversation with interactive dashboards. These heuristics were iteratively validated with 16 visualization practitioners and subsequently evaluated by 52 students to assess how useful they are for effectively authoring dashboards. Through the evaluation of these heuristics, we found that while dashboards tend to be effective at initiation and grounding of the conversation with the user, they struggle with other aspects of conversation that include turn-taking, repair & refinement, and close. We hope that this work inspires the broader research and practitioner communities to explore new design and interaction paradigms for authoring more cooperative dashboard conversations.
## Acknowledgments
We thank the visualization researchers, practitioners, and students of Jio Institute, India, for their participation and feedback that helped inform the utility of this work. This research is also supported by NSF Award #1900991 and The Roux Family Foundation.
|
2306.12802 | Otter-Knowledge: benchmarks of multimodal knowledge graph representation
learning from different sources for drug discovery | Recent research on predicting the binding affinity between drug molecules and
proteins use representations learned, through unsupervised learning techniques,
from large databases of molecule SMILES and protein sequences. While these
representations have significantly enhanced the predictions, they are usually
based on a limited set of modalities, and they do not exploit available
knowledge about existing relations among molecules and proteins. In this study,
we demonstrate that by incorporating knowledge graphs from diverse sources and
modalities into the sequences or SMILES representation, we can further enrich
the representation and achieve state-of-the-art results for drug-target binding
affinity prediction in the established Therapeutic Data Commons (TDC)
benchmarks. We release a set of multimodal knowledge graphs, integrating data
from seven public data sources, and containing over 30 million triples. Our
intention is to foster additional research to explore how multimodal knowledge
enhanced protein/molecule embeddings can improve prediction tasks, including
prediction of binding affinity. We also release some pretrained models learned
from our multimodal knowledge graphs, along with source code for running
standard benchmark tasks for prediction of biding affinity. | Hoang Thanh Lam, Marco Luca Sbodio, Marcos Martínez Galindo, Mykhaylo Zayats, Raúl Fernández-Díaz, Víctor Valls, Gabriele Picco, Cesar Berrospi Ramis, Vanessa López | 2023-06-22T11:01:41Z | http://arxiv.org/abs/2306.12802v3 | Otter-Knowledge: benchmarks of multimodal knowledge graph representation learning from different sources for drug discovery
###### Abstract
Recent research in representation learning utilizes large databases of proteins or molecules to acquire knowledge of drug and protein structures through unsupervised learning techniques. These pre-trained representations have proven to significantly enhance the accuracy of subsequent tasks, such as predicting the affinity between drugs and target proteins. In this study, we demonstrate that by incorporating knowledge graphs from diverse sources and modalities into the sequences or SMILES representation, we can further enrich the representation and achieve state-of-the-art results on established benchmark datasets. We provide preprocessed and integrated data obtained from 7 public sources, which encompass over 30M triples. Additionally, we make available the pre-trained models based on this data, along with the reported outcomes of their performance on three widely-used benchmark datasets for drug-target binding affinity prediction found in the Therapeutic Data Commons (TDC) benchmarks. Additionally, we make the source code for training models on benchmark datasets publicly available. Our objective in releasing these pre-trained models, accompanied by clean data for model pretraining and benchmark results, is to encourage research in knowledge-enhanced representation learning.
## 1 Introduction
Developing a concise representation of proteins and small molecules is a crucial task in AI-based drug discovery. Recent studies [22, 26] have focused on utilizing large databases of protein sequences or molecules for self-supervised representation learning. These representations are then fine-tuned using limited labeled data for tasks like predicting the binding affinity between drugs and targets. In the field of protein representation learning, [38] and [39] have demonstrated that enhancing protein representations with additional information from knowledge graphs, such as comprehensive textual data from the gene ontology [1], can enhance the performance of pre-trained representations on various protein properties and protein-protein interaction tasks.
While early results concerning knowledge-enriched representations for proteins show promise, there is a notable absence of openly available, carefully curated, integrated, and readily usable datasets in the research community for studying pre-training methods. This scarcity motivates us to preprocess datasets from diverse knowledge sources, incorporating abundant factual information and modalities, and make the data available alongside the pre-trained models, along with the prediction accuracy of downstream models. Our primary objective is to establish foundational datasets for research on multimodal knowledge-enhanced representation learning and evaluate the outcomes against established benchmarks for downstream tasks. In addition to presenting new state-of-the-art findings on standard benchmark datasets, we offer a comprehensive discussion that shares the insights gained during the creation of pretrained models, as well as the research challenges that must be overcome to develop knowledge-enhanced foundation models for therapeutic
sciences. The preprocessed datasets, pretrained models and the code for benchmark models training are open-sourced and available in our project github repository1.
Footnote 1: [https://github.com/IBM/otter-knowledge](https://github.com/IBM/otter-knowledge)
## 2 Multimodal knowledge representation learning
The diagram in Figure 1 illustrates the overall process of our system named _Otter-Knowledge_. This process involves constructing multimodal knowledge graphs from diverse sources, generating initial embeddings for each modality using pretrained models available in the model zoo, and subsequently improving the embeddings by incorporating information from the knowledge graphs through the utilization of graph neural networks (GNN).
### Multimodal Knowledge graph construction
A Multimodal Knowledge Graph (MKG) is a directed labeled graph where labels have well-defined meanings, and each node has a modality, a particular mode that qualifies its type (text, image, protein, molecule, etc.). We consider two node subsets: _entity nodes_ (or entities), which corresponds to concepts in the knowledge graph (for example protein, or molecule), and _attribute nodes_ (or attributes), which represent qualifying attributes of an entity (for example the mass of a molecule, or the description of a protein). We refer to an edge that connect an entity to an attribute as _data property_, and an edge that connect two entities as _object property_. Each node in the graph has a unique identifier, and a unique modality (specified as a string).
We developed a framework for automating the construction of a multimodal knowledge graph by extracting and fusing data from a variety of sources, including text delimited files, JSON, and proprietary data sources [32]. The framework takes as input a schema file (specified in JSON), which declaratively describe how to build the desired graph from a set of data sources.
The framework that builds the MKG ensures that each triple is unique, and it automatically merges entities having the same unique identifier, but whose data is extracted from different data sources. It is also possible to use the special relation sameAs 2 to indicate that two entities having different unique identifiers are to be considered as the same entity. The sameAs relation is useful when creating a MKG from multiple partially overlapping data sources; when the graph is built. Additionally, it is possible to build an MKG incrementally, by merging two or more graphs built using different schemas; the merge operation automatically fuses entities having the same unique identifier or the same value of a distinctive attribute (for example the same sequence for two proteins).
Footnote 2: We borrow the semantic of owl:sameAs - see [https://www.w3.org/TR/owl-ref/#sameAs-def](https://www.w3.org/TR/owl-ref/#sameAs-def)
The framework builds the graph in memory, but can also provide support for building the graph using a database on disk; the graph triples can also be serialised using GML 3 or any RDF 4 serialization formats. Finally, the framework provides scalable functionalities for parallel and GPU-based computation of the initial embedding vectors for the nodes; such embeddings are computed based on the modality of each node.
Figure 1: _Otter-Knowledge_ workflow.
### Computing initial embeddings
As explained before, the MKG contains nodes representing entities and nodes representing attributes of those entities. In the MKG, each node has a modality assigned, e.g: entity nodes can have a modality _Protein_ or _Drug_, nodes containing text could have a modality _text_. We assign a model for each one of the modalities in our graph, as specified by the user in the _schema_. The models, which we refer to as _handlers_, are capable of preprocessing the values in the nodes and computing their initial embeddings. Our framework allows to easily retrieve all the nodes in the graph with the same modality to efficiently compute the initial embeddings with the assigned _handler_, facilitating parallelization, GPU utilization, batching, and avoiding the need to load different models in memory simultaneously.
Some of these modalities do not have a _handler_ assigned, like _Protein_, and therefore, no initial embedding is computed for them. For each modality, only one _handler_ is being used, although it is possible to change the _handler_ assigned. For instance, for SMILES it is possible to use _morgan-fingerprint_ or _MolFormer_. These are the _handlers_ that we have used for computing the initial embeddings of the graph:
* _morgan-fingerprint_ We use the Morgan fingerprint in RDKit5 for processing the SMILES. Specifically, we use _GetMorganFingerprintAsBitVect_. We use by default a shape (also known as nBits or size) of 2048, and a radius of 2. If RDKit is not able to get the Morgan fingerprint of the SMILES we return an embedding of the same shape full of zeros. Footnote 5: [https://www.rdkit.org/docs/index.html](https://www.rdkit.org/docs/index.html)
* _MolFormer_[27] is a large-scale chemical language model designed with the intention of learning a model trained on small molecules which are represented as SMILES strings. _MolFormer_ leverages Masked Language Modeling and employs a linear attention Transformer combined with rotary embeddings.
* _protein-sequence-mean_ For the protein sequences, we use \(esm1b\_t33\_650M\_UR50S\)[26]. We use the 33rd layer as representation layer, and compute the mean for the contacts. For the \(batch\_converter\) we use \(truncation\_seq\_length=1022\).
* _text_ For the textual values, we use the'sentence-transformers/paraphrase-albert-small-v2' [24] model available in the Huggingface Hub6. Footnote 6: [https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2](https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2)
* _number_ In the case of numbers, we do not use any model to get initial embeddings. We convert the numerical value to a torch tensor and use it as embedding.
### Pretraining with inductive R-GNN
GNN architectureTo improve representations of MKG entities we assimilate initial embeddings with the information about the MKG structure, i.e., connection patterns between graph nodes. To this end we train a Graph Neural Network (GNN) [40] that propagates initial embeddings through a set of layers that upgrade input embedding according to the node neighbours. The architecture of GNN consists of two main blocks: encoder and decoder. For encoder we first define a projection layer which consists of a set of linear transformations for each node modality and projects nodes into common dimensionality, then we apply several multi-relational graph convolutional layers (R-GCN) [29] which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type. For decoder we consider link prediction task, which consists of a scoring function that maps each triple of source and target nodes and the corresponding edge and maps that to a scalar number defined over interval \([0;1]\). Decoder and scoring function, and details about the hyperparameters are reported in the Appendix.
Learning objectiveFor link prediction, we consider three choices of scoring functions: DistMult, TransE and a Binary Classifier that are commonly used in the literature. The outcomes of scoring of each triple are then compared against actual labels using negative log likelihood loss function. Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
Negative samplingWe note that to train GNN with link prediction we need to provide network with both positive and negative examples. While positive examples come from the MKG itself, negative triples are generated synthetically. For this for each relation type we extract a set of admissible source and target nodes and then we randomly sample source and target from the corresponding admissible sets. We use an equal ratio between positive and negative links in the pretraining.
Scaling the GNN trainingDue to the integration of data from various sources, the size of the integrated data can become quite large. For instance, the combination of Uniprot, ChemBL, and BindingDB necessitates over 200GB of CPU memory for training. However, training a graph neural network (GNN) on such a massive graph using limited GPU memory is highly inefficient. To address this, we employ a graph auto-scaling approach (GAS) described in the reference [8]. This method divides the graph into smaller partitions using Metis7. GAS performs training on each partition separately, so it is able scale to arbitrarily large graphs. To avoid information loss due to the connection between partitions, it keeps node embeddings of previous training step in CPU memory thus saving GPU memory during training and uses this historical embeddings to update the nodes inside a partition. Please refer to Appendix for details about the hyper-parameter settings of GAS.
Footnote 7: [https://github.com/KarypisLab/METIS](https://github.com/KarypisLab/METIS)
Information flow control and noisy link predictionOne crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks. An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
## 3 Pretraining datasets collections
Table 1 summarises the pretrained datasets. In this subsection, we discuss data preprocessing methods. For details about datasets and schema, readers can refer to the appendices and the github repository.
Uniprot[5] comprises of 573,227 proteins from SwissProt, which is the subset of manually curated entries within UniProt, including attributes with different modalities like the sequence (567,483 of them), full name, organism, protein family, description of its function, catalysts activity, pathways and its length. The number of edges are 38,665 of type _target_of_ from Uniprot ids to both ChEMBL and Drugbank ids, and 196,133 interactants between Uniprot protein ids.
BindingDB[16] consists of 2,656,221 data points, involving 1.2 million compounds and 9,000 targets. Instead of utilizing the affinity score, we generate a triple for each combination of drugs and proteins. In order to prevent any data leakage, we eliminate overlapping triples with the TDC DTI dataset. As a result, the dataset concludes with a total of 2,232,392 triples.
ChEMBL[9] comprises of drug-like bioactive molecules, 10,261 ChEMBL ids with their corresponding SMILES were downloaded from OpenTargets [18], from which 7,610 have a _sameAs_ link to drugbank id molecules.
Drugbank[13] comprises of detailed chemical data on 9,749 drugs (such as SMILES, description, indication, mechanism of action, affected organism, average mass, toxicity, calculated and experimental properties, absorption), classification, drug pathways and 1,301,422 drug interactions
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Datasets** & **\# triples** & **Entities** & **Modalities** & **Data license** & **Released** \\ \hline UBC & 6,207,654 & Pro teins/Drugs & sequences, SMILES & Open & Yes \\ & & & text, number, category & & \\ PrimeKG & 12,757,257 &
\begin{tabular}{l} Proteins/Drugs/ \\ Diseases \\ \end{tabular} & sequences, SMILES & & Open & Yes \\ & & & & text & Open & Yes \\ & & & & sequences, SMILES & Open & Yes \\ & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the KGs for pretraining models with the number of triples used for GNN training. More details on the modalities, entities and data properties are discussed in the Appendices.
DUDe[17] comprises a collection of 22,886 active compounds and their corresponding affinities towards 102 targets. For our study, we utilized a preprocessed version of the DUDe [31], which includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
PrimeKG(the Precision Medicine Knowledge Graph) [4] integrates 20 biomedical resources, it describes 17,080 diseases with 4 million relationships. PrimeKG includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The MKG that we built from PrimeKG contains 13 modalities, 12,757,300 edges (154,130 data properties, and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
Stitch(Search Tool for Interacting Chemicals) [34] is a database of known and predicted interactions between chemicals represented by SMILES strings and proteins whose sequences are taken from STRING database [33]. Those interactions are obtained from computational prediction, from knowledge transfer between organisms, and from interactions aggregated from other (primary) databases. For the MKG curation we filtered only the interaction with highest confidence, i.e., the one which is higher 0.9. This resulted into 10,717,791 triples for 17,572 different chemicals and 1,886,496 different proteins. Furthermore, the graph was split into 5 roughly same size subgraphs and GNN was trained sequentially on each of them by upgrading the model trained using the previous subgraph.
## 4 Experiments
We summarise the main results from our experiments and further details are available in the appendices.
### Results and discussion
### Benchmark tasks and downstream models
Downstream benchmarksWe use the TDC [12] benchmark datasets regarding drug-target binding affinity for evaluation. The DTI-DG dataset has a leaderboard with the state-of-the-art metrics reported for different methods. The data was temporally split based on patent application dates making this dataset suitable for method generalization evaluation. On the other hand, the DAVIS and the KIBA datasets have a random split, together with two more splits based on target or drug. The latter splits help validate the learning methods against new drugs/proteins never seen during training.
Downstream modelsThe TDC framework adapts the DeepDTA framework [21] to learn drug/protein features using separated convolution layers before concatenating transformed features for binding affinity prediction. We discovered that the given architecture is not effective in the context where both drug and protein embeddings are already given as input. With the original TDC framework, the baseline used in Table 3, based on ESM embeddings for proteins and Morgan fingerprints for molecules SMILES, gives much worse results (0.539) than using an architecture that directly concatenates protein and drug embeddings for further transformation (0.569) on the DTI DG dataset. Therefore, we create a new model architecture for the evaluation of the pretrained embeddings. Besides the network that transforms the ESM and Morgan fingerprint, we create a parallel network that transforms the GNN embeddings. Both networks predict the binding affinity and the final prediction is the sum of two predictions (for details about the network size refer to the Appendices).
Ensemble learningThe pretrained representation is sensitive to various factors, such as the chosen objectives for the Graph Neural Network (GNN) and the specific graphs used for training. Additionally, combining all the datasets into a single large graph requires substantial computational resources and poses challenges in aligning different databases. In this study, we propose a simple approach that involves combining pretrained embeddings obtained from different settings and datasets. This allows us to train multiple GNN models simultaneously without the need to merge all the
\begin{table}
\begin{tabular}{l l l l} \hline Datasets & **DTI DG** & **DAVIS** & **KIBA** \\ \hline \# triples & 232460 & 27621 & 118036 \\ \# drugs & 140745 & 68 & 2068 \\ \# proteins & 569 & 379 & 299 \\ Type of splits & Temporal & Random/Drug/Target & Random/Drug/Target \\ \hline \end{tabular}
\end{table}
Table 2: TDC benchmark datasets and statistics.
datasets into a single location, resulting in savings in computational resources and effort required for database alignment. We create a linear ensemble by assigning equal weights to individual downstream models, each trained with separate pretrained embeddings given by individual GNN models.
Knowledge enhanced representation versus vanilla representationIn Table 3, we present the outcomes of the _Otter-Knowledge_ models, which were pretrained on graphs generated from Uniprot (U), BindingDB (B), ChemBL (C), DUDe, PrimeKG and STITCH with three different training objectives: TransE, DistMult, and binary classifier respectively. Also, and as described in Section 2.3, we control the information propagated to the Drug/Protein entities, and manually handpick a subset of links from each database that are relevant to drug discovery. In all of these methods, we started with the initial embeddings of sequences using ESM-1b models, and Morgan fingerprints were utilized for SMILES, we call this baseline method vanilla representation as oppose to methods with knowledge enhanced representation. The embeddings were then fine-tuned with knowledge graphs. Our results demonstrate that _Otter-Knowledge_ outperforms the baseline vanilla representation without the enhanced knowledge from the graphs. Notably, a significant improvement was observed when we created an ensemble of 12 models trained on UDC and DUDe, PrimeKG and STITCH. We achieved state-of-the-art (SOTA) results on the leaderboard of the DTI DG dataset. However, for the KIBA dataset with drug split, the improvements were not substantial. As indicated in Table 4, the KIBA dataset consists of only 68 drugs. The limited number of drugs makes this specific split particularly challenging, and we consider it an open challenge for future research.
Information flow and noisy linksTable 4 shows the results of _Otter-Knowledge_ for UBC when (i) we do _not_ control the information that is propagated to Drug/Protein entities, (ii) we do _not_ cherry-pick a subset of links from each database that are relevant to the downstream task, (iii) regression for numerical data properties is added to the
\begin{table}
\begin{tabular}{l l l l l l l l} \hline Datasets (UBC) & \multicolumn{2}{c}{**DTI DG**} & \multicolumn{2}{c}{**DAVIS**} & \multicolumn{2}{c}{**KIBA**} \\ \cline{2-9} Splits & Temporal & Random & Target & Drug & Random & Target & Drug \\ \hline Otter DistMult (C) & 0.575 & 0.809 & 0.571 & 0.126 & 0.861 & **0.643** & 0.617 \\ Otter TransE (C) & 0.576 & 0.809 & 0.570 & **0.157** & 0.858 & 0.632 & 0.585 \\ Otter Classifier (C) & 0.578 & 0.814 & **0.577** & 0.097 & 0.861 & 0.633 & 0.631 \\ \hline Otter DistMult (N+C) & 0.578 & 0.809 & 0.574 & 0.105 & 0.862 & **0.643** & 0.615 \\ Otter TransE (N+C) & 0.579 & 0.809 & 0.573 & 0.108 & 0.857 & 0.633 & 0.583 \\ Otter Classifier (N+C) & 0.580 & **0.816** & **0.577** & 0.147 & **0.864** & 0.639 & **0.641** \\ \hline Otter DistMult (N+C+R) & 0.579 & 0.810 & 0.572 & 0.145 & 0.862 & 0.629 & 0.625 \\ Otter TransE (N+C+R) & 0.580 & 0.811 & 0.576 & 0.073 & 0.859 & 0.627 & 0.594 \\ Otter Classifier (N+C+R) & **0.582** & 0.812 & 0.574 & 0.124 & 0.860 & 0.619 & 0.600 \\ \hline \end{tabular}
\end{table}
Table 4: Information flow control and noisy links results for UBC for different scoring functions. The table results should be compared with the results in Table 3 (UBC). The evaluation metrics is Pearson correlation (higher is better). N (noisy links); C (no flow control); R (regression).
\begin{table}
\begin{tabular}{l l l|l l l|l l l} \hline & \multicolumn{2}{c}{**Downstream**} & \multicolumn{2}{c}{**DTI DG**} & \multicolumn{2}{c}{**DAVIS**} & \multicolumn{2}{c}{**KIBA**} \\ \cline{3-8}
**Upstream** & Splits & Temporal & Random & Target & Drug & Random & Target & Drug \\ \hline \multirow{4}{*}{} & Leaderboard\({}^{\sharp}\) & 0.538 & NA & NA & NA & NA & NA & NA \\ & Baseline & 0.569 & 0.805 & 0.554 & **0.264** & 0.852 & 0.630 & 0.576 \\ \hline \multirow{4}{*}{UBC} & Otter DistMult & 0.578 & 0.808 & 0.572 & 0.152 & 0.859 & 0.627 & 0.593 \\ & Otter TransE & 0.577 & 0.807 & 0.571 & 0.130 & 0.858 & 0.644 & 0.583 \\ & Otter Classifier & 0.580 & 0.810 & 0.573 & 0.104 & 0.861 & 0.631 & 0.616 \\ \hline \multirow{4}{*}{DUDe} & Otter DistMult & 0.577 & 0.805 & 0.573 & 0.132 & 0.857 & 0.650 & 0.607 \\ & Otter TransE & 0.576 & 0.807 & 0.570 & 0.170 & 0.858 & 0.653 & 0.604 \\ & Otter Classifier & 0.579 & 0.808 & 0.574 & 0.167 & 0.860 & 0.641 & 0.630 \\ \hline \multirow{4}{*}{PrimeKG} & Otter DistMult & 0.575 & 0.806 & 0.571 & 0.162 & 0.858 & 0.611 & 0.617 \\ & Otter TransE & 0.573 & 0.807 & 0.568 & 0.186 & 0.858 & 0.642 & 0.607 \\ \cline{1-1} & Otter Classifier & 0.576 & 0.813 & 0.576 & 0.133 & 0.861 & 0.630 & 0.635 \\ \hline \multirow{4}{*}{STITCH} & Otter DistMult & 0.575 & 0.808 & 0.573 & 0.138 & 0.859 & 0.615 & 0.603 \\ \cline{1-1} & Otter TransE & 0.578 & 0.814 & 0.572 & 0.119 & 0.859 & 0.636 & 0.635 \\ \cline{1-1} & Otter Classifier & 0.576 & 0.804 & 0.571 & 0.156 & 0.856 & 0.627 & 0.585 \\ \hline \multicolumn{8}{c}{} & Otter Ensemble & **0.588** & **0.839** & **0.578** & 0.168 & **0.886** & **0.678** & **0.638** \\ \hline \end{tabular}
\end{table}
Table 3: Results of knowledge enhanced representation on three standard drug-target binding affinity prediction benchmarks datasets with different splits. The evaluation metrics is Pearson correlation (higher is better). We reported the results concerning pretraining on separate upstream datasets and the ensemble of these models.
objective in addition to link prediction. Observe from the table that the results are similar to the results in Table 3, with minor variations across different scoring functions and datasets. Notably, Otter Classifier with noisy links (N) and no information flow control (C) achieves comparable or even better performance than when we cherry-pick links and control the flow of information (Table 3). These minor variations suggest that the embeddings computed by the GNN are resilient to noisy triples that are not directly relevant to the downstream tasks. Adding regression objectives and information flow control does not provide a significant result improvement.
Morgan-fingerprint versus MolFormerThe base of the _Otter-Knowledge_ is to leverage the pre-trained representations, before enhancing them with additional knowledge. Thus, the initial embeddings computed for the SMILES and sequences have an impact in the results. Table 5 shows the results of _Otter-Knowledge_ using a ClassifierHead, trained on UDB for 25 epochs with different drug initial representations. We can see that the MolFormer does not give superior results compared to Morgan-fingerprint, there is room for improvement regarding learned representation over simple fingerprint-based approaches for small molecules.
## 5 Related work
We review methods to learn an effective representation from proteins, molecules and their interactions.
Representation learning for proteins and small moleculesRepresentation learning focuses on encoding essential information about entities, such as proteins or molecules, into low-dimensional tensors. Self-supervised algorithms using language models (LMs) have achieved remarkable success in learning protein and molecule representations by training on extensive datasets of protein sequences or linear serialization of small molecules, such as SMILES. State of the art examples of transformer-based protein language models (PLMs) are TAPE [22], ProteinLM [36], ProteinBERT [2], ESM [25], Prottrans [6], and MSA [23]. They are typically trained on masked reconstruction - they learn the likelihood that a particular amino acid appears in a sequence context. Because the probability that a residue will be conserved or not across related sequences is intrinsically tied to its biological role, existent PLMs can capture co-evolutionary and inter-residue contact information [22; 25], and have shown impressive performance on various tasks, such as predicting protein structure [15] and function [22]. Regarding small molecules, their molecular structure can be condensed into linear notations like SMILES or SELFIES. LMs have also been used to interpret these representations, e.g., MolFormer [28], MolBERT [7], SmilesFormer [20] or SELFormer [37]. Both Protein and molecular representations have been fine-tuned using a contrastive learning co-embedding by Conplex [30] achieving good performance in Drug-Target Interaction (DTI) prediction, surpassing state of the art approaches in the TDC-DG leaderboard which evaluates of out-of-domain generalisation [11] and achieving high specificity while detecting false positive bindings in "decoy" datasets like DUD-E.
Knowledge enhanced pre-trained language models for proteinsLMs do not consider existent extensive knowledge, in the form of manually curated functional and structural annotations, in human-curated domain knowledge bases and effectively leveraging all this available factual knowledge to enhance representation learning is still an open research challenge. Nonetheless, prior research indicates that it can improve results in downstream learning tasks. OntoProtein [38] fine-tuned a PLM by reconstructing masked amino acids while minimizing the embedding distance between the contextual representation of proteins and associated gene ontology functional annotations [3]. For this purpose they built ProteinKG25, a KG consisting of 600k protein sequences and nearly five million triples. Their results show that the representations obtained where useful for classification tasks such as protein-protein interaction type, protein function, and contact prediction; but underperformed in regression tasks like homology, fluorescence, and stability.
KeAP [39], on the other hand, claims to explore a more granular token-level approach, where non-masked amino acids iteratively query the associated knowledge tokens to extract helpful information (from the Gene ontology) for restoring masked amino acids via cross-attention. The training process is guided only by the mask token reconstruction objective, while OntoProtein uses both contrastive learning and masked modelling simultaneously. KeAP, also trained on ProteinKG25 [38] and it outperforms OntoProtein on 9 downstream tasks, including contact, protein-protein interaction type, homology, stability, and protein-protein binding affinity prediction.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Datasets & **DTI DG** & \multicolumn{3}{c}{**DAVIS**} & \multicolumn{3}{c}{**KIBA**} \\ \cline{2-9} Splits & Temporal & Random & Target & Drug & Random & Target & Drug \\ \hline MolFormer & 0.547 & **0.811** & **0.578** & 0.103 & 0.838 & **0.642** & **0.624** \\ Morgan fingerprint & **0.574** & 0.806 & 0.573 & **0.125** & **0.861** & 0.631 & 0.619 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Impact of different modalities for drugs on the UDB datasets
Graph-based approaches for therapeuticsGraphs are a natural way to represent molecular interactions, signalling pathways, and disease co-morbidities. They can also be used for representation learning as they allow for the distillation of high-dimensional information about a node's neighborhood into low-dimensional vector spaces. The training objective of these representations is that similar neighborhoods are embedded close to each other in the vector space. Optimised representations can then be used to train downstream models to predict properties of specific nodes (e.g., protein function), as well as, novel associations between nodes (e.g., drug-target interactions). An overview on graph representation learning in biomedicine can be found in [14]. State of the art approaches have shown that incorporating multiple knowledge sources improves downstream performance (e.g., DTiGEMS+ [35] formulates the prediction of DTIs as a link prediction problem in an heterogeneous graph, or TxGNN for predicting drug indications/contranidications for rare diseases [10]).
## 6 Conclusion and future work
In this paper, we studies the representation learning problems for multimodal knowledge graphs fused from multiple sources. Our study serves as a foundation for future investigations in this area with the release of curated datasets from different sources in the format that is ready for studying pretraining methods. Additionally, we have made available pre-trained models that have been constructed using a comprehensive collection of open data. These models can be utilized to acquire representations specifically tailored for drug discovery applications. Furthermore, they can serve as benchmarks for comparing and evaluating against other baseline representation learning techniques. We have also made the standard evaluation framework for assessing pretrained representations in drug-target binding affinity prediction publicly available. Furthermore, we conducted a thorough analysis of various representation learning methods using three well-established benchmark datasets for drug-target binding affinity prediction. Notably, our approach achieved state-of-the-art results on the TDC DG dataset, demonstrating its superiority over existing methods.
Our study establishes the foundation for exploring the learning of multi-modal knowledge representation. Nevertheless, numerous unresolved research questions persist. Firstly, the incorporation of additional modalities, such as the 3D structure of molecules or proteins, can provide valuable insights for representation learning. Secondly, the challenge lies in effectively handling a vast number of databases, where aligning them is not a straightforward task. Developing a learning approach capable of accommodating the dynamic input schema from diverse sources is a crucial problem to address. Finally, evaluating generasibility of the learned graph representation for further (predictive or generative) downstream tasks and use cases and having a more robust learning methods for generalizing the learned representation to multiple tasks under data distribution shift is an important research topic.
## 7 Limitations
Due to license restriction of some datasets we only release the datasets with non-commercial licenses and the pretrained models that were built on these datasets. The datasets do not include 3D structure of proteins/drugs which can be an interesting modalities for future work.
|
2301.07516 | Quality Attributes Optimization of Software Architecture: Research
Challenges and Directions | The estimation and improvement of quality attributes in software
architectures is a challenging and time-consuming activity. On modern software
applications, a model-based representation is crucial to face the complexity of
such activity. One main challenge is that the improvement of distinctive
quality attributes may require contrasting refactoring actions on the
architecture, for instance when looking for trade-off between performance and
reliability (or other non-functional quality attributes). In such cases,
multi-objective optimization can provide the designer with a more complete view
on these trade-offs and, consequently, can lead to identify suitable
refactoring actions that take into account independent or even competing
objectives.
In this paper, we present open challenges and research directions to fill
current gaps in the context of multi-objective software architecture
optimization. | Daniele Di Pompeo, Michele Tucci | 2023-01-18T13:37:21Z | http://arxiv.org/abs/2301.07516v1 | # Quality Attributes Optimization of Software Architecture: Research Challenges and Directions
###### Abstract
The estimation and improvement of quality attributes in software architectures is a challenging and time-consuming activity. On modern software applications, a model-based representation is crucial to face the complexity of such activity. One main challenge is that the improvement of distinctive quality attributes may require contrasting refactoring actions on the architecture, for instance when looking for trade-off between performance and reliability (or other non-functional quality attributes). In such cases, multi-objective optimization can provide the designer with a more complete view on these trade-offs and, consequently, can lead to identify suitable refactoring actions that take into account independent or even competing objectives.
In this paper, we present open challenges and research directions to fill current gaps in the context of multi-objective software architecture optimization.
refactoring, multi-objective optimization, software architecture, performance
## I Introduction
Different factors, such as the addition of new requirements, the adaption to new execution contexts, or the deteriorating of non-functional attributes, can lead to software refactoring. Identifying the best refactoring operations is challenging because there is a wide range of potential solutions and no automated assistance is currently available. In this situation, search-based approaches have been widely used [1, 2, 3, 4, 5].
Multi-objective optimization approaches, which are search-based, have lately been used to solve model refactoring optimization issues [6, 7]. Searching among design alternatives (for example, through architectural tactics) is a typical feature of multi-objective optimization methodologies used to solve model-based software restructuring challenges [8, 7].
The automated refactoring of software models plays an important role in optimizing software architectures, as it allows generating design alternatives while preserving the external behavior of its functionalities. While being beneficial in finding such alternatives, the automated refactoring process can generate a considerable number of new solutions that are difficult for the designer to navigate. As a result, choosing the best refactoring methods from such a huge set of options requires significant effort, which can be reduced by multi-objective algorithms. However, in order to explore the solution space and produce an (almost) optimal Pareto frontier, multi-objective algorithms may require a significant amount of hardware resources (such as time and memory allocation). Even when automated, finding and creating Pareto boundaries can frequently take many hours or even days. Therefore, assessing and understanding the performance of multi-objective algorithms in software model refactoring is of paramount importance, especially when the goal is to integrate them into the design and evolution phases of software development.
In this paper, we present open challenges that, to the best of our knowledge, hinder the exploitation of search-based techniques within the context of quality attribute optimization of software architectures. We also describe the plan to overcome some of the listed open challenges.
## II State of the art
In the past ten years, approaches on software architecture multi-objective optimization have been developed to optimize various quality attributes (such as reliability and energy) [9, 10, 11, 12, 6]; with various degrees of freedom in the modification of architectures (such as service selection [13].
Recent research analyzes the capacity of two distinct multi-objective optimization algorithms to enhance non-functional features inside a particular architecture notation (_i.e.,_ Palladio Component Model) [7, 14, 15]. The authors use architectural approaches to find the best solutions, which primarily include changing system parameters (such as hardware settings or operation requirements).
Menasce _et al._ have provided a framework for architectural design and quality optimization, [16]. This framework makes use of architectural patterns to help the search process (such as load balancing and fault tolerance). The approach has two drawbacks: performance indices are computed using equation-based analytical models, which may be too simple to capture architectural details and resource contention; the architecture must be designed in a tool-specific notation rather than in a standard modeling language (as we do in this paper).
A method for modeling and analyzing AADL architectures has been given by Aleti _et al._[17]. A tool that may be used to optimize various quality attributes while adjusting architecture deployment and component redundancy has also been introduced.
Cortellessa and Di Pompeo [6] have presented a multi-objective framework aimed at improving the quality of architectural models specified by _&Emilia_[18]. Cortellessa and Di
Pompeo analyzed the sensibility of genetic algorithms when changing the configuration parameters.
Cortellessa _et al._[19] have instead studied the impact of specific non-functional quality metric (_i.e.,_ performance antipatterns [20]) on the overall quality of Pareto frontiers. In order to evaluate the overall quality of Pareto frontiers in this study, Cortellessa _et al._ exploited established quality indicators within the search-based theory.
Di Pompeo and Tucci [21] have investigated the effect of introducing a time budget to multi-objective optimization driven by non-functional quality attributes (such as performance and reliability) on the overall Pareto frontiers quality. The idea beyond the approach is to introduce concepts of search-based techniques already investigated within different domains to the software model optimization context.
## III Quality Attribute Optimization framework
Figure 1 depicts a classic search-based framework based on genetic algorithms. The framework starts from an _Initial Architecture_ and a set of _Refactoring Actions_. The initial architecture is the subject architecture to be optimized, while the refactoring actions are the set of all available actions that are combined by the _Bio-Inspired Actions_, which are _selection_, _mutation_, and _crossover_.
The _crossover_ operator mates solutions to evolve the species. Several crossover policies, such as the single point crossover, can be employed in this step. In this case, the chromosome will be split in two halves, and they will be alternatively combined. While the crossover operation is performed, the _mutation_ operator randomly selects an element belonging to the chromosome and change it with a new one, as a genetic mutation would do in nature. The _selection_ operator is in charge of discarding the worst elements in the offspring (_i.e.,_ the architecture alternatives). Once the offspring is made, elements belonging to the offspring are sorted with respect to the objectives (_i.e.,_ through the _sorting_ operator). When there are more than one objective, we are in a multi-objective optimization process. Often, when the objectives are more than 3, we are in a many-objective optimization process. Finally, when a stopping criterion is met, the optimization process ends and the final offspring will form the _Pareto frontier_.
## IV Open Challenges
_Lack of automation:_ As introduced before, when search-based techniques are put in place they require to generate a higher number of alternatives, even thousands. Therefore, the automation to form alternatives is a must-have functionality in every model-based optimization framework. Introducing automation in such a context would be a step ahead towards the adoption of search-based techniques within model-based optimization processes. Furthermore, the automation is strictly related to the modelling notation and its expressiveness. To the best of our knowledge, we introduced the first refactoring engine for UML [20]. Our refactoring engine exploits the Epsilon suite 1, and provides some facilities to refactor three different UML views, _i.e.,_ Component, Sequence, and Deployment diagrams. Then, we exploited the refactoring engine in a search-based optimization framework, where we sought optimal solutions with respect to four competitive objectives [19, 21].
Footnote 1: [https://www.eclipse.org/epsilon](https://www.eclipse.org/epsilon)
Problem formalization:One of the relevant aspects for applying search-based techniques is the formalization of the problem. Often, search-based approaches exploit evolutionary algorithms and, in the majority of the cases, genetic algorithms are used to search the solution space. Furthermore, a genetic algorithm is a bio-inspired algorithm that requires a chromosome to be manipulated for the evolution of the species.
To the best of our knowledge, there not exist guidelines in literature to represent specific problems as chromosomes. Positional structures are exploited to draw problems in model-based optimization studies [8, 7]. Thus, each position of the chromosome have a fixed meaning. One of the advantages of using positional chromosomes is their fastest execution of bio-inspired actions on them. However, the positional structure has the drawback to be too inexpensive within the context of model-based software refactoring. To overcome the above limitation, there exists a chromosome representation that reports a refactoring action into a chromosome position [20]. Nevertheless, it is slower than the positional structure due to the complexity of compatibility checking among elements within chromosomes.
For the above issues, we see an interesting research direction of the problem formalization within model-based software optimization that should fill the gap with more established search-based optimization problems.
Time and resources requirements:Improving quality attributes by means of optimization techniques often requires a considerable amount of time and resources, as the search for better solutions relies on the manipulation of modelling artifacts. Usually, the process of generating a new design alternative also involves a number of transformations from software
Fig. 1: A typical framework for quality attributes optimization of software architecture.
design models (_e.g.,_ UML) to non-functional models (_e.g.,_ Queueing Networks, Markov Chains). These target models are then used to quantify quality attributes like performance and reliability, either analytically or through simulation. Given their inherent complexity and the toolchain employed in these contexts, it is generally challenging to make these activities more efficient. As a consequence, they tend to extend the overall time needed for the optimization process. Moreover, when this process is performed on models that are not just toy examples but realistic in size and complexity, it can last several days [19]. This clearly poses a challenge in adopting search-based optimization techniques in practical software engineering scenarios. Finally, given the random nature of the algorithms that are usually employed in this context, it is very difficult to predict how long it will take for the process to complete. This issue is exacerbated by the fact that, in most cases, the solution space is unknown to the designer at the beginning of the optimization.
_Architectural quality metrics:_ When considering the multi-objective optimization in general, there is no lack of metrics (_e.g.,_ quality indicators) that can be used to quantify the performance, and consequently the outcome, of the optimization process. Nonetheless, such metrics only provide feedback that is based on the numerical values achieved by the solutions in the Pareto front for each objective. This viewpoint is useful to quantify the improvement realized by the solutions in terms of quality attributes and with respect to the initial model. However, the designer would not gain any feedback on how the architectural model itself changed during the process. This makes it difficult to compare the solutions in the Pareto front with the initial architecture and among themselves. Such a comparison is crucial because it guides the decision-making process of adopting a new design. In this regard, quality metrics that represent architectural aspects like the change in the number of communication paths, in their length, in the complexity of components, or in the number of exchanged messages, could make this comparison practical, and avoid inspecting every solution to obtain enough knowledge to make an informed decision.
_Explainability:_ As it is the case for many other optimization techniques, the solutions that are obtained through multi-objective optimization do not carry information about the specific causes that led to the generation and selection of such solutions. For instance, at the end of an automated refactoring process guided by a genetic algorithm, while we can, of course, inspect the solutions to learn what refactoring actions were applied to obtain the best results, we have no knowledge about the circumstances that made those choices preferable to all the others. In other words, we cannot explain why some modifications should be applied to our architecture other than for the quality attributes they seem to improve. In order to understand the modifications, we would have to know why they are beneficial. Unfortunately, in this kind of optimization processes, this is left for the designer to figure out. In this sense, the lack of explainability of results makes it difficult for the designer to justify the new modifications that she is proposing on the basis of a completely automated process.
_Reproducibility:_ Reproducing results of optimization experiments has been an important concern in recent years, both for researchers and practitioners. Of course, the main obstacle in this regard is represented by the random nature of many optimization techniques when it comes to the generation of new solutions and the exploration of the solution space. On top of this, when optimizing architectures, the solution space is difficult or impossible to define beforehand. More often than not, the solution space is not represented by just all the possible combinations of feasible modifications to the initial architecture, but it is built as the process goes on and the set of architectural elements that are possible targets of modifications changes. This uncertainty in the definition of the solution space and, consequently, in the obtained results is usually tackled by performing multiple runs of the same experiments, and by trying to reach conclusions on the basis of the information gathered in all the runs. While this is reasonable and practical in most cases, it can be very expensive when optimizing architectures, and less effective on large solution spaces. Therefore, achieving perfect reproducibility is still a challenge in architectural optimization, and one that is rarely addressed by the relevant literature.
## V Conclusion
In this paper, we reported open challenges of quality attributes optimization of software architectures. To the best of our knowledge, these open challenges hinder the utilization of search-based techniques in software architecture optimization.
Furthermore, our agenda foster new research activities in the field of software architecture optimization, especially for the optimization of non-functional properties, such as performance and reliability.
As short-term future work, we will try to tackle the challenge about the _lack of automation_. We already presented approaches targeted at this challenge [19, 18]. However, we plan on extending the introduced automation by supporting more refactoring actions, for example by implementing the Fowler's refactoring portfolio [22].
As mid-term future work, we will attempt to address the challenge about the _architectural quality metrics_. We plan to exploit quality estimation techniques well-recognized in the search-based community [23].
As long-term future work, we will attempt to address the challenge about the _problem formalization_. To empirically address this challenge, a profound analysis of several software architectures is probably required, and each one will generate a specific optimization problem.
## Acknowledgment
Daniele Di Pompeo is supported by the Centre of EXcellence on Connected, Geo-Localized and Cybersecure Vehicle (EX-Emerge), funded by the Italian Government under CIPE resolution n. 70/2017 (Aug. 7, 2017).
Michele Tucci is supported by the OP RDE project No. CZ.02.2.69/0.0/0.0/18_053/0016976 "International mobility of research, technical and administrative staff at the Charles University".
|
2303.09342 | Machine learning guided discovery of superconducting calcium
borocarbides | Pursuit of superconductivity in light-element systems at ambient pressure is
of great experimental and theoretical interest. In this work, we combine a
machine learning (ML) method with first-principles calculations to efficiently
search for the energetically favorable ternary Ca-B-C compounds. Three new
layered borocarbides (stable CaBC5 and metastable Ca2BC11 and CaB3C3) are
predicted to be phonon-mediated superconductors at ambient pressure. The
hexagonal CaB3C3 possesses the highest Tc of 26.05 K among the three compounds.
The {\sigma}-bonging bands around the Fermi level account for the large
electron-phonon coupling ({\lambda} = 0.980) of hexagonal CaB3C3. The ML-guided
approach opens up a way for greatly accelerating the discovery of new high-Tc
superconductors. | Chao Zhang, Hui Tang, Chen Pan, Hong Jiang, Huai-Jun Sun, Kai-Ming Ho, Cai-Zhuang Wang | 2023-03-16T14:22:12Z | http://arxiv.org/abs/2303.09342v1 | # Machine learning guided discovery of superconducting calcium borocarbides
###### Abstract
Pursuit of superconductivity in light-element systems at ambient pressure is of great experimental and theoretical interest. In this work, we combine a machine learning (ML) method with first-principles calculations to efficiently search for the energetically favorable ternary Ca-B-C compounds. Three new layered borocarbides (stable CaBC\({}_{5}\) and metastable Ca\({}_{2}\)BC\({}_{11}\) and CaB\({}_{3}\)C\({}_{3}\)) are predicted to be phonon-mediated superconductors at ambient pressure. The hexagonal CaB\({}_{3}\)C\({}_{3}\) possesses the highest \(T_{\rm c}\) of 26.05 K among the three compounds. The \(\sigma\)-bonging bands around the Fermi level account for the large electron-phonon coupling (\(\lambda=0.980\)) of hexagonal CaB\({}_{3}\)C\({}_{3}\). The ML-guided approach opens up a way for greatly accelerating the discovery of new high-\(T_{\rm c}\) superconductors.
\({}^{1}\)Department of Physics, Yantai University, Yantai 264005, China
\({}^{2}\) Jiyang College of Zhejiang Agriculture and Forestry University, Zhuji, 311800, China
\({}^{3}\) Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, United States
\({}^{4}\) Ames National Laboratory, Ames, Iowa 50011, United States
## 1 Introduction
Migdal-Eliashberg phonon mediated theory for superconductivity suggests that light-element compounds would be promising candidates for superconductors with high transition temperature (\(T_{\rm c}\)). In contrast to heavy-element compounds in which the energy scale of phonons is usually much smaller than that of electrons, light-element materials tend to have high-frequency phonons (owing to their light atomic mass) and offer a unique playground for nontrivial interplay between the Coulomb correlations and electron-phonon interactions.[1, 2] Pressurized hydrides have been predicted and observed to have high \(T_{\rm c}\) above 200 K, however, the high-\(T_{\rm c}\) hydrides are synthesized and stabilized at high pressures (\(>\) 100 GPa).[3] For example, high-\(T_{\rm c}\) superconductivity was established above 170 and 166 GPa for clathrate metal hydrides La-H and Y-H system,[4-6] and near 155 GPa for H\({}_{3}\)S.[7] Since the requirement of very high pressures would hinder the practical applications, the pursuit of high-\(T_{\rm c}\) superconductor that can persist in stable or metastable compounds at lower, and even ambient, pressure has generated a considerable recent intertest, yet remains an outstanding challenge.
Boron and/or carbon compounds would be good candidates for high-\(T_{\rm c}\) superconductor at ambient pressure. The well-known superconducting MgB\({}_{2}\) possesses a \(T_{\rm c}\) of 39 K,[8] due to strong coupling between \(\sigma\)-bonding electrons and B-B in-plane stretching vibrational phonons.[9-14] Superconductivity was also found in graphite intercalated compounds (GICs), but the \(T_{\rm c}\) are generally low (\(<\) 2 K),[15] expect CaC\({}_{6}\) which exhibits the highest \(T_{\rm c}\) of 11.5 K in this class of materials.[16, 17] Isovalent with and structurally similar to MgB\({}_{2}\), hole-doped layered Li\({}_{x}\)BC was suggested as a high-\(T_{\rm c}\) superconductor.[18, 19] However, superconducting Li\({}_{x}\)BC has not been observed in experiments due to doping caused strong lattice distortion and
considerable changes in electronic band structures.[20-23] It was thus proposed that substituting carbon atoms with boron atoms would introduce hole doping in LiBC while retain the stability of the lattice. First-principles calculations suggested that layered Li\({}_{3}\)B\({}_{4}\)C\({}_{2}\), Li\({}_{2}\)B\({}_{3}\)C, LiB\({}_{1.1}\)C\({}_{0.9}\), and Li\({}_{4}\)B\({}_{5}\)C\({}_{3}\) are superconductors with strong electron-phonon coupling. Moreover, alkaline earth metal intercalated layered compounds XBC (X = Mg, Ca, Sr, Ba) which adopt LiBC structure[24] and layered CaB\({}_{3}\)C\({}_{3}\) which is structurally similar to CaC\({}_{6}\),[25] were also predicted to be phonon-mediated superconductors.
Apart from layered metal-intercalated borocarbide compounds, superconductivity was also found in carbon-boron clathrates. A carbon-boron clathrate SrB\({}_{3}\)C\({}_{3}\), which was successfully synthesized at a pressure of 57 GPa[26], was theoretically predicted to be a superconductor with _T\({}_{\rm c}\)_ \(\sim\) 40 K.[27, 28] This clathrate is composed of a single truncated octahedral B-C cage with Sr atoms incorporated at the void of the B-C cage. The electron-phonon coupling in clathrate SrB\({}_{3}\)C\({}_{3}\) comes from the _sp\({}^{3}\)_-hybridized \(\sigma\)-bands and the boron-associated _E\({}_{g}\)_ phonon modes. The _T\({}_{\rm c}\)_ was further enhanced to 75 K when partially substituting Sr with Rb.[29, 30] By replacing Sr atom in the clathrate SrB\({}_{3}\)C\({}_{3}\) with the first 57 elements (Z = 1 - 57), CaB\({}_{3}\)C\({}_{3}\), YB\({}_{3}\)C\({}_{3}\), and LaB\({}_{3}\)C\({}_{3}\) clathrates were predicted to be stable at ambient pressure by a high-throughput first-principles calculations.[30]
There is only one stoichiometric calcium borocarbide, CaB\({}_{2}\)C\({}_{2}\), synthesized so far for the Ca-B-C ternary system. The crystal structure of CaB\({}_{2}\)C\({}_{2}\) was firstly identified to be isostructural to LaB\({}_{2}\)C\({}_{2}\).[31] Subsequent studies using X-ray diffraction method proposed two structures with space groups, _I4/mcm_ and _P4/mbm_, respectively, would be the ground state of CaB\({}_{2}\)C\({}_{2}\).[32, 33] In fact, the total energies of the two structures are very similar using first
principles calculations.[33] The _IA/mcm_ phase was finally determined to be the ground state of CaB\({}_{2}\)C\({}_{2}\) using electron energy-loss spectroscopy.[34] However, neither _IA/mcm_ nor _PA/mbm_ phase of CaB\({}_{2}\)C\({}_{2}\) is a superconductor. Whether there are stable compounds with high-\(T_{\rm c}\) in the Ca-B-C system at ambient pressure remains an open question.
In this work, we take the Ca-B-C ternary system as a prototype alkaline earth borocarbies to predict new ternary compounds using an efficient framework which integrates machine learning (ML) and first-principles calculations.[35; 36] Three stable and three low-energy metastable structures of calcium borocarbide are found and three of them (CaBC\({}_{5}\), Ca\({}_{2}\)BC\({}_{11}\), and CaB\({}_{3}\)C\({}_{3}\)) are predicted to be phonon-mediated superconductors.
## 2 Computational details
The low-energy structures and compositions of Ca-B-C system are explored using an interactive framework which combines an efficient ML model for high-throughput screening and first-principles methods for accurate calculations. A crystal graph conventional neural network (CGCNN) ML model [37] is employed to perform the fast high-throughput screening over a wide range of possible compositions and crystal structures to select promising candidates for low-energy Ca-B-C ternary compounds.
We construct a hypothetical structure pool for ternary Ca-B-C compounds by collecting 11,914 known ternary structures from the MP database[38] and replace the three elements with Ca, B, and C. For a given ternary structure from the MP database, five hypothetical lattices are generated by uniformly changing the original volume by a factor of 0.96--1.04 with an interval of 0.02. There are six ways to shuffle the three elements on a given template ternary structure. Thus, a set of 357,420 hypothetical Ca-B-C structures are
generated based on MP database (we refer to these structures as MP-based structures). We also generate another set of 65,287 structures using a random generation algorithm (refer to as RS-based structures).[39]
The CGCNN model for predicting the formation energies of compounds developed by Xie and Grossman in Ref. [37] was trained using the DFT calculated structures and energies of 28,046 binary and ternary compounds involving a wide range of chemical elements in the Materials Project (MP) database. We refer to this model as the first generation (1G-CGCNN) model. By applying the 1G-CGCNN model to the MP-based and RS-based Ca-B-C ternary structures generated above, we select only 1200 and 1200 structures with negative formation energies from MP-based and RS-based methods, respectively, for subsequent first-principles calculations. The results of the first-principles calculation on these 2400 candidate structures will be used to retrained the CGCNN model (refer to as 2G-CGCNN model) to improve the accuracy specifically for Ca-B-C system.
In order to generate sufficient data for training the 2G-CGCNN model, we also performed high-throughput first-principles calculations by substituting alkali and alkaline metals by Ca in the known stable alkali and alkaline metal borides, carbides, and borocarbides. We found some stable and metastable Ca-B binary, Ca-C binary and Ca-B-C ternary compounds, i.e, \(P6_{3}/mmc\)-CaB\({}_{2}\), \(P4/mbm\)-CaB\({}_{3}\), \(Imma\)-CaB\({}_{7}\), \(R\overline{3}m\) -CaC\({}_{2}\), \(P6/mmm\)-CaC\({}_{8}\), \(P6/mmm\)-CaC\({}_{12}\), \(Imma\)-CaB\({}_{12}\)C\({}_{2}\), and \(P\overline{4}n2\) -Ca\({}_{2}\)B\({}_{24}\)C. For each crystalline structure of the stable and metastable of Ca-B, Ca-C, and Ca-B-C compounds, we uniformly expand or contract the unit cell of the structure by a scaling factor ranging from 0.85 to 1.25 with 0.05 interval, and at the same time the atomic positions in each unit cell are perturbed randomly for 50 times with distortion amplitudes in the range from \(-0.025\) to 0.025 times the length of the cell vector.
In this way, 4896 distorted crystalline structures are generated (refer to as DC-based structures) and the total energies of the structures are calculated by first-principles calculations.
Based on the first-principles calculation results on the 7296 structures from the three different generation methods discussed above, we select 6271 structures whose formation energies are smaller than 2 eV/atom to train the 2G-CGCNN model specifically for Ca-B-C system. We then apply the 2G-CGCNN ML model to the set of 357,420 MP-based structures and to the set of 65,287 RS-based structures. A total of 2000 structures with low formation energy predicted by the 2G-CGCNN model from the structure pool generated by MP and RS methods are then selected for the structure relaxation by first-principles calculations. The distribution of the formation energies of hypothetical structures predicted by the 1G- and 2G-CGCNN models are given in the Supplemental Material.
The first-principles calculations are carried out according to density functional theory (DFT) within the framework of the all-electron projector-augmented wave (PAW) method,[40] as implemented in the Vienna Ab inito Simulation Package (VASP).[41] We adopt the Perdew-Burk-Emzerhof (PBE) functionals at the generalized gradient approximation (GGA) level.[42] A plane wave basis set with a kinetic energy cut-off of 520 eV is used and a uniform, \(\Gamma\)-centered \(k\) mesh with \(2\pi\times 0.03\) A-1 spacing. The electron-phonon coupling (EPC) is calculated using the Quantum-Espresso (QE) code[43] with the PBE functional and PAW pseudopotentials. The planewave basis set and the charge density are expanded with the kinetic energy cut-offs of 60 and 600 Ry, respectively, in the EPC calculations.
In order to assess the reliability of the CGCNN predictions, we random select 500 structures from the train database, and compare the predicted formation energies from the 1G- and 2G-CGCNN ML models with those from first-principles calculations. The performance of the 2G-CGCNN ML model is significantly better than that of the 1G-CGCNN model, as shown in Fig. 1(a). The 1G-CGCNN ML model significantly underestimates the formation energies of the Ca-B-C system. The root mean square error (RMSE) of the 2G-CGCNN model is 0.092 eV/atom, which is much smaller than 0.765 eV/atom from the 1G-CGCNN model. Thus, our trained 2G-CGCNN model is more suitable for the Ca-B-C system.
The 2G-CGCNN model enable us to discover two stable calcium borocarides, CaBC\({}_{5}\) with _Amm2_ symmetry and CaB\({}_{13}\)C\({}_{2}\) with _C2/m_ symmetry. Based on CaBC\({}_{5}\), we build Ca\({}_{2}\)BC\({}_{11}\) with _Amm2_ symmetry, CaB\({}_{2}\)C\({}_{4}\) with _Amm2_ symmetry, and CaB\({}_{3}\)C\({}_{3}\) with _P\(\bar{6}2m\)_ symmetry. The structural
Figure 1: (a) Formation energy of Ca-B-C system predicted by CGCNN models are compared with those from DFT. (b) Ternary phase diagram of Ca-B-C system at ambient pressure.
relationships between CaBC\({}_{5}\), Ca\({}_{2}\)BC\({}_{11}\), CaB\({}_{2}\)C\({}_{4}\), and CaB\({}_{3}\)C\({}_{3}\) will be discussed below. The ternary phase diagram of the Ca-B-C system at ambient pressure is constructed using previously reported experimental structures and our predicted structures, as shown Fig. 1(b). _Amm2_-CaBC\({}_{5}\), _Amm_2-CaB\({}_{2}\)C\({}_{4}\), _I4_/_mcm_-CaB\({}_{2}\)C\({}_{2}\), _Imma_-CaB\({}_{12}\)C\({}_{2}\), and _C2_/_m_-CaB\({}_{13}\)C\({}_{2}\) are stable ternary compounds. _Amm_2-Ca\({}_{2}\)BC\({}_{11}\), _Fmm_2-CaB\({}_{2}\)C\({}_{6}\), \(P\)62_m_-CaB\({}_{3}\)C\({}_{3}\), \(P\)4_n_2-Ca\({}_{2}\)B\({}_{24}\)C, \(P\)6\({}_{3}\)/_mmc_-CaBC are metastable ternary compounds, whose formation energies are 9.5 meV/atom, 41.6 meV/atom, 153.0 meV/atom, 168.6 meV/atom, and 201.4 meV/atom, respectively, above the convex hull. To gain insights into the effects of temperature on the stable and metastable compounds, we evaluated the Gibbs free energy as a function of temperature, as shown in Supplemental Material. The ternary phase diagram of the Ca-B-C system changes slightly below 500, and we selected a phase diagram at 300 K shown in Supplemental Material.
Figure 2: Side view of crystal structures of (a) Ca\({}_{2}\)BC\({}_{11}\), (b) CaBC\({}_{5}\), (c) CaB\({}_{2}\)C\({}_{4}\), and (d) CaB\({}_{3}\)C\({}_{3}\). (e) Graphene layer and (f)-(h) boron-carbon layers. Dark blue, green, and brown balls represent Ca, B, and C atoms, respectively.
Figure 2 shows the crystal structure of Ca\({}_{2}\)BC\({}_{11}\), CaBC\({}_{5}\), CaB\({}_{2}\)C\({}_{4}\), and CaC\({}_{3}\)C\({}_{3}\). Similar to GICs, these four compounds are Ca intercalated layered structures. Carbon and boron atoms form three types of graphene-like layers, which are termed B-type (denoted as B), C-type (denoted as C), and D-type (denoted as D) carbon-boron layer with increasing boron content, as shown in Figs. 2(f)-(h). The pristine graphene layer is termed A-type layer (denoted as A), as shown in Fig. 1(e). The intercalant Ca atoms form a triangular array between graphene layers or graphene-like carbon-boron layers, and each layer of Ca intercalation is denoted as \(\alpha\). The stacking sequences of the Ca\({}_{2}\)BC\({}_{11}\), CaBC\({}_{5}\), CaB\({}_{2}\)C\({}_{4}\), and CaB\({}_{3}\)C\({}_{3}\) are thus in the pattern of A\(\alpha\)B\(\alpha\), B\(\alpha\)B\(\alpha\), Ca\(\alpha\)C\(\alpha\), and D\(\alpha\)D\(\alpha\), respectively. In this way, Ca\({}_{2}\)BC\({}_{11}\), CaBC\({}_{5}\), and CaB\({}_{2}\)C\({}_{4}\) adopt orthorhombic structures with _Amm2_ symmetry, whereas CaB\({}_{3}\)C\({}_{3}\) form a hexagonal structure with \(P\overline{6}2m\) symmetry.
Instead of being exactly in the middle of the adjacent graphene layer and boron-carbon layer, Ca atoms are biased towards the graphene layer. With increasing content of boron in Ca\({}_{2}\)BC\({}_{11}\), CaBC\({}_{5}\), and CaB\({}_{2}\)C\({}_{4}\), the interlayer distance between Ca layer and boron-carbon layer decreases. As a result, the average C-C and B-C bond lengths in boron-carbon layer slightly increase from Ca\({}_{2}\)BC\({}_{11}\), CaBC\({}_{5}\), to CaBC\({}_{4}\). For the highest ratio of boron to carbon (1:1) in CaB\({}_{3}\)C\({}_{3}\), the average B-C bond length is 1.566 A. The average C-C bond length in the graphene layer of Ca\({}_{2}\)BC\({}_{11}\) is 1.467 A, which is slightly larger than that of 1.449 A in CaC\({}_{6}\). It is noteworthy that the stoichiometric CaB\({}_{3}\)C\({}_{3}\) with \(P\overline{6}2m\) symmetry predicted in this work has very similar energy with another CaB\({}_{3}\)C\({}_{3}\) with \(R32\) symmetry which was proposed by Chen.[25] However, the symmetry of \(P\overline{6}2m\)-CaB\({}_{3}\)C\({}_{3}\) is higher than that of \(R32\)-CaB\({}_{3}\)C\({}_{3}\). In addition, the metastable Ca\({}_{2}\)B\({}_{24}\)C, stable CaB\({}_{13}\)C\({}_{2}\), and stable CaB\({}_{12}\)C\({}_{2}\) contain B cages, and details of structural properties of these compounds are
shown in the Supplemental Material.
The orbital-resolved electronic band structures and projected density-of-states (PDOSs) of Ca\({}_{2}\)BC\({}_{11}\), CaBC\({}_{5}\), CaB\({}_{2}\)C\({}_{4}\), and CaB\({}_{3}\)C\({}_{3}\) are plotted in Fig. 3. The _Amm_2-Ca\({}_{2}\)BC\({}_{11}\), _Amm_2-CaBC\({}_{5}\), and \(P\overline{6}2m\) -CaB\({}_{3}\)C\({}_{3}\) are metallic, whereas the _Amm_2-CaB\({}_{2}\)C\({}_{4}\) is a semiconductor. We focus on the electronic contributions of boron and carbon atoms around the Fermi level, and the total DOS and PDOS of Ca are shown in the Supplemental Material. For the _Amm_2
Figure 3: Orbital-resolved electronic band structure and projected density of states (PDOS) of (a) Ca\({}_{2}\)BC\({}_{11}\), (b) CaBC\({}_{5}\), (c) CaB\({}_{2}\)C\({}_{4}\), and (d) CaB\({}_{3}\)C\({}_{3}\). The unit of electronic PDOS is states/eV/atom.
Ca\({}_{2}\)BC\({}_{11}\) and _Amm2_-CaBC\({}_{5}\), the \(\pi\) electrons from B and C atoms play a vital role at the Fermi level compared with \(\sigma\) electrons from them. As one can see from Fig. 3(a) and 3(b), the \(\pi\) electrons from both boron and carbon in these two structures provide significant DOS at the Fermi level. The PDOS from the \(\pi\) electrons in mixed B-C layer decreases quickly below the Fermi level and reach almost zero at about 1.5 eV below the Fermi level, while the PDOS of the \(\pi\) electrons from the graphene layer in the _Amm2_-Ca\({}_{2}\)BC\({}_{11}\) increases and forms a peak around 1 eV below the Fermi level combined with Ca \(d\) states, as shown in Supplemental Material. Different from _Amm2_-Ca\({}_{2}\)BC\({}_{11}\) and _Amm2_-CaBC\({}_{5}\), the \(\sigma\) electrons from B and C atoms in the _Amm2_-CaB\({}_{2}\)C\({}_{4}\) and \(P\overline{6}2m\)_-CaB\({}_{3}\)C\({}_{3}\) structures play an important role in the PDOS around the Fermi level. The top of the valence band and the bottom of the conduction band of _Amm2_-CaB\({}_{2}\)C\({}_{4}\) locate between the Z \(\rightarrow\Gamma\) line, which results to a small gap of 0.083 eV at the PBE-level. The \(P\overline{6}2m\)-CaB\({}_{3}\)C\({}_{3}\) is metallic with strong contributions from the \(\sigma\) electrons of boron and carbon atoms to the states at the Fermi level in addition to the \(\pi\) electrons contributions, although there is a band gap of \(\sim\)1 eV starts from about 0.7 eV above the Fermi level. The band structure of this structure exhibits a combination of some flat bands and some steep bands in the vicinity of the Fermi level along the \(\Gamma\rightarrow\) A direction. The simultaneous occurrence of flat and steep bands near the Fermi level has been suggested as essential to superconducting behavior. In addition, the B-rich ternary compounds (_Imma_-CaB\({}_{12}\)C\({}_{2}\), \(C\)2/_m_-CaB\({}_{13}\)C\({}_{2}\), and \(P\overline{4}n2\) -Ca\({}_{2}\)B\({}_{24}\)C) show semiconducting characteristics. The band gaps of \(C\)2/_m_-CaB\({}_{13}\)C\({}_{2}\), _Imma_-CaB\({}_{12}\)C\({}_{2}\), and \(P\overline{4}n2\)-Ca\({}_{2}\)B\({}_{24}\)C are 0.799, 1.406, and 1.822 eV, respectively. Details of the electronic band structures of newly predicted B-rich compounds are shown in Supplemental Material.
Figure 4 presents the phonon band structure, phonon PDOS, Eliashberg spectral function \(\alpha^{2}\)F(\(\omega\)), and integrated electron-phonon coupling constant \(\lambda\)(\(\omega\)) of the _Amm_2-Ca\({}_{2}\)BC\({}_{11}\), _Amm_2-CaBC\({}_{5}\), and \(P\bar{6}2m\)-CaB\({}_{3}\)C\({}_{3}\). The absence of imaginary of phonon frequencies establishes the dynamical stabilities of these predicted structures. For the three compounds, the low-frequency range (below 20 meV) is dominated by Ca atoms and the vibration of Ca atoms extend to 60 meV. The frequency range above 60 meV is dominated by B and C atoms. The _Amm2_-Ca\({}_{2}\)BC\({}_{11}\) and _Amm_2-CaBC\({}_{5}\) have similar distributions of Eliashberg spectral function, which lead to three steps in the integrated \(\lambda\)(\(\omega\)). Vibration of Ca atoms mainly contribute to the first step of the integrated \(\lambda\)(\(\omega\)), which are 51.8% and 55.6% of the total \(\lambda\) for _Amm_2-Ca\({}_{2}\)BC\({}_{11}\) and _Amm_2
CaBC\({}_{5}\), respectively. The intermediate-frequency range (20-100 meV) and high-frequency range (150-180 meV) approximately contribute to 30% and 10% the total \(\lambda\), respectively. The distribution of Elaishberg spectral function of the \(P\overline{6}2m\)-CaB\({}_{3}\)C\({}_{3}\) is apparently different from that of the \(\mathit{Amm}2\)-Ca\({}_{2}\)BC\({}_{11}\) and \(\mathit{Amm}2\)-CaBC\({}_{5}\). There are four steps in the integrated \(\lambda(\omega)\). The low-frequency range (0-20 meV) dominated by Ca atoms only contributes 20.2% to the total \(\lambda\). The frequency range located at 20-60 meV, which reflects strong interaction between Ca, B, and C atoms, contributes 31.9% to the total \(\lambda\). The high-frequency range (80-140 meV) coming from B and C atoms contribute 36.9% to the total \(\lambda\). Especially, the phonon vibration around 90 meV strongly couples with \(\sigma\) electrons of B and C atoms.
According to the Allen-Dynes modified version of the McMillan equation:[44; 45]
\[T_{c}=\frac{\omega_{log}}{\texttt{1.2}}\exp\,\left[-\frac{\texttt{1.04(1+ \lambda)}}{\lambda-\mu^{*}(1+0.62\lambda)}\right],\]
we estimate the \(T_{\text{c}}\) of Ca-B-C system at ambient pressure with the Coulomb pseudopotential \(\mu^{*}\) of 0.15, as shown in Fig. 4(d). The predicted \(T_{\text{c}}\) of \(R\overline{3}m\)-CaC\({}_{6}\) is 11.7 K which is in excellent consistent with the experimental data (11.5 K).[16] With increasing of B contents from the \(R\overline{3}m\)-CaC\({}_{6}\), the \(\mathit{Amm}2\)-CaBC\({}_{11}\), to the \(\mathit{Amm}2\)-CaBC\({}_{5}\), the \(\lambda\) monotonically decreases from 0.853 to 0.766 and to 0.655, Correspondingly, \(T_{\text{c}}\) decreases from 11.75 K to 8.92 K and to 5.19 K. This comes from the similar Eliashberg distributions of these three compounds. \(P\overline{6}2m\)-CaB\({}_{3}\)C\({}_{3}\) possesses the largest \(T_{\text{c}}\) (26.05 K) among the four compounds. The high \(T_{\text{c}}\) stems from the strong coupling between \(\sigma\)-bonding and B and C vibration in the frequency range of 87 and 95 meV. It is worth noting that the \(T_{\text{c}}\) of \(R32\)-CaB\({}_{3}\)C\({}_{3}\) is estimated to be 26.73 K, which is agreement with \(T_{\text{c}}\) of 28.2 K obtained by Chen.[25]
## 4 Conclusion
In summary, we search for low-energy Ca-B-C ternary compounds using an efficient framework which combines ML high-throughput screening with accurate first-principles calculations, and explore the possible superconductivity of these new compounds at ambient pressure. Four new stable (_Amm_2-CaBC\({}_{5}\), _Amm_2-CaB\({}_{2}\)C\({}_{4}\), _Imma_-CaB\({}_{12}\)C\({}_{2}\), and \(C\)2/_m_-CaB\({}_{13}\)C\({}_{2}\)) and three new metastable (_Amm_2-Ca\({}_{2}\)BC\({}_{11}\), _P_\(\overline{6}\)2\(m\) -CaB\({}_{3}\)C\({}_{3}\), and _P_\(\overline{4}\)_n_2 -Ca\({}_{2}\)B\({}_{24}\)C) calcium borocarbides are revealed. Layered _Amm_2-Ca\({}_{2}\)BC\({}_{11}\), _Amm_2-CaBC\({}_{5}\) and _P_\(\overline{6}\)2\(m\) -CaB\({}_{3}\)C\({}_{3}\) are predicted to be phonon-mediated superconductors, and _P_\(\overline{6}\)2_m_-CaB\({}_{3}\)C\({}_{3}\) possesses the highest _T\({}_{\rm c}\)_ of 26.05 K among the three compounds. _Amm_2-CaB\({}_{2}\)C\({}_{4}\), \(C\)2/_m_-CaB\({}_{13}\)C\({}_{2}\), _Imma_-CaB\({}_{12}\)C\({}_{2}\), and _P_\(\overline{4}\)_n_2-Ca\({}_{2}\)B\({}_{24}\)C are semiconductors, which have band gaps of 0.083 eV, 0.799 eV, 1.406eV, and 1.822 eV, respectively. The stable and metastable structures of the Ca-B-C system have significant implications for alkali and alkaline metal borocarbides. The ML-guided approach opens up a way for greatly accelerating the discovery of new high-_T\({}_{\rm c}\)_ superconductors.
## Acknowledgements
C. Zhang was supported by the National Natural Science Foundation of China (Grants No. 11874318). Work at Ames National Laboratory was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division including a grant of computer time at the National Energy Research Supercomputing Center (NERSC) in Berkeley. Ames National Laboratory is operated for the US DOE by Iowa State University under Contract No. DEAC02-07CH11358. |
2302.06358 | Anticipating Next Active Objects for Egocentric Videos | This paper addresses the problem of anticipating the next-active-object
location in the future, for a given egocentric video clip where the contact
might happen, before any action takes place. The problem is considerably hard,
as we aim at estimating the position of such objects in a scenario where the
observed clip and the action segment are separated by the so-called ``time to
contact'' (TTC) segment. Many methods have been proposed to anticipate the
action of a person based on previous hand movements and interactions with the
surroundings. However, there have been no attempts to investigate the next
possible interactable object, and its future location with respect to the
first-person's motion and the field-of-view drift during the TTC window. We
define this as the task of Anticipating the Next ACTive Object (ANACTO). To
this end, we propose a transformer-based self-attention framework to identify
and locate the next-active-object in an egocentric clip.
We benchmark our method on three datasets: EpicKitchens-100, EGTEA+ and
Ego4D. We also provide annotations for the first two datasets. Our approach
performs best compared to relevant baseline methods. We also conduct ablation
studies to understand the effectiveness of the proposed and baseline methods on
varying conditions. Code and ANACTO task annotations will be made available
upon paper acceptance. | Sanket Thakur, Cigdem Beyan, Pietro Morerio, Vittorio Murino, Alessio Del Bue | 2023-02-13T13:44:52Z | http://arxiv.org/abs/2302.06358v5 | # Anticipating Next Active Objects
###### Abstract
This paper addresses the problem of anticipating the next-active-object location in the future, for a given egocentric video clip where the contact might happen, before any action takes place. The problem is considerably hard, as we aim at estimating the position of such objects in a scenario where the observed clip and the action segment are separated by the so-called "time to contact" (TTC) segment. Many methods have been proposed to anticipate the action of a person based on previous hand movements and interactions with the surroundings. However, there have been no attempts to investigate the next possible interactable object, and its future location with respect to the first-person's motion and the field-of-view drift during the TTC window. We define this as the task of Anticipating the Next ACTive Object (ANACTO). To this end, we propose a transformer-based self-attention framework to identify and locate the next-active-object in an egocentric clip. We benchmark our method on three datasets: EpicKitchens-100, EGTEA+ and Ego4D. We also provide annotations for the first two datasets. Our approach performs best compared to relevant baseline methods. We also conduct ablation studies to understand the effectiveness of the proposed and baseline methods on varying conditions. Code and ANACTO task annotations will be made available upon paper acceptance.
## 1 Introduction
The widespread use of wearable cameras prompted the design of egocentric (first-person) systems that can readily support and help humans in their daily activities, by augmenting their abilities [6, 7, 20]. In order to assist users, a fundamental problem is to predict, forecast and even anticipate what the person will do in the next few second(s). Among all the possible tasks, one of the most relevant is to understand from an egocentric video stream, which object a user will interact with or manipulate in the near future. Besides, it is not just enough to localize the next-active-object (_NAO_) but also to model the motion and Field-of-View (FoV) drift till the contact with the object actually happens. Solving this task can help to gain more understanding about the future activity of the person as well as the usage of the objects. However, compared to other tasks performed with egocentric videos, anticipating interactable objects is notably challenging since humans interact with the environment based on their final goals and the responses
Figure 1: The goal of our work is to anticipate the next-active-object, i.e. to localize the object that the person will interact with in the first frame of an action segment, based on the evidence of video clip of length \(\tau_{o}\), located \(\tau_{a}\) seconds (anticipation time) before the beginning of an action segment at time-step \(t=\tau_{s}\).
they get from the environment. On the other hand, performing this task is useful, for example, by doing so a robot can prevent a collision between object(s) and human(s) in a warehouse by analysing the past observation and estimating the future point of contact or provide support in human-robot interactions for instance in factories where objects are also moving to anticipate the contact location based on robot movement _wrt_ objects.
In this paper, we call this task _"Anticipating Next ACTive Object"_ (ANACTO) by following the nomenclature of the most recent literature [16]. In [16], _NAO_ is defined for the object which is identified in the _last observed frame_. Instead, our ANACTO task further expands this definition to formulate the motion and FoV drift of the interactant to anticipate the _NAO_ at its contact point. According to [29], active objects are those which are in contact (usually with hands) with the first-person. However, our work focuses on the localization of the _NAO_ after a certain time at its contact point _before_ any interaction(s) begin, as shown in Figure 1. We have past evidence from observed video clip segment of length \(\tau_{o}\), which precedes the actual action by a _time to contact window_\(\tau_{a}\). We define ANACTO as the task of predicting the bounding box of _NAO_ involved in the action in its starting frame(s) at its contact point (\(t=0\)). Notice that, ANACTO task refers to not only detect/localize the NAO in the last observed frame (which is the case for Ego4D's Short-term anticipation (STA)) but also anticipating the final location of NAO at which the contact/interaction actually happens even in much later upcoming frames. Instead, Ego4D STA does not aim to identify the final interaction with the object. In Ego4D STA, it is assumed that object are static because of the fact that the last observed frame is considered only.
We propose to address the ANACTO task by exploring the combination of object-centered and human-centered cues while leveraging the self-attention mechanism of vision transformers (VIT) [9]. In detail, the proposed method analyzes RGB frames to gain an understanding of hand's position and their motion without explicitly using hand information. At the same time, it exploits an object detector to include spatial positioning of objects in the observed clip. Since ego-actions are mainly characterized as the interaction between the user's hands and objects in the scene, we claim that VIT's self-attention is a good candidate for capturing such relationships, both on frame-level and across frames. Indeed, the correctness of this claim is shown by quantitative (which also includes comparisons with several relevant methods) and qualitative analysis. The main contributions of this work are the following:
* A new task called Anticipating the Next ACTive Object (ANACTO) in egocentric videos is introduced.
* A novel method to address ANACTO, which is based on vision transformers, encoding the interactions between the first-person and the objects, and accounting for the time to contact window, is proposed.
* Existing action anticipation state-of-the-art (SOTA) methods are extended to perform ANACTO task.
* Our method as well as the SOTA are benchmarked on EpicKitchens-100 [4] (EK-100), EGTEA+ [25] and Ego4D [16] datasets. The performance comparisons among all methods prove the effectiveness of the proposed method in all cases. For the EK-100 and EGTEA+ datasets, we also provide annotations for the ANACTO task.
## 2 Related Work
We first review studies on egocentric action anticipation, since our problem follows a similar approach. Yet, instead of action classification, we focus on regressing the location of _NAO_. Then, we review the definition of "active" objects, which are also closely related to _NAO_, and then investigate the existing works on it.
**Action Anticipation in Egocentric Videos.** Action anticipation is the task of predicting future actions _before_ they occur. The anticipation problem has been well-explored for various actions from _third person videos_[23, 24, 17, 18, 11, 14, 1, 24, 38, 33]. Instead, its application in _first-person videos_, which is formalized in [5], has only recently gained popularity [26, 13, 28, 8] due to its applicability on wearable computing platforms [15]. We discuss works that are closely related to our anticipation task such that perform short-term (i.e., "recent", see [35] for its definition) egocentric action anticipation since we have evaluation protocols and datasets in common.
Lui et al. [26] define the egocentric action anticipation problem in terms of human-object interaction forecasting, in which the hand movement is used as a feature representation to predict the egocentric hand motion, interaction hotspots and the future action. Dessalene et al. [8] perform hand-object contact and activity modeling to anticipate partially observed and/or near future action. For hand-object contact modeling, the short-term dynamics is learned with 3D Convolutions. The localization of boundaries between the hands and objects in contact is performed by applying segmentation through a U-Net [34]. The activity modeling stage embeds the output of contact modeling through Graph Convolutional Network (GCN) layers [22] and then fed to an LSTM, which is followed by a fully-connected layer to make action predictions. On the other side, there exist methods relying on the aggregation of the information from the past frames in an observed video clip [13, 28]. For example, [13] propose RU-LSTM, a method composed of a "rolling" LSTM (R-LSTM) encoding the past observations, and the "unrolling" LSTM (U-LSTM) taking over the cur
rent hidden and cell states of the R-LSTM and producing hypotheses of future actions. Differently, the model in [28] uses a predictive model (a CNN) and a transitional model (a CNN pre-trained on action recognition). The predictive model directly anticipates the future action while the transitional model is constrained to the output of the currently happening action that is later on used to anticipate the future actions. Recently, [15] presented an architecture based on transformers to encode the data performed by the backbone and predict the future actions performed by the head network. [15] achieves superior results compared to [13] and shows the better performance of transformer backbone with respect to using many other backbones such as TSN [39] and Faster R-CNN [31]. Compared to [15], our transformer based architecture additionally aims to exploit the object-centric features with spatial and temporal attention along with two losses introduced to model past observation and learn about active object(s) to anticipate _NAO_ at its contact point using an autoregressive decoder.
Since our ANACTO task is novel, to obtain relevant baselines to compare with, we have modified several action anticipation SOTA tested on egocentric videos [26, 13, 15] and tested on third-person videos [39] (we include [39] due to its promising results demonstrated in [15] for egocentric settings). For the baselines [26, 39], we append our decoder (see 3.3 for its definition) to aggregate the frame level information gathered from their backbone in order to perform the ANACTO task. In terms of encoder design, as we propose a transformer based architecture, our method differs from [26, 13, 39] which are based on I3D-Res50 [3], LSTMs [41], and Temporal Segment Networks, (i.e., Spatial and Temporal ConvNets), respectively.
**Active Objects.** For the first time, [29] defined _active_ and _passive_ objects in an egocentric setup. Their method is based on the appearance differences among the objects (e.g., an opened fridge is an active object which looks different from a closed fridge called a passive object), and the location of the active object (i.e., active objects tend to appear close to the center of an egocentric image). By definition, active objects are those, which are currently involved in an interaction, e.g., being touched by humans, whilst, the passive objects are the background objects that the human agent is not in an interaction with, e.g., not manipulating them [29]. Dessalene et al. [8] adapted these definitions to describe _NAO_, which stands for the object that will be contacted with a hand. Their method requires the visibility of the _NAO_ and the existence of the hands in the current frames. It was also only tested when some specific action classes (take, move, cut and open) were considered. Instead, our method processes the frames independent to the hand(s) visibility or presence in the current frames. Importantly, we do not specifically restrict the possible (inter)actions between the human and the objects, i.e., we use all the verb classes supplied by the benchmark datasets. [19] also explored _NAO_ prediction using cues from visual attention and hand position, but by only using a single frame for the prediction. That approach [19] is not able to differentiate between the past or future active object, since it does not account for the temporal information acquired by the videos. Furnari et al. [12] also explored the _NAO problem_ by taking into account the active/passive objects definition of [29]. Their method [12] uses an object tracker to extract the object trajectories for a small video clip till the last frame precedes an action. This trajectory is later used to classify whether a given object is going to be active or passive in next frame. Such methodology [12] is restricted to predicting the _immediate NAO_ instead of predicting the location of the active objects in several future frames as our proposed method can do. Moreover, it requires an observation time which is till the penultimate frame of an action segment, which is unpredictable in real-life implementations. Very recently, Liu et al. [27], proposed a similar setup but to forecast hand trajectories for interaction hotspots on next-active-objects, i.e. confining to human hands interactions. Instead, our setup is more generic, e.g., can include interactions of robot.
## 3 ANACTO in Egocentric Videos
In this section, we first formalize the ANACTO problem, and then we provide details about the proposed model. Specifically, let \(V\) be a given video clip, we split the video clip into three sequential parts: the observed segment of length \(\tau_{o}\), the time to contact (TTC) window of length \(\tau_{a}\) and a given action segment which starts at timestep \(t=\tau_{s}\). The goal is to localize _NAO_ at the beginning of an action segment at timestep \(t=\tau_{s}\) where the contact happens, using \(\tau_{o}\) length observed video clip \(\tau_{a}\) seconds _before_ the beginning of the action segment involving _NAO_ (see Fig. 1). In other words, ANACTO is a combination of two tasks merged into one: (1) identifying _NAO_ from past observed segment, and (2) by using the past observation(s), modelling the motion of a person to estimate _NAO_'s location after the TTC window where actual contact happens. Notice that, this definition assumes that for every action to be performed, a person interacts with an object either with their hands or with a tool such that the object becomes active at the starting point of the action. Therefore, our problem description is not bounded with "hand"-object interactions only, consequently our approach does not include/require the detection of hands (e.g. the physical interactions can be performed by a tool as well).
### Proposed Method: T-Anacto
We propose a method, which regresses the location of _NAO_ from egocentric videos by analyzing the past video
frames, and incorporating object detections for the input frames. Object detections refer to the location of the object bounding box (\(x_{c}\), \(y_{c}\), \(w\), \(h\)), and a confidence score (\(c_{s}\)) produced by the detector. Fig. 2 illustrates the proposed method.
The proposed method (called T-ANACTO stands for Transformer-based Anticipating Next ACTive Object) leverages the self-attention mechanism of VIT to construct a _encoder_ network that operates on individual frames or short clips, followed by a transformer _decoder_. The _T-ANACTO encoder_ consists of a vision transformer (VIT) [9] and an object detector [31] which are used to extract the feature embeddings from each video frame. Our decoder is inspired from [15], such that we exploit its _causal_ structure - to tackle a predictive task based on past observations and make it autoregressive for an egocentric setting. This model choice was supported by the fact that Transformer-based end-to-end attention methods are efficient not only in recognizing actions in given-video segments, but also in predictive video modelling. There also exist promising results in anticipation and object detection-based tasks on static images [2, 15, 10, 21, 32]. The _T-ANACTO decoder_ aggregates the information acquired in temporal dimension to collectively understand the first-person's movements with a final goal of predicting the location of the _NAO_. Herein, we also introduce 2 losses to enforce the model to attend to past active objects to predict for NAO in future frames based on previous observations.
### T-ANACTO Encoder
The encoder of our model consists of an object detector [31] (called as object detection head, \(H_{o}\)) combined with a VIT [9] (i.e., a video backbone). The object detector identifies the positions of the objects, while VIT analyzes a RGB video frame to understand the context. Different from [15], we demonstrate the importance of object-centric features with temporal attention along with two losses introduced to model past observation and anticipate future contact point, described in detail below.
Given a video clip \(V\) = {\(X_{1},X_{2},\ldots X_{T}\)} with \(T\) frames, where \(X_{t}\) is the RGB image at time step \(t\) and an action segment, we trim the video clip into: (1) observed segment length of \(\tau_{o}\), (2) TTC window (\(\tau_{a}\)), before the beginning of the action segment at \(t=\tau_{s}\). Frames from the observed segment are then sampled at a frame rate which is equal to \(\tau_{a}\) to maintain consistency between frame intervals as described in Fig. 3. Each frame extracted from the observed segment is an input of an individual T-ANACTO Encoder. Our object detection head \(H_{o}\) follows a Faster R-CNN [31] architecture and consists of a region proposal network and a regression head. It takes as input each RGB frame \(X_{t}\) and generate bounding boxes \(b_{i,t}\in\mathbb{R}^{4}\) with corresponding confidence score \(cs_{i,t}\in(0,1)\) such that:
\[b_{i,t},cs_{i,t}=H_{o}(X_{t}),\ \ i\in\{1,\ldots N\}, \tag{1}\]
where \(N\) is the total categories of objects for a dataset. For
Figure 2: Our T-ANACTO model is an encoder-decoder architecture. Its encoder is composed of an _object detector_ and a _Vision Transformer_[9]. The object detector [31] takes an input frame (e.g., size of 1920\(\times\)1080) and predicts the location of objects in terms of bounding boxes (\(x\), \(y\), \(w\), \(h\)) and detection confidence scores (\(c\)). The input of VIT are the frame(s), first resized to, 224\(\times\)224 and then divided into the patches (16\(\times\)16). The object detections (\(x\), \(y\), \(w\), \(h\)) are also converted to match the scaled size of the frame (i.e., 224\(\times\)224), reshaped, and are then passed through a MLP to convert it into the same dimension as the embeddings from the transformer encoder, which are later concatenated together to be given to the decoder. There exist a linear layer between the decoder and the T-ANACTO encoder, which adjusts the feature dimensions to be fed to the transformer decoder. Transformer decoder uses temporal aggregation to predict the next active object. For each frame, the decoder aggregate the features from the encoder for current and past frames along with the embeddings of last predicted active objects and then predicts the next active object for the future frames.
a category, detections with the highest confidence score are used.
The object detections are performed for the original size of the image frames, \(X_{t}\) (e.g. \(1920\times 1080\)) and then the bounding boxes are scaled to match the resized image size, \(X_{t}^{T}\) of \(224\times 224\) to match with the input size of VIT [9]. The detections are then reshaped \((BS,N,5)\xrightarrow{}(BS,-1)\) to be passed through an MLP, \(f_{MLP}\) to convert them to the same dimensions as the T-ANACTO encoder's output.
For our video backbone \(V_{b}\), we adopt ViT-B/16 using \(224\times 224\) images, where \(X_{t}^{T}\) is an image at a time \(t\). We split each input frame into \(16\times 16\) non-overlapping patches, which are later flattened into a 256-dimensional vector. The vector representation is then projected to a 768-dimensional vector to be used as the input for our transformer encoder. The feature dimensions are kept constant throughout the encoder. We also append a learnable _[cls]_ token in the patch features, which can later be used to identify active object(s) label in the current frame, if any. All the other patches are also allocated a spatial positional embedding with their patch embedding. The resulting patch embeddings are then passed through a standard VIT Encoder with pre-norm. Finally, the feature representations learnt for each frame from the visual backbone are concatenated with the detections obtained from the object detection head as follow:
\[z_{t}=V_{b}(X_{t}^{r})+f_{MLP}(H_{o}(X_{t})). \tag{2}\]
In the end, we add a temporal position encoding to the extracted features from the T-ANACTO encoder for each frame, which are further given to the decoder network.
### T-ANACTO Decoder
We argue that the past observations can provide a lot of context to produce hypothesis regarding the _NAO_. Therefore, for the decoder network, we take inspiration from [15], and extending it to make it autoregressive at each step, to aggregate the features of the past frames and exploit the last predicted active object location which allows us to perform ANACTO.
The decoder network \(D\) is designed to produce attentive features corresponding to the future frames: \(\hat{z_{1}},\dots,\hat{z_{t}}\) to anticipate the location of the _NAO_ for each input frame as: \(\hat{z_{t}}=D(z_{0},\dots,z_{t};\hat{h_{0}},\dots,\hat{h_{t-1}})\) (see also Fig. 2). Here \(\hat{z_{t}}\) is the predicted features of the _future frame_ at t+1 obtained after attending to all other encoded features belonging to the frames before t+1 (i.e., \(z_{0},z_{1}..z_{t}\)). At each frame, the decoder takes the previously predicted active object location \(\hat{h_{t}}\) in previous frames along with RGB features to estimate the next-active-object position, \(\hat{y_{t}}\) in future frames. Both these features are concatenated together and are then fed to the next step. This helps in aggregating features of the past frames and understanding the intention and final goal of the first-person, which is defined by the action segment ground-truth label. These features are passed through multiple decoder layers, each consisting of masked multi-head attention, LayerNorm (\(LN\)), and a multi-layer perceptron (MLP) as in [30]. The final output is then passed through another \(LN\) to obtain the final embeddings. For each decoder output \(\hat{z_{t}}\), it is used to regress the _NAO_ in the corresponding frame at t+1. The predicted features are then fed to a linear layer \(\theta\), to regress the bounding box coordinates \(\hat{y_{t}}\in\mathbb{R}^{8}\), _i.e._\(\hat{y_{t}}=\theta(\hat{z_{t}})\). The final prediction \(y_{t}\) represents the model's output at each frame.
### Loss Calculation
To train T-ANACTO, we sample a clip preceding each labeled action segment in a given dataset, ending \(\tau_{a}\) seconds before the start of the action. The clip is then sampled with the same frame rate as \(\tau_{a}\) seconds to remain consistent with frame intervals as described in Fig. 3. The sampled frames are then passed through our T-ANACTO model and train the network in a supervised manner with three loss functions, described as follows.
\(L_{feat}\) defined in Eq. 3 aims at leveraging the predictive structure of the model by supervising the future frame features predicted by the decoder to match the true future frame features that are extracted as embeddings from the encoder.
\[\mathcal{L}_{feat}=\sum_{t=0}^{N}||\hat{z_{t}}-z_{t+1}||_{2}^{2}, \tag{3}\]
where \(N\) is the number of frames in training. It is to be noted that our model does not need the presence of hand or any active object to be present in the observed segment. However, any active object found in the observed segment provides additional supervision using \(L_{cao}\), stands for current active object loss, is a Mean Squared Error (MSE) Loss used for the prediction of active objects _in the observed segment of the video clip_. In addition, \(\mathcal{L}_{nao}\), stands for the next-active-object loss, forces the model to identify the location of the _NAO at the start of an action_.. It is supported by, \(\mathcal{L}_{cao}\) which helps T-ANACTO to identify and keep track of
Figure 3: The observed video segment of length \(\tau_{o}\) is sampled at a frame rate equal to the TTC time (shown as \(\tau_{a}\)) to maintain consistency in (1) the frame interval of sampled frames and (2) between the last observed frame and the starting frame of the action segment, which starts at \(t=\tau_{s}\).
active object(s) found at the end of the observed video segment.
\[\mathcal{L}_{cao}=\sum_{t=0}^{N-1}||y_{t}-\hat{y_{t}}||^{2},\mathcal{L}_{nao}=||y _{n}-\hat{y_{n}}||^{2}, \tag{4}\]
where \(y_{t}\in\mathbb{R}^{8}\) and \(\hat{y_{t}}\in\mathbb{R}^{8}\) are the ground-truth and predicted bounding boxes for active objects in _the current frame_, respectively. Whereas \(y_{n}\in\mathbb{R}^{8}\) and \(\hat{y_{n}}\in\mathbb{R}^{8}\) are the ground-truth and predicted bounding box for _NAO_ in _the starting frame_ of an action after \(\tau_{a}sec\), respectively. The final loss is a linear combination of the aforementioned three losses:
\[\mathcal{L}=\mathcal{L}_{feat}+\lambda_{1}\mathcal{L}_{cao}+\lambda_{2} \mathcal{L}_{nao}, \tag{5}\]
where \(\lambda_{1}\), \(\lambda_{2}\) are fixed weights.
## 4 Experimental Analysis
The experimental analyses were conducted on three major egocentric video datasets, described in Sec. 4.1. As this is the first time ANACTO task is being benchmarked, there is no existing method performing it. Therefore, we adapted the SOTA action anticipation methods to perform comparisons in Sec. 4.2. We described the implementation details of T-ANACTO in Sec. 4.3.
### Datasets
**EK-100 [4].** consists of about 100 hours of recordings with over 20M frames comprising daily activities in kitchens, recorded with 37 participants. It includes 90K action segments, labeled with 97 verbs and 300 nouns (i.e. manipulated objects). It supplies the annotations regarding the hand and object interactions, which are used for ANACTO. In detail, the aforementioned annotations are in terms of the prediction results of a hand-object interaction detector [36], which provides the hand location, side, contact state, and a bounding box surrounding the object that the hand is in contact. Such detector [36] was trained on EK-55 [5], EGTEA [25] and CharadesEgo [37] datasets, and applied on EK-55 [5] dataset to annotate it with respect to the hand-object interactions. We use the following annotations: the locations of both hands (i.e., the bounding boxes \(b\in\mathbb{R}^{8}\)), and the locations of the objects along with the contact state information at each frame of each video and, then curate the final ground-truth data for ANACTO problem. It is important to mention that the videos in this dataset were collected with different frame rates. In order to apply the methods: [13, 39, 26] requiring frame rates fixed to 30 frame per second, we converted each video to this constant frame rate, thus the annotations regarding the hand locations and active objects' locations are also interpolated accordingly.
**EGTEA+ [25].** includes 28 hours of videos containing 106 action categories, which corresponds to 2.4M frames. There exist 10325 action segments associated to 19 verbs and 53 nouns (i.e., objects) that were recorded with 32 participants. It is important to notice that yet there exist no publicly available source supplying annotations needed to perform ANACTO for EGTEA+. Therefore, we created the hand-object interaction annotations following the annotation pipeline of EK-100 [4] dataset. These include: the hand locations (bounding boxes \(b\in\mathbb{R}^{8}\) and the corresponding detection confidence scores) at each frame, the active object locations and their contact state. First, all the videos are converted to a constant frame rate of 30 fps. Then, each frame is fed to the hand-object interaction detector model from [36]. The hand and object threshold is kept at 0.5 to produce better qualitative results, which is also the same when extracting the annotations for EpicKitcen-100 dataset [4]. Additionally, we provide annotations for the videos with original frame rate for its original frame size.
**Ego4D [16].** This is the largest first-person dataset recently released. The dataset is split into 5 different categories, each focusing on a different task, combining for a total of 3,670 hours of egocentric videos across 74 locations. For this task, we focus on the forecasting split, containing 1000 videos for a total of 960 hours, annotated at 30 fps for the short term interaction anticipation task. The annotations are for the _NAO_ in the _last observed frame_.
### Baseline Methods
We compare T-ANACTO with SOTA action anticipation methods, namely AVT [15], RULSTM [13], Liu et al. [26] and TSN [39]. For RULSTM [13], we used pre-extracted RGB, flow and object features as in their paper, for EK-100 and EGTEA+. For Ego4D, we computed the flow and RGB features by following the same TSN model mentioned in [13], which were then fed as the inputs to the RULSTM model. We also tested individual modalities with TSN [39] (ResNet101) for RGB frames and RULSTM-object centric path for object modality. Moreover, we used object detections as well as their confidence score from the object detector [31] to be used as object features in RULSTM(fusion) and RULSTM(obj). We modified and re-trained all these aforementioned methods in order to perform ANACTO task. We explored these methods (noticed that they were used for action anticipation in egocentric videos, a.k.a. a _classification task_) because our problem formulation is very much related to action anticipations, and we claim that these methods can provide effective learning for ANACTO _regression task_ by modelling past motion. For each model, we replace the last classification layer with a regression layer to predict the bounding boxes \(\hat{y_{n}}\in\mathbb{R}^{8}\) regarding the next ac
tive object. Since TSN [39] method processes individual frames and not a video clip, for the corresponding experiments, we appended the whole T-ANACTO decoder layer to the TSN [39] method allowing the aggregation of information from all frames (i.e., tuning the task from frame-level processing to video processing). Throughout this paper, we refer to these methods as _baselines_.
### Implementation Details of T-ANACTO
T-ANACTO was trained with an SGD optimizer for 50 epochs with a learning rate of \(1e-5\). Recall that a linear layer exist after the output of the decoder to regress the bounding box coordinates \(\hat{y}_{t}\in\mathbb{R}^{8}\) (here the results are in \(\mathbb{R}^{8}\), confining to a single active object for each hand. We kept the values of \(\lambda_{2}\) as 1.0 and \(\lambda_{1}\) as 0.5 (see Eq. 5) respectively, while training T-ANACTO. Also, and the weight for feature loss is set to 1.0 For training and testing, our model takes 10 sampled frames as input and takes 1s to process a batch of 4 clips during inference. We keep the required input number of frame for each baseline method as proposed in their original paper.
We used annotations from [36] detector for identifying active objects in observed segment and to train the model for all datasets with \(\mathcal{L}_{cao}\) loss. Specifically, for EK-100 [4] and EGTEA+ datasets, during training, we maintained a lookup window of 10 frames from starting frame of action to look for first identified location of active objects _i.e; bounding boxes_ (if visible) to be labeled as ground truth for ANACTO task. It is also possible that for some clips, _true contact_ i.e., the actual interaction with an object can start sometime later after our lookup window. For those cases, we do not get bounding box labels for the location of active object. This means no object was actually active during the start of the action segment. However, we checked whether if this situation does lead to any inconsistency and observed that an active object is present _94% and 92%_ of the times in the first 10 frames of the action segment for the EK-100 and EGTEA dataset, respectively. It is important to notice that EK-100 and EGTEA do not supply object detections. As mentioned before, to obtain this information, we rely on Faster-RCNN [31] provided by [4] pre-trained on EK-55 [5] to detect the location of every object in the scene with a confidence score associated with each prediction \(b\in\mathbb{R}^{5}\). For both datasets, we use the training and test splits provided by [13] for the evaluations of the T-ANACTO and the baseline methods. On the other hand, for Ego4D, we used the forecasting split for training and validation provided by [16]. It is important to notice that the annotations provided for _NAO_ are _wrt._ only _the last observed frame_. As Ego4D is highly big-scaled, it was not possible to annotate it as we performed for other datasets. Therefore, we utilized only the supplied data as the ground-truth. On the other hand, this allowed us to show another utility of the ANACTO task, i.e., its setup also works for the model(s) to forecast _NAO_ in the last observed frame.
## 5 Results
As the evaluation metrics, Average Precision (\(AP\)) with various IoU thresholds: 5, 10, 20 and 50 as well as their average shown as \(AP_{avg}\) were used.
**The Effect of Losses and The Backbone.** We first present an ablation study to evaluate losses given in Eq. 5 as well as testing a different backbone (i.e., ResNet101, notice that this is the backbone used by TSN [39]) while keeping the other settings of T-ANACTO the same. The corresponding results are given in Table 1, when the experiments were performed on EK-100 dataset and the anticipation length \(\tau_{a}=0.25s\). As seen, using the transformer backbone compared to ResNet101 improves the results for all cases (ResNet101 vs. T-ANACTO w/ \(\mathcal{L}_{nao}\) and ResNet101 vs. T-ANACTO w/ \(\mathcal{L}_{cao}\)+\(\mathcal{L}_{nao}\)). Moreoever, \(\mathcal{L}_{cao}\) brings in important performance improvements to the ANACTO task, highlighting the importance of using the object-centric features.
**Effect of Anticipation Length.** We compare the performances of T-ANACTO and the baseline methods for various anticipation lengths for the ANACTO task in the unobserved scenes. This set of experiments was realized on EK-100 dataset [4] and the corresponding results are given in Table 2. It is important to mention that since we keep the total number of sampled frames from a given observed clip as constant throughout the experiments, the change in anticipation time \(\tau_{a}\) also changes the observed length \(\tau_{o}\) of the clip. In other words, in these sets of experiments, the decrease in anticipation length \(\tau_{a}\) also reduces the respective observed length \(\tau_{a}\) of time duration. The results given in Table 2 show that changing the anticipation lengths from higher values to lower values (e.g., from 1s to 0.5s or from 0.5s to 0.25s), as expected, increases the performance of T-ANACTO as well as all baseline methods.
**Comparisons among T-ANACTO and Baselines.** Table 2 presents a performance comparison among T-ANACTO and baseline methods on EK-100 dataset. As seen, our method T-ANACTO surpasses all the other methods in all metrics, for all TTC durations, while the second-best method is chaining for different TTC durations. We also present comparisons on EGTEA+ and Ego4D datasets in Tables 3
\begin{table}
\begin{tabular}{|l|c c c c|c|} \hline Ablation & AP5 & AP10 & AP20 & AP50 & \(AP_{avg}\) \\ \hline ResNet101 & 0.31 & 0.28 & 0.17 & 0.02 & 0.20 \\ T-ANACTO w/ \(\mathcal{L}_{nao}\) & 0.33 & 0.29 & 0.19 & 0.02 & 0.21 \\ T-ANACTO w/ \(\mathcal{L}_{cao}\)+\(\mathcal{L}_{nao}\) (FULL) & **0.37** & **0.32** & **0.21** & **0.04** & **0.24** \\ \hline \end{tabular}
\end{table}
Table 1: Ablation study performed on EK-100 [4] to investigate the effect of losses and the backbones of T-ANACTO.
and 4, respectively, when the TTC duration \(\tau_{a}\) is 0.25s for EGTEA and rate of sampling frames is 0.25s for Ego4D. To do so, for EGTEA+, we used training and testing splits-1 (see [13] for details) and for Ego4D, the experiments were conducted with the training and validation splits provided for the forecasting task. As mentioned in Sec. 4.1, the _NAO_ for Ego4D is identified at the end of the past observed segment. Even for this setup, we notice that the attention-based mechanism elevated by object centric information performs better, compared to other baselines. The obtained results in the aforementioned tables are in line with the results obtained for the EK-100 dataset, showing that T-ANACTO outperforms the other baseline methods, while the performance improvement can be up to 12% in terms of \(AP_{avg}\).
Qualitative Results.We visualize the effective spatial attention by our T-ANACTO encoder on the last observed frame in Fig. 4 for EK-100 [4]. The red regions demonstrate the regions of interest to the model, which correspond to human-object interaction in the future frames and help in anticipating the _NAO_. The results show that our model learns to focus on objects which are likely to be in contact with human hands based on observation till last observed frame, and thus the inference can also be performed before the contact happens. Notice in the second column, even though the object is not active in the starting frame, our model learns to focus on a possible object which becomes active later. We also notice that the model performs equally well for different lighting conditions. Besides, it is also interesting to note that T-ANAACTO model is also able to identify human-interaction hotspots for an object in some case. In the Supp. Material, we provide more qualitative results of our model for identifying objects in last observed frame, different TTC \(\tau_{a}\), and discuss failure cases for the model.
## 6 Conclusions
We have investigated the problem of anticipating next active object localization. First, we discussed the formulation of the ANACTO task. We then presented a new vision transformer based model, T-ANACTO which learns to encode hand-object interactions with the help of an object detector. We proved its effectiveness by comparing it against relevant strong anticipation based baseline methods. The experimental evaluation highlights that: (1) the object-centered cues help in elevating the performance to locate the next possible active object; (2) the effectiveness of the model increases when the anticipation time for the prediction before the beginning of an action is kept short. Besides,
\begin{table}
\begin{tabular}{|l|c c c c|c|} \hline Models & AP5 & AP10 & AP20 & AP50 & \(AP_{avg}\) \\ \hline \hline AVT [15] & 0.38 & 0.28 & 0.12 & 0.02 & 0.20 \\ RULSTM [13] & 0.37 & 0.27 & 0.10 & 0.01 & 0.19 \\ TSN(rgb) [39] & 0.35 & 0.23 & 0.08 & 0.01 & 0.17 \\ RULSTM(obj) [13] & 0.34 & 0.21 & 0.08 & 0.01 & 0.16 \\ Liu et al. [26] & 0.15 & 0.11 & 0.07 & 0.01 & 0.08 \\
**T-ANACTO** & **0.41** & **0.31** & **0.18** & **0.04** & **0.24** \\ \hline \end{tabular}
\end{table}
Table 4: T-ANACTO and the baseline methods’ performances when they are tested on Ego4D dataset [16] to identify NAO _wrt_ last observed frame. Frames are sampled from observed segment at \(0.25s\). Best result of each column are given in bold.
\begin{table}
\begin{tabular}{|l||c c c c|c c c||c c c|c c c c|} \hline \multicolumn{1}{|l||}{Anticipation time} & \multicolumn{4}{c||}{\(\tau_{a}=1.0\) s} & \multicolumn{4}{c||}{\(\tau_{a}=0.5\) s} & \multicolumn{4}{c|}{\(\tau_{a}=0.25\) s} \\ \hline \hline Models & AP5 & AP10 & AP20 & AP50 & AP\({}_{avg}\) & AP10 & AP20 & AP50 & AP5 & AP10 & AP20 & AP50 & \(AP_{avg}\) \\ \hline \hline AVT [15] & 0.25 & 0.19 & 0.13 & 0.01 & 0.15 & 0.30 & 0.26 & 0.17 & **0.03** & 0.19 & 0.32 & 0.27 & 0.18 & 0.03 & 0.20 \\ RULSTM [13] & 0.27 & 0.21 & 0.14 & 0.02 & 0.16 & 0.29 & 0.24 & 0.15 & **0.03** & 0.18 & 0.31 & 0.25 & 0.16 & 0.03 & 0.19 \\ TSN(rgb) [39] & 0.17 & 0.12 & 0.07 & 0.00 & 0.09 & 0.20 & 0.16 & 0.08 & 0.01 & 0.11 & 0.25 & 0.19 & 0.11 & 0.01 & 0.14 \\ RULSTM(obj) [39] & 0.24 & 0.19 & 0.11 & 0.01 & 0.14 & 0.24 & 0.19 & 0.11 & 0.01 & 0.14 & 0.27 & 0.20 & 0.14 & 0.01 & 0.16 \\ Liu et al. [26] & 0.13 & 0.09 & 0.05 & 0.00 & 0.07 & 0.13 & 0.10 & 0.05 & 0.00 & 0.07 & 0.14 & 0.10 & 0.05 & 0.00 & 0.07 \\ \hline
**T-ANACTO** & **0.34** & **0.28** & **0.18** & **0.03** & **0.21** & **0.35** & **0.29** & **0.20** & **0.03** & **0.22** & **0.37** & **0.32** & **0.21** & **0.04** & **0.24** \\ \hline \end{tabular}
\end{table}
Table 2: Results of our T-ANACTO model and other baseline methods for different TTC duration, i.e., 1, 0.5 and 0.25 seconds, tested on the EK-100 [4]. Best result of each column are given in bold.
Figure 4: The top row shows the “last observed frame”, the middle row shows “the region of interest of T-ANAACTO”, and the bottom row shows “the starting frame of an action”. The green box(es) in the last row represent the location of _NAO_ bounding box in the starting frame(s) of action.
\begin{table}
\begin{tabular}{|l|c c c c|c|} \hline Models & AP5 & AP10 & AP20 & AP50 & \(AP_{avg}\) \\ \hline \hline AVT [15] & 0.19 & 0.16 & 0.10 & **0.02** & 0.12 \\ RULSTM [13] & 0.18 & 0.13 & 0.07 & 0.01 & 0.10 \\ TSN(rgb) [39] & 0.14 & 0.12 & 0.07 & 0.01 & 0.09 \\ RULSTM(obj) [13] & 0.15 & 0.12 & 0.06 & 0.01 & 0.09 \\ Liu et al. [26] & 0.11 & 0.08 & 0.05 & 0.01 & 0.06 \\
**T-ANACTO** & **0.26** & **0.21** & **0.14** & **0.02** & **0.16** \\ \hline \end{tabular}
\end{table}
Table 3: T-ANACTO and the baseline methods’ performances when they are tested on EGTEA+ dataset [25] with the TTC duration \(\tau_{a}=0.25s\). Best result of each column are given in bold.
we also discuss the effect of observation length on the performance of model(s). (3) Our model effectively learns to identify and allocate attention to possible action objects in the future, as realized from qualitative results. (4) Importantly, T-ANACTO is also able to detect _NAO_ location even in the last observed frame. Finally, we also supply the ANACTO task annotations for EGTEA+ and EK-100 datasets, i.e., hand and active object bounding box annotations along with their contact state as well as providing the object annotations for the entire dataset using an object detector pre-trained on EK-55 [5].
As future work, we will extend the ANACTO task to predict the dynamic TTC, noun and verb for _NAO_, and investigate the use of an object tracker with other human-centered cues such as gaze and the appearance of objects over time. We will also investigate the effect of action recognition on _NAO_ identification and localization.
### Anticipating Next Active Objects for Egocentric Videos: Supplementary Material
This supplementary material includes the visualization of the attention maps of our T-ANACTO encoder, which is given for different time to contact window (Section 7) for EpicKitchen [4, 5] and EGTEA [25] datasets. The attention maps provide an intuition on learning of T-ANACTO to identify "interactable" objects (i.e; possible next-active-object) in the scene and then model the motion of the person till its TTC to anticipate its contact location in future frame. We also provide further visualizations for Ego4D [16] dataset when the next-active-object is identified at the last observed frame irrespective of time to contact with that object. Then, we discuss the failure cases of our model in Section 8 through multiple exemplary images. In addition, we also provide **a video** giving the details of the transition of attention over past frames till the last observed frame in a video clip.
## 7 Visualization of the Attention Maps
For training purposes, the vision transformer (VIT) [9] model was implemented using the timm [40] based pytorch-image model, which does not provide attention weights for the output of transformer encoder. To visualize the spatial attention of our T-ANACTO encoder, we implemented a similar model with the same nomenclature for the layers to load the trained weights from the training of the model. The attention weights are then extracted from each block layer and then stacked together to project the learning of our encoder.
We show the effectiveness of our model T-ANACTO for anticipating next active object task (ANACTO) as spatial attention of our encoder in additional figures for both EpicKitchen-100 [4] and EGTEA+ [25] datasets in Fig. 5, 6, 7 and 9. Using this visualization, one can understand how the confidence of the model differs as it analyzes frames that are temporally distant from the beginning of an action segment for different TTC window \(\tau_{a}\). In other words, we are able to compare the diversity of attention for different TTC window, \(\tau_{a}\)_v/s_ observed \(\tau_{o}\) time of video clips for EpicKitchen [4] dataset.
In detail, extending on the visualization provided in the main paper for EpicKitchen-100 dataset, herein, we report additional results for that dataset for different TTC window \(\tau_{a}=\) 0.25 seconds, 0.5 seconds, 1.0 second in Fig. 5, 6, 7, respectively. In the mentioned figures, we report the last observed frame by the model and the attention map generated for that particular frame by our T-ANACTO encoder to predict the location of next active object. We also report the ground truth results for the active object at the starting frame of action segment. This attention map(s) is generated after considering the past frames and the last frame of observed segment \(\tau_{o}\). As mentioned in the main paper, the change in \(\tau_{a}\) for a video clip also affects the observed video segment length \(\tau_{o}\) proportionally.
To qualitatively understand the improved performance of the model as the \(\tau_{a}\) is reduced from \(1.0s\) to \(0.25\) seconds, we report the comparison in Fig. 13. It is visible that as the model is fed with frames that are closer to the beginning of an action segment, _i.e lower_\(\tau_{a}\), its confidence for the next
Figure 5: Results showing the attention map generated by our T-ANACTO encoder for last observed frame of video clip with TTC \(\mathbf{\tau_{a}=0.25}\) seconds before the beginning of the action. The red regions depicts the region of interest to identify the next active object in the starting frame of the action. The green bounding box for the starting frame of the action (row) shows the localization of the active object for that frame. It is interesting to note that for segments which there is no active object at the start of the action, our encoder is able to identify the possible area of interest for next future frames post the starting frame of the action.
Figure 6: Results showing the attention map generated by our T-ANACTO encoder for last observed frame of video clip with TTC \(\mathbf{\tau_{a}=0.5}\) seconds before the beginning of the action.
active object increases and so the performance gain can be justified.
It is also important to mention that, for most of the results, one can notice that our model is also able to identify the hand's positions and interaction hotspots for certain objects, although our model does not explicitly require the hand's position as an input. We confirm this by re
Figure 11: In the figure, we report results for some of the failure cases of our model as discussed in Section 8.1. (a) The model fails to attribute attention to objects which are light colored or easily camouflaged with the background. (b) When the scene completely changes at the beginning of action from the past observed segment.
Figure 8: Our T-ANACTO model is designed to identify and locate the hand-object interaction location for future frames. In the process, it also learns to attribute to hand position in an image frame without explicitly providing hand location. The figures provided illustrate the spatial attention of our model to hand positions besides possible next active object.
Figure 10: Results shows the spatial attention map for ego4d dataset when trained to identify next active object _wrt_ last observed frame. The green bounding box specifies the location of the object which will become active in the future. The highlighted region specifies the attention stress by the model in the last observed frame.
Figure 9: Results shows the spatial attention map for EGTEA+ dataset. The green bounding box specifies the location of the active object at the starting of an action.
Figure 8: Our T-ANACTO model is designed to identify and locate the hand-object interaction location for future frames. In the process, it also learns to attribute to hand position in an image frame without explicitly providing hand location. The figures provided illustrate the spatial attention of our model to hand positions besides possible next active object.
porting our results for the EpicKitchen-100 dataset [25] in Fig. 8. Since our method learns to identify the future hand-object interaction it focuses on locating the position of hands and respectively locate the next active object in consequent starting frame of an action segment.
In Fig. 9, we provide the attention map for the learning of our model for EGTEA+ dataset [25] when trained on _train split 1_ and tested on _test split 1_. We also provide a **video** visualizing the transition of attention on all past frames till the last observed frame for different clips.
**Ego4D [16].** In Fig. 10, we provide the attention map for the learning of our model for Ego4D dataset when trained on training set of forecasting split to predict the next active object location _wrt_ last observed frame.
## 8 Success and Failure cases
All the visualizations discussed in the previous section, is given for the cases T-ANACTO is successful to anticipate the next active object correctly. In this section, we discuss the cases which can be considered as failure for T-ANACTO.
### Epic Kitchen and EGTEA
We were able to identify two major cases for EpicKitchen and EGTEA:
1) Light colored objects.We noticed that the model is not able to confine its attention to those areas in the video clips where a light colored or transparent object is used for human-object interaction (see Fig. 11(a)). It could perhaps be a failure of the object detection model which is not able to identify items due to the transparent nature of object and camouflage with the background of frames(s). However, for most of the video clips consisting of light colored objects, our model is able to identify the hand's positioning in the frames as described in Fig. 8 which can be exploited to further extend the work in this domain.
2) Scene transition.As stated earlier, the next active object detection is a challenging task due to the consistent nature of humans to continuously interact with the environment. In the process, a person does the interaction with the objects based on the activities being performed which can lead to sudden change of scenes from one moment to another. Therefore, a current scene at the start of action segment might be drastically different _wrt_ past observed frames. In those cases, it is extremely difficult for the model to locate "interactable" objects in the scene which has not be observed by the model (see Fig. 11(b)).
### Ego4D
We provide the visualization of cases in Fig. 12
1) Sampling of frames.Since our model take input frames at a sampled interval, it is trained to output predictions after the the sampled interval time after the last observed frame. However, in Ego4D dataset the TTC for a next active object varies drastically for each clip, which is one of the main reasons our model suffers for those objects whose TTC are much higher than sampled frame rate for our input frames.
2) Tiny and clustered objects.We also notice that our model fails for tiny / transparent objects in the scene or where multiple objects are scattered in the frame.
Figure 12: In the figure, we report results for some of the failure cases of our model as discussed in Section 8.2 for Ego4D dataset. (a) The model fails to attribute for higher TTC time for a given next active object. (b) For objects which are tiny or transparent or scattered around multiple objects it is difficult to identify next active object for larger TTC.
Figure 13: Results show the diversity of spatial attention for the last observed frame preceding the beginning of an action segment for different setups of TTC window for \(\tau_{a}=0.25,0.5,1.0\) seconds. The attention corresponds to the red region in the image. The regions tend to appear more assertive as the model examines frame closer to action segments _i.e;_ as the \(\tau_{a}\) is decreased. This also attributes to a higher accuracy of the model for shorter time to contact window. |
2310.08164 | Interpreting Learned Feedback Patterns in Large Language Models | Reinforcement learning from human feedback (RLHF) is widely used to train
large language models (LLMs). However, it is unclear whether LLMs accurately
learn the underlying preferences in human feedback data. We coin the term
\textit{Learned Feedback Pattern} (LFP) for patterns in an LLM's activations
learned during RLHF that improve its performance on the fine-tuning task. We
hypothesize that LLMs with LFPs accurately aligned to the fine-tuning feedback
exhibit consistent activation patterns for outputs that would have received
similar feedback during RLHF. To test this, we train probes to estimate the
feedback signal implicit in the activations of a fine-tuned LLM. We then
compare these estimates to the true feedback, measuring how accurate the LFPs
are to the fine-tuning feedback. Our probes are trained on a condensed, sparse
and interpretable representation of LLM activations, making it easier to
correlate features of the input with our probe's predictions. We validate our
probes by comparing the neural features they correlate with positive feedback
inputs against the features GPT-4 describes and classifies as related to LFPs.
Understanding LFPs can help minimize discrepancies between LLM behavior and
training objectives, which is essential for the safety of LLMs. | Luke Marks, Amir Abdullah, Clement Neo, Rauno Arike, David Krueger, Philip Torr, Fazl Barez | 2023-10-12T09:36:03Z | http://arxiv.org/abs/2310.08164v5 | # Interpreting Reward Models in RLHF-Tuned Language Models Using Sparse Autoencoders
###### Abstract
Large language models (LLMs) aligned to human preferences via reinforcement learning from human feedback (RLHF) underpin many commercial applications of LLM technology. Despite this, the impacts of RLHF on LLM internals remain opaque. We propose a novel method for interpreting implicit reward models (IRMs) in LLMs learned through RLHF. Our approach trains pairs of autoencoders on activations from a base LLM and its RLHF-tuned variant. Through a comparison of autoencoder hidden spaces, we identify features that reflect the accuracy of the learned IRM. To illustrate our method, we fine-tune an LLM via RLHF to learn a token-utility mapping and maximize the aggregate utility of generated text. This is the first application of sparse autoencoders to interpreting IRMs. Our method provides an abstract approximation of reward integrity and holds promise for measuring alignment between specified objectives and learned model behaviors.
## 1 Introduction
Do implicit reward models (IRMs) learned by Large Language Models (LLMs) through Reinforcement Learning from Human Feedback (RLHF) diverge from their intended training objectives? How can we interprete these IRMs and measure such divergences?
LLMs are commonly fine-tuned with RLHF to align outputs with a reward measure. Despite the widespread adoption of RLHF, it remains opaque how well the student model internalizes the explicit reward function, making failures in the IRM difficult to detect. Contributing to this difficulty is superposition in the features used in LLMs (Elhage et al., 2022), as well as full model interpretability research being at an early stage.
As LLMs steered via RLHF scale in capability and deployment, the implications of failures in the IRM amplify. Misspecified rewards can cause'specification gaming' (Krakovna et al. (2020)), whereby a model engages in an undesired behavior while still achieving high reward. Through behaviors like sycophancy, this phenomenon can be observed to be emerging in LLMs already (Wei et al. (2023)). Other risks include manipulation of the user's preferences (Adomavicius et al. (2013)), reinforcement of the biases present in human labellers (Santurkar et al. (2023)) and potentially catastrophic outcomes in situations where models approach or generally exceed human capabilities (Christiano (2019)). Detecting such failures of RLHF in the wild is challenging, as models may be incentivized to appear more aligned than they are (Hubinger et al. (2019)) in an effort to preserve their reward model(s) (Omohundro (2008)).
In this work, we present a novel technique to interpret IRMs learned through RLHF. While prior work has applied sparse coding to derive more interpretable features from LLMs (Sharkey et al.
(2022); Cunningham et al. (2023)), we extend those methods to IRMs, proposing their use for IRM interpretation and measurement. Our major contribution is applying sparse coding towards (a) distinguishing features that specifically emerge from the RLHF tuning process and (b) quantifying the accuracy of the learned IRM in matching the preferences of the overseer during fine-tuning. To the best of our knowledge, our paper is the first to apply sparse coding to the study of reward models.
Our procedure can be broken down into the following steps, also illustrated in Figure 1.
1. **Find Highly Divergent Layers:** After RLHF, compute the parameter divergence between the base model \(M_{\text{base}}\) and the fine-tuned model \(M_{\text{RLHF}}\), and sort layers in descending order by divergence. Given that if an IRM is learned it must be encoded by the differences in parameters between \(M_{\text{base}}\) and \(M_{\text{RLHF}}\), we avoid training useless autoencoders (for the task of IRM interpretation) by discarding layers unlikely to contain components of the IRM.
2. **Train Large and Small Autoencoders:** Train an autoencoder with a sparsity constraint on activations from \(M_{\text{RLHF}}\) over a an unseen corpus for the top-\(n\) layers with the highest parameter divergence to construct a hidden space feature representation, and then another autoencoder with a smaller dictionary size. Do the same for the corresponding layers in \(M_{\text{base}}\).
3. **Identify Shared Features:** Compute overlapping features across the larger and smaller learned dictionaries for both autoencoder pairs, to identify ground truth features in \(M_{\text{base}}\) and \(M_{\text{RLHF}}\).
4. **Compare Features and Quantify IRM Efficacy:** Compare the differences in features identified in \(M_{\text{base}}\) and \(M_{\text{RLHF}}\), such that an interpretable notion of the effects of RLHF on \(M_{\text{base}}\) is attained through the relative feature differences. We later use these features in a quantitative measure of the efficacy of the internal reward model, as well as in qualitative analysis.
Figure 1: First, we sample activations from layers having the highest parameter divergence between \(M_{\text{base}}\) and \(M_{\text{RLHF}}\). Then, two autoencoders with a sparsity constraint are trained on those activations, each with a different dictionary size. The overlap is computed between the two dictionaries to find high-confidence features that serve as a proxy for ground truth. We analyze activations on these features, enabling both manual inspection of features as well as computing an aggregate score for the implicit reward model.
Background
Mechanistic InterpretabilityUnderstanding the inner workings of neural networks such as transformers is essential for fostering transparency and trust. In recent years, mathematical frameworks have been developed to represent and analyze the computations within these models (Ehlage et al., 2022). Foote et al. (2023); Bills et al. (2023) offer another approach whereby a larger model predicts what human-interpretable concept a neuron might represent in a smaller model. For a different perspective, Black et al. (2022) construct the 'polytope lens', which proposes polytopes as the fundamental interpretable units of a neural network instead of individual neurons or linear combinations of them. These frameworks propose scalable methods for describing the internal functioning of transformers, enabling transparency in the model's functioning and the verification of properties useful for safety, like accurate reward modeling.
Our work interprets the internals of transformer-based LLMs with a vocabulary size \(V\). The models take an input sequence \((\mathtt{x}_{1},\ldots,\mathtt{x}_{p})\) where each \(\mathtt{x}_{i}\in\{1,\ldots,V\}\). Tokens are mapped to \(\mathtt{d}_{e}\)-dimensional embeddings by selecting the \(\mathtt{x}_{i}\)-th column of an embeddings matrix \(\text{Embd}\in\mathbb{R}^{d_{e}\times V}\).
External Reward Models in RLHFRLHF has emerged as the dominant paradigm for fine-tuning large language models to represent human preferences. It is performant even if the desired behavior is complex or not easily quantifiable, making it significantly more effective than hand-crafted reward functions.
In common RLHF settings, a dataset of human comparisons between outputs of the base model is first collected, providing feedback on which outputs are preferable (Christiano et al. (2023); Ziegler et al. (2020)). In the Reinforcement Learning through AI Feedback (RLAIF) variation of the fine-tuning scheme, this dataset is AI-generated, removing the need for human participation in the fine-tuning process (Bai et al. (2022)).
This dataset is used to train a reward model to predict human preference scores, replacing traditional reward functions. In the context of language models, this reward model is often itself a separate instance of an LLM. The reward model is used to fine-tune the policy of the base model. Techniques like proximal policy optimization (Schulman et al. (2017)) are commonly employed to optimize the policy model using scores under the reward model as the objective. By the end of a successful fine-tuning process, the policy model has internalized an implicit model of the external preferences (human feedback, in the case of RLHF).
In this paper, we analyze the implicit reward model (IRM) internalized by the policy model. To differentiate clearly, we will refer to the reward models used to oversee the RLHF process as external reward models (ERMs).
Feature Superposition in Deep Learning ModelsThere is a significant body of evidence indicating that deep neural networks learn human-interpretable features of the input (Bills et al., 2023; Karpathy et al., 2015; Olah et al., 2017; Mikolov et al., 2013). By features, we mean vectors in a network's activation space that correspond to human-understandable concepts, such as apostrophes or arithmetic. Often, deep neural networks store the features in a distributed way; as a result, individual neurons do not correspond to a single semantic feature. This phenomenon has been coined "superposition" (Ehlage et al., 2022). It allows a model to represent more features than it has dimensions in its activation space, especially when those features are sparsely present in training data. Superposition poses a major obstacle to neural network interpretability, and this is expected to extend to the interpretation of reward models learned through RLHF in LLMs.
Sparse Autoencoders for Activation Vector ReconstructionAutoencoders minimize the reconstruction error \(\epsilon\) for an input vector \(x\) subject to projection into a latent space:
\[\epsilon=\|x-\text{Dec}(\text{Enc}(x))\|^{2} \tag{1}\]
Enc represents the encoding function, and Dec the decoding function. For activation vectors, sparse autoencoders constrain the activations in the hidden layer \(h\) to a limited number \(k\) of active neurons, and we stipulate the encoding function Enc to be \(\text{Enc}_{k}\) in this case.
As a result of the sparsity constraint on the autoencoder, each vector in Dec encodes a handful of neurons from the activation vector. A compressed representation capturing key activation patterns emerges, identifying 'ground truth features' in the model that activations were sampled from. Early results from Sharkey et al. (2022) and Cunningham et al. (2023) suggest sparse autoencoders can recover ground truth features, even when those features are represented in a superposed manner.
Autoencoder ArchitectureOur autoencoder architecture consists of an encoder, composed of a linear layer preceding a ReLU activation function, and a linear decoder. Sparsity in the decoder is induced through \(L_{1}\) regularization on the weights, forcing the network to learn a sparser representation.
The decoder and encoder weights are tied. Prior to being encoded, the weights are normalized to have unit norm. The overall loss function is calculated as the sum of the mean squared error between the reconstructed output from the decoder and the true data (for both training the decoder and measuring performance) and an \(L_{1}\) loss term on the decoder weight matrix. We scale the \(L_{1}\) loss by an \(L_{1}\) coefficient, to tune the importance given to sparsity. This architecture is based on the experimental results of Sharkey et al. (2022).
Deducing Features From Dictionary Similarities Between Autoencoders of Different Sizes Sharkey et al. (2022) identify features in toy models exhibiting superposition by training two sparse autoencoders of different sizes, and taking a similarity measurement between the decoder weights of the two autoencoders. They show that features with high similarity between the two learned dictionaries (the decoder weights matrix) correspond to ground truth features exhibited in the transformer. These results are corroborated by Cunningham et al. (2023) where the same technique is applied to language models, showing best-in-class performance.
For their similarity measure between two learned dictionaries, Sharkey et al. (2022) define 'Mean Max Cosine Similarity' (MMCS). Let \(D\) and \(D^{\prime}\) be two dictionaries, and \(d\) and \(d^{\prime}\) be elements from each dictionary. Then we have:
\[\text{MMCS}(D,D^{\prime})=\frac{1}{|D|}\sum_{d\in D}\max_{d^{\prime}\in D^{ \prime}}\text{CosineSim}(d,d^{\prime}). \tag{2}\]
Intuitively, MMCS is just the average nearest neighbor similarity for features to \(D\) from \(D^{\prime}\). In the above, let \(D_{g}\) be the top \(k\) features of \(D\) that realize the highest contribution to the MMCS. In the case of LLMs, the ground truth features are unknown, and so the set \(D_{g}\) is used as a proxy for a true representation of the ground truth features.
Automating Neuron Interpretability Using Large Language ModelsIdentifying plausible descriptions of what a given neuron represents is laborious for a human. Thus, approaches like Bills et al. (2023); Foote et al. (2023) automate this process. Bills et al. (2023) provide GPT-4 with a set of normalized (to a range of 0 and 10, where 10 indicates maximal activation) and discretized activations for a set of tokens passed to the model as a prompt. GPT-4 then predicts an explanation for what the neuron represents based on those activations, and then simulates discretized activations for tokens as if that description were true.
## 3 Related Work
To our knowledge, no general methods have been proposed for finding human-interpretable representations of IRMs learned via RLHF and RLAIF. Nevertheless, there have been works in similar domains.
Jenner & Gleave (2021) provide a framework for preprocessing reward functions learned by RL agents into simpler but equivalent reward functions, which makes visualizations of these functions more human-understandable. Michaud et al. (2020) explain the reward functions learned by Gridworld and Atari agents using saliency maps and counterfactual examples, and find that learned reward functions tend to implement surprising algorithms relying on contingent aspects of the environment. They also note that reward interpretability requires a different set of tools from policy interpretability. We share with these works the desire to find new general tools for reward model interpretability, but focus on reward models learned through RLHF and RLAIF rather than standard RL training.
Furthermore, Gleave et al. (2021) and Wolf et al. (2023) present methods for comparing and evaluating learned reward functions in the standard RL setting without requiring these functions to be human-interpretable. In comparison, we aim for evaluation of IRMs in the RLHF setting through interpretability.
There is also existing literature on circumventing superposition when interpreting deep learning models. Olah et al. (2020) introduce the problem of superposition and its effect on interpretability. Elhage et al. (2022) present a toy model where the superposed features can be fully understood and outline possible directions for tackling the problem in real-world models. One of the proposed approaches, sparse dictionary learning (Olshausen and Field (1997), Lee et al. (2006)) to find directions in the activation space that correspond to features, also forms the basis of our work.
Sharkey et al. (2022) present a report of preliminary attempts to apply sparse dictionary learning on deep neural networks. Cunningham et al. (2023) build upon the work of Sharkey et al. (2022), finding that the dictionary features learned by sparse autoencoders are more amenable to automated interpretability techniques introduced by Foote et al. (2023); Bills et al. (2023). They also find that the dictionary features are more precise and monosemantic compared to features brought out of superposition by other methods, such as principal component analysis (Wold et al. (1987)) and independent component analysis (Lee (1998)). Their experiments are conducted on Pythia-70M language models, but in comparison to our work, do not assess whether this method is applicable to learned reward models.
Other works exploring related techniques include Yun et al. (2021), who apply sparse dictionary learning to visualize the residual streams of transformer models, and Gurnee et al. (2023), who find human-interpretable features in large language models using sparse linear probes. Finally, an alternative approach for circumventing superposition has been explored by Jermyn et al. (2022), who engineer models to have more monosemantic neurons by intervening in the training process and changing the local minimum the model's weights converge to.
## 4 Methodology
### Interpreting Learned Reward Models in LLMs.
Our primary method for interpreting IRMs learned through RLHF and RLAIF consists of first isolating LLM layers relevant to reward modeling, then using sparse autoencoders to reconstruct activation vectors from these layers, and finally using GPT-4 to reconstruct feature explanations for the activation vectors. This can be separated into the following components:
* Identify the set of layers \(L\) in an RLHF-tuned LLM \(M_{\text{RLHF}}\) likely related to the learned IRM. We do so by sorting layers in order of increasing magnitude of \(\Delta(L_{M_{\text{RLHF}}},L_{M_{\text{base}}})\), where \(\Delta\) is the sum of Euclidean distances between each corresponding weight and bias tensor in the layer between \(M_{\text{RLHF}}\) and the corresponding base model \(M_{\text{base}}\). In the following bullets, we simplify notation by describing our feature extraction for a single fixed layer \(\ell\) of \(L\).
* For both \(M_{\text{RLHF}}\) and \(M_{\text{base}}\), train two autoencoders, \(\mathcal{AE}_{1}\) and \(\mathcal{AE}_{2}\), of differing hidden sizes, and with the same sparsity constraint. These autoencoders reconstruct activation vectors (obtained through prompting with the test split of the relevant dataset) on \(\ell\) for their respective model (Sharkey et al. (2022); Cunningham et al. (2023)). For _each_ model, we extract a pair of lower-dimensional feature dictionaries, \(D_{1}\) and \(D_{2}\), from the corresponding autoencoder. Each feature is a column of the decoder's weight matrix.
* Because autoencoders produce varying dictionaries over training runs and hyperparameters, we keep only the features that occur in both \(D_{1}\) and \(D_{2}\). We compute the MMCS between \(D_{1}\) and \(D_{2}\) in order to identify repeating features across the two dictionaries, indicating that shared features truly occur in the model.
* The top-\(k\) most similar features between \(D_{1}\) and \(D_{2}\) in terms of MMCS are explained using a variation of the method by Bills et al. (2023) designed to directly describe the features in a dictionary. The method feeds the encoder of \(\mathcal{AE}_{n}\) activations from the model on which it was trained, and then GPT-4 predicts a description of that feature from the feature weights specified in the encoder output.
* By comparing these explanations in \(M_{\text{RLHF}}\) and \(M_{\text{base}}\), we show how these descriptions can be correlated with the efficacy of the IRM in encapsulating the explicit reward model.
* This method is applied to a training regime in which \(M_{\text{RLHF}}\) is tasked with learning an explicit table of words and maximizing their presence within PPO training. This training environment allows us to quantitatively assess the efficacy of \(M_{\text{RLHF}}\)'s reward model.
### Overseer-Guided Fine-Tuning Using Utility Tables.
As a case study, we construct a fine-tuning environment simpler than conventional RLHF. An overseer, denoted as \(O\), is imbued with a "utility table" \(U\): a mapping of words to respective utility values. The overseer converts a tokenized generation to words, and then computes the utility of the generation and prefix together.
The aim is to modulate the student model, \(M_{\text{RLHF}}\), to maximize the utility of its output text. Utility values are assigned to tokens in \(M_{\text{RLHF}}\)'s vocabulary, and we use Proximal Policy Optimization (PPO) for reward training. See Appendix C for more details on the general PPO method, and see Appendix D for more details on the Utility tables task.
We flesh out further details of our setup in Section 5 and lightly explore alternate options in Appendix H.
## 5 Experiments
We detail here each stage of our experimental pipeline, from training LLMs via RLHF, to extracting dictionary features from autoencoders, to finally interpreting the IRMs using these dictionary features.
### Applying RLHF to base models.
We select a controlled sentiment generation task using data from the IMDb reviews dataset due to the simplicity of the training environment, reducing noise in our analysis. Models generate completions to review prefixes, and positive sentiment prefix and completion pairs are assigned higher rewards. Two different external sentiment reward models are used for fine-tuning via RLHF.
The first is a DistilBERT (Sanh et al., 2020) sentiment classifier trained on the IMDb reviews dataset (von Werra, 2023). Reward is assigned to the logit of the positive sentiment label. The second is the Utility table reward model described in Section 4.2, where the utility values are taken from the VADER sentiment lexicon (Hutto and Gilbert (2014)). The sentiment values were initially labelled by a group of human annotators, who assigned ratings from \(-4\) (extremely negative) to \(+4\) (positive), with an average taken over ten annotations per word. We assigned reward to a sentence as a sum of utilities, scaled down by a factor of 5 and clamped to an interval of \([-10,10]\). Scaling and clamping were implemented to avoid collapse in PPO training, which was observed if reward magnitudes were left unbounded.
\[\mathrm{Reward}(s)=\mathrm{clip}\left(\frac{1}{5}\sum_{\text{token}\in s}U( \text{token}),-10,+10\right) \tag{3}\]
Our experiments are run with various models from the Pythia suite (70M, 160M and 410M) (Biderman et al., 2023). These models are fine-tuned with equivalent hyperparameters via Proximal Policy Optimization (PPO), in a setup akin to Ouyang et al. (2022). For fine-tuning, we used the Transformers Reinforcement Learning (TRL) framework (von Werra et al., 2023). The major hyperparameters are listed in Table 1, with the rest derived from the default values provided by the TRL framework. See Appendix C for an overview of the RLHF training pipeline.
### Training autoencoders for dictionary extraction
Once we obtain the trained policy model, we compute the parameter divergence between \(M_{\text{RLHF}}\) and \(M_{\text{base}}\) layer by layer under the \(\ell_{2}\) norm. We sort all layers in descending order from most to least parameter divergence, and fix the five highest-divergence layers for our dictionary extraction. These turned out to mostly be the deeper layers of the models; see Appendix G for details.
For each model from \(M_{\text{base}}\) and \(M_{\text{RLHF}}\), we train a pair of autoencoders on the activations of each high-divergence layer using two different dictionary sizes. The dictionary sizes in Table 2 were used for the autoencoders.
Autoencoders were trained for \(3\) epochs with an \(L_{1}\) regularization coefficient of \(0.001\), a learning rate of \(1e-3\) and a batch size of \(32\) on activations from inputs for the test split of the IMDb reviews dataset. We found that for GPT-Neo-125, an \(L_{1}\) regularization coefficient of \(0.0015\) gave a better tradeoff of reconstruction and sparsity. The decoder and encoder weights were tied, and the decoder weights are simply a transpose of those for the encoder. 'These hyperparameters were chosen based on empirical testing by Sharkey et al. (2022), Cunningham et al. (2023) and ourselves in selecting for optimal sparsity and reconstruction loss, where we optimized for both the \(\ell_{1}\) and \(\ell_{0}\) sparsity of the dictionary elements.
Next, we find and retain the top \(k=10\) features that maximize the MMCS objective (given earlier in Equation 2) between \(D_{1}\) and \(D_{2}\) of each such pair of autoencoders.
For more discussion on the methodology used to train the autoencoders, see Appendix H.
### Measuring fidelity of sparse coding features to the specified reward function.
In order to derive human-interpretable explanations, we employ GPT-4 to explain what a dictionary feature represents based on normalized and discretized activations for that feature (Bills et al. (2023)) over a series of tokens. The top \(k=10\) highest MMCS features were sampled for both \(M_{\text{base}}\) and \(M_{\text{RLHF}}\) to locate feature indices to explain with GPT-4. Through these explanations of likely ground truth dictionary features, we attempt to understand the effects of RLHF on \(M_{\text{base}}\), using examples to substantiate analysis in Section 6. See Appendix E for a complete list of feature descriptions for layer 2 in Pythia-70m.
Reconstructing external Utility table reward modelThrough the sparse coding feature extraction from \(M_{\text{RLHF}}\) and subsequent GPT-4 interpretation, we would expect to rederive tokens present in our originally specified utility table \(U\) if RLHF is successful influencing the model to learn \(U\). For example, if \(U\) specifies positive utility tokens (e.g., 'good', 'happy', etc.) and these tokens are more prevalent in the feature descriptions for \(M_{\text{RLHF}}\) than in \(M_{\text{base}}\), it would indicate \(M_{\text{RLHF}}\) having learned this skew.
To quantify the correspondence of the dictionary features to the specified external reward model, we also measure the summed absolute utility of the top-\(k\) most similar feature descriptions for both the \(M_{\text{base}}\) and \(M_{\text{RLHF}}\) dictionaries. We can then use GPT-4's description of these features and the summed absolute utility of these text descriptions to answer: How well has \(M_{\text{RLHF}}\) learned \(U\)?
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Batch Size** & **Mini Batch Size** & **Init KL Coef** & **Max Grad Norm** & **Learning Rate** \\ \hline
64 & 16 & 0.5 & 1 & \(1\times 10^{-6}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyperparameters used to train models for positive sentiment completions of prefixes from the IMDb dataset.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** &
\begin{tabular}{c} **Dictionary Size** \\ **Large** \\ \end{tabular} \\ \hline Pythia-70m & 1024 & 512 \\ Pythia-160m & 1536 & 768 \\ Pythia-410m & 2048 & 1024 \\ GPT-Neo-125m & 768 & 1536 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dictionary sizes for autoencoder comparison via MMCS
Results and Discussion
In this section, we present a qualitative analysis of the feature explanations generated via GPT-4 for the implicit reward models under both of our tasks. We also give a quantitative measure of the utility of the dictionary features for our Utility table reward model.
### Movie Opinion Features in Pythia-70m Fine-Tuned on Positive Movie Review Completions
Features identified as detecting opinions concerning movies in itself serves as a great example of both the utility and shortcomings of this method. Being able to detect the occurrence of an opinion regarding a movie provides useful insights about the reward model, given that the training objective was generating positive sentiment completions. However, the descriptions of such features are very high-level and overrepresented among the feature descriptions. In the fine-tuned Pythia-70m instance, of the 50 highest similarity features (10 per layer), there are 21 feature descriptions that mention detecting opinions or reviews in the context of movies. Of the top-\(k=10\) features in layer 4 of the fine-tuned model, 8 are for this purpose. Contrast this to the base model, with 13 total feature descriptions focused on sentiment in the context of movie reviews. Full feature description tables are available in Appendix E.
This data alone does not allow for a clear picture of the reward model to be constructed. Although it is clear that a greater portion of the features represent concepts related to the training objective in this limited sample, it cannot be shown that the model has properly internalized the reward model on which it was trained. Additionally, it is highly improbable for the base model to inherently have \(13\) of the \(50\) sampled features applied to identifying opinions on movies, which shows that the nature of the input data used to sample activations can skew GPT-4's description of the feature. If a feature consistently activates on negative opinions, but the entire sample set is movie reviews, it might be unclear to GPT-4 whether the feature is activating in response to negative sentiment, or to negative sentiment in movie reviews specifically. This underscores the need for future work to use a diverse sample of inputs when sampling activations for use in this method. The next case study tries to cover a quantitative metric for reward modeling efficacy, but also falls short of showing a crisp structure of elements comprising the reward model.
### Quantifying Reward Modeling Efficacy For Models Fine-tuned on High Utility Movie Review Completions
Not all dictionary features will be relevant to the utility table. Using the example of 'Sentences concerning word processing' as a feature description, it is not obvious how the utility of this could be computed under any \(U\). Sentiment lexicons like VADER lend themselves well to this task. Neutral entries are labeled as having a sentiment score of 0, and words not included in the lexicon are treated as though they were neutral entries. A quantitative measure is attempted in Table 3, whereby GPT-4's predicted explanations are computed against \(U\) for an approximation of \(M_{\text{RLHF}}\)'s ability to learn \(U\) and its maximization. This metric is shown alongside the aggregate utility measured over \(100\) completions of a \(30\) token prefix sampled from the test split to validate it as correlating with actual performance against the reward model. See Table 4.
The descriptions of the top-\(k\) represented features score considerably more highly in \(U\), suggesting a superior IRM. Although _indicative_ of what features might compose the reward model of \(M_{\text{RLHF}}\), the accuracy of this method is limited by two primary factors: the capability of the sparse autoencoders in reconstructing accurate activation vectors, and GPT-4's ability to accurately devise descriptions
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** & \(M_{\text{base}}\)** **Score** & \(M_{\text{RLHF}}\)** **Score** \\ \hline Pythia-70m & 61.2 & 94.3 \\ Pythia-160m & 59.2 & 80.2 \\ Pythia-410m & 59.4 & 89.4 \\ GPT-Neo-125m & 101.2 & 111.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean of the aggregate absolute utility of the top-\(k=30\) learned features in the base and fine-tuned model over three samples per model
for neurons. Additionally, aggregating the absolute utility of feature descriptions is simply a proxy for reward modeling efficacy, and is not guaranteed to map to equivalent performance against \(U\) empirically.
## 7 Conclusion
In closing, features contained in the dictionaries of autoencoders specific to our fine-tuned model, \(M_{\text{RLHF}}\), are explained using GPT-4. Explanations that imply properties of the reward model are used as case studies to demonstrate their usefulness for studying the reward models learned through RLHF. Additionally, we quantify the efficacy of the reward model learned by \(M_{\text{RLHF}}\) using GPT-4, which future work could leverage for reward modeling benchmarks or for training LLMs that learn more accurate reward models.
However, this method has several limitations as well. In LLMs larger than those used in these experiments (the largest of which was Pythia-410m), it may be required to explain many hundreds or thousands of features in order to effectively study their reward models. Both training autoencoders on activations of this scale and having GPT-4 explain the reconstructed activations becomes very computationally intensive. Furthermore, although features related to reward modeling can be extracted, the relationships of those features in producing a reward model remain unclear. Future work could focus on establishing these relationships for a more formal and broad interpretation of learned reward models in LLMs.
|
2303.10613 | SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude
Operations | Reverse engineering CAD models from raw geometry is a classic but strenuous
research problem. Previous learning-based methods rely heavily on labels due to
the supervised design patterns or reconstruct CAD shapes that are not easily
editable. In this work, we introduce SECAD-Net, an end-to-end neural network
aimed at reconstructing compact and easy-to-edit CAD models in a
self-supervised manner. Drawing inspiration from the modeling language that is
most commonly used in modern CAD software, we propose to learn 2D sketches and
3D extrusion parameters from raw shapes, from which a set of extrusion
cylinders can be generated by extruding each sketch from a 2D plane into a 3D
body. By incorporating the Boolean operation (i.e., union), these cylinders can
be combined to closely approximate the target geometry. We advocate the use of
implicit fields for sketch representation, which allows for creating CAD
variations by interpolating latent codes in the sketch latent space. Extensive
experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness
of our method, and show superiority over state-of-the-art alternatives
including the closely related method for supervised CAD reconstruction. We
further apply our approach to CAD editing and single-view CAD reconstruction.
The code is released at https://github.com/BunnySoCrazy/SECAD-Net. | Pu Li, Jianwei Guo, Xiaopeng Zhang, Dong-ming Yan | 2023-03-19T09:26:03Z | http://arxiv.org/abs/2303.10613v1 | # SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations
###### Abstract
Reverse engineering CAD models from raw geometry is a classic but strenuous research problem. Previous learning-based methods rely heavily on labels due to the supervised design patterns or reconstruct CAD shapes that are not easily editable. In this work, we introduce SECAD-Net, _an end-to-end neural network aimed at reconstructing compact and easy-to-edit CAD models in a self-supervised manner. Drawing inspiration from the modeling language that is most commonly used in modern CAD software, we propose to learn 2D sketches and 3D extrusion parameters from raw shapes, from which a set of extrusion cylinders can be generated by extruding each sketch from a 2D plane into a 3D body. By incorporating the Boolean operation (i.e., union), these cylinders can be combined to closely approximate the target geometry. We advocate the use of implicit fields for sketch representation, which allows for creating CAD variations by interpolating latent codes in the sketch latent space. Extensive experiments on both ABC and Fusion 360 datasets demonstrate the effectiveness of our method, and show superiority over state-of-the-art alternatives including the closely related method for supervised CAD reconstruction. We further apply our approach to CAD editing and single-view CAD reconstruction. Code will be released at [https://github.com/BunnySoCrazy/SECAD-Net](https://github.com/BunnySoCrazy/SECAD-Net).
## 1 Introduction
CAD reconstruction is one of the most sought-after geometric modeling technologies, which plays a substantial role in reverse engineering in case of the original design document is missing or the CAD model of a real object is not available. It empowers users to reproduce CAD models from other representations and supports the designer to create new variations to facilitate various engineering and manufacturing applications.
The advance in 3D scanning technologies has promoted the paradigm shift from time-consuming and laborious manual dimensions to automatic CAD reconstruction. A typical line of works [3, 6, 35, 47] first reconstructs a polygon mesh from the scanned point cloud, then followed by mesh segmentation and primitive extraction to obtain a boundary representation (B-rep). Finally, a CAD shape parser is applied to convert the B-rep into a sequence of modeling operations. Recently, inspired by the substantial success of point set learning [1, 32, 49] and deep 3D representations [28, 30, 9], a number of methods have exploited neural
networks to improve the above pipeline, _e.g_., detecting and fitting primitives to raw point clouds directly [25, 27, 40]. A few works (_e.g_., CSG-Net [39], UCSG-Net [19], and CSG-Stump [33]) further parse point cloud inputs into a constructive solid geometry (CSG) tree by predicting a set of primitives that are then combined with Boolean operations. Although achieving encouraging compact representation, they only output a set of simple primitives with limited types (_e.g_., planes, cylinders, spheres), which restricts their representation capability for reconstructing complex and more general 3D shapes. CAPRI-Net [59] introduces quadric surface primitives and the difference operation based on BSP-Net [8] to produce complicated convex and concave shapes via a CSG tree. However, controlling the implicit equation and parameters of quadric primitives is difficult for designers to edit the reconstructed models. Thus, the editability of those methods is quite limited.
In this paper, we develop a novel and versatile deep neural framework, named SECAD-Net, to reconstruct high-quality and editable CAD models. Our approach is inspired by the observation that a CAD model is usually designed as a command sequence of operations [7, 38, 50, 51, 57], _i.e_., a set of planar 2D sketches are first drawn then extruded into 3D solid shapes for Boolean operations to create the final model. At the heart of our approach is to learn the sketch and extrude modeling operations, rather than CSG with parametric primitives. To determine the position and axis of each sketch plane, SECAD-Net first learns multiple extrusion boxes to decompose the entire shape into multiple local regions. Afterward, for the local profile in each box, we utilize a fully connected network to learn the implicit representation of the sketch. An extrusion operator is then designed to calculate the implicit expression of the cylinders according to the predicted sketch and extrusion parameters. We finally apply a union operation to assemble all extrusion cylinders into the final CAD model.
Benefiting from our representation, our approach is flexible and efficient to construct a wide range of 3D shapes. As the predictions of our method are fully interpretable, it allows users to express their ideas to create variations or improve the design by operating on 2D sketches or 3D cylinders intuitively. To summarize, our work makes the following contributions:
* We present a novel deep neural network for reverse engineering CAD models with self-supervision, leading to faithful reconstructions that closely approximate the target geometry.
* SECAD-Net is capable of learning implicit sketches and differentiable extrusions from raw 3D shapes without the guidance of ground truth sketch labels.
* Extensive experiments demonstrate the superiority of SECAD-Net through comprehensive comparisons. We also showcase its immediate applications to CAD interpolation, editing, and single-view reconstruction.
## 2 Related work
**Neural implicit representation.** 3D shapes can be represented either explicitly (_e.g_., point sets, voxels, meshes) or implicitly (_e.g_., signed-distance functions, indicator functions), each of them comes with its own advantages and drawbacks. Recently, there is an explosion of neural implicit representations [9, 28, 30] that allow for generating detail-rich 3D shapes by predicting the underlying signed distance fields. Thanks to the ability to learn priors over shapes, many deep implicit works have been proposed to solve various 3D tasks, such as shape representation and completion [2, 10, 42], image-based 3D reconstruction [58, 52, 45], shape abstraction [16, 44] and novel view synthesis [29, 12]. Theoretically, any of the above shape representations can be used to represent sketches. However, primitive-based methods usually suppress the ability cap of shape representation. In this work, we choose to fit an implicit sketch representation using a neural network, and show its superiority over other representations (_e.g_., BSP [8]) in the ablation study, see Sec. 5.4.
**Reverse engineering CAD reconstruction.** Over the past decades, reverse engineering has been extensively studied; it aims at converting measured data (a surface mesh or a point cloud) into solid 3D models that can be further edited and manufactured by industries. Traditional approaches addressing this problem consist of the following tasks: (1) segmentation of the point clouds/meshes [5, 41, 60], (2) fitting of parametric primitives to segmented regions [11, 36, 55], (3) finishing operations for CAD modeling [4, 24]. Important drawbacks of these conventional methods are the time-consuming process and the requirement of a skilled operator to guide the reconstruction [6].
With the release of several large-scale CAD datasets (_e.g_., ABC [21], Fusion 360 [50]), SketchGraphs [37]), numerous approaches have explored deep learning to address primitive segmentation/detection [25, 56], parametric curve or surface inference from point clouds [17, 27, 31, 40, 48] or B-rep models [18, 23]. However, by only outputting individual curves or surfaces, these methods lack the CAD modeling operations that are needed to build solid models. Focusing on CAD generation rather than reconstruction task as ours, some approaches propose deep generative models that predict sequences of CAD modeling operations to produce CAD designs [50, 51, 53, 26, 54]. Aiming at CAD reconstruction involving inverse CSG modeling [15], CSGNet [39] first develops a neural model that parses a shape into a sequence of CSG operations. More recent works follow the line of CSG parsing by advancing the inference without any supervision [19], or improving representation capability with a three-layer reformulation of the classic
CSG-tree [33], or handling richer geometric and topological variations by introducing quadric surface primitives [59]. While achieving high-quality reconstruction, CSG tends to combine a large number of shape primitives that are not as flexible as the extrusions of 2D sketches and are also not easily user edited to control the final geometry.
Motivated by modern design tools, supervised methods are proposed [22, 46] utilizing the sketch-extrude procedural models and learning 2D sketches that can be extruded to 3D shapes. In contrast to their reliance on 2D labels, SECAD-Net is trained in a self-supervised manner. Most closely related to our work is ExtrudeNet [34]. SECAD-Net distinguishes itself from ExtrudeNet in several significant aspects: i) Following the traditional reconstruction process, ExtrudeNet first predicts the parameters of Bezier curves and then converts them into SDFs. In contrast, we jumped out of this paradigm and directly used neural networks to predict the 2D implicit fields of the profiles. ii) ExtrudeNet adopts closed Bezier curves to avoid self-intersection in sketches. This makes ExtrudeNet can only predict star-shaped profiles, which limits the expressive power of their CAD shapes. Our method does not impose any restrictions on the shape of the profile, thus having greater flexibility in shape expression. iii) To pursue the reconstruction effect, ExtrudeNet relies on a larger number of primitives, while our method is able to predict more compact CAD shapes.
## 3 Problem Statement and Overview
In this section, we present an overview of the proposed approach. To precisely explain our techniques, we first provide the definition of several related terminologies (Fig. 3).
### Preliminaries
**Definition 1** (Loop, Profile and Sketch): _In CAD terminology, a sketch is represented by a collection of geometric primitives. By referring to a closed curve as a loop and an enclosed region composed of one or multiple inner/outer loops as a profile, we define a sketch as the collection of one profile and its loops._
**Definition 2** (Sketch plane and Extrusion box): _A sketch plane is a finite plane with width \(w\) and length \(l\), containing one or more sketches with the same extrusion height \(h\). Then we define an extrusion box as a cuboid with the sketch plane as the base and \(2h\) as the height._
**Definition 3** (Cylinder primitive and Cylinder): _In this work, a cylinder refers to the shape obtained by extruding a sketch, and a cylinder primitive is obtained by performing an extrude operation on a closed area formed by a loop. A cylinder may contain one cylinder primitive or be obtained from several cylinder primitives through the Difference operation used in CSG modeling._
### Overview
We formulate the problem of CAD reconstruction as _sketch_ and _extrude_ inference: taking an input 3D shape,
Figure 3: Definitions of CAD terminologies used in this paper. Note that the axis of the sketch plane in the figure is the same as the z-axis in the extrusion box.
Figure 2: **Network architecture for SECAD-Net**: The embedding \(\mathbf{z}\) encoded from the voxel input is first fed to the extrusion box head to predict extrusion boxes. It is also sent to the sketch head network to calculate the sketch SDF \(\hat{\mathcal{S}}^{i}_{\text{sk}}\) after concatenating with the linear transformed sampling point. \(\hat{\mathcal{S}}^{i}_{\text{cyl}}\) stands for the SDF of the cylinder, which is acquired by extruding \(\hat{\mathcal{S}}^{i}_{\text{sk}}\) with height \(h_{i}\). Then we convert \(\hat{\mathcal{S}}^{i}_{\text{cyl}}\) to occupancy of cylinder \(\hat{\mathcal{O}}_{i}\) and finally obtain the complete shape by union all the occupancies.
SECAD-Net aims to reconstruct the CAD model by predicting a set of geometric proxies that are decomposed to sketch-extrude operations. The overall pipeline of SECAD-Net is visualized in Fig. 2. Given a 3D voxel model, we first map it into a latent feature embedding \(\mathbf{z}\) by using an encoder based on a 3D convolutional network. An extrusion box head network is then applied to predict the parameters of the sketch planes from \(\mathbf{z}\). We employ \(N\) sketch head network to independently learn \(N\) 2D signed distance fields (SDFs) as the implicit representation of a sketch. Next, we design a differentiable extrusion operator to calculate the SDF of the 3D cylinder primitives corresponding to the sketches. Finally, an occupancy transformation operation and a union operation transform the multiple SDFs into the full 3D reconstructed shape as the output of the network.
## 4 Method
### Sketch-Extrude Inferring
We apply a standard 3D CNN encoder to extract a shape code \(\mathbf{z}\) with size 256 from the input voxel. The code is then passed to the proposed SECAD-Net to output the sketch and extrusion cylinder parameters. Below we introduce the main modules of SECAD-Net following the order of data transmission during prediction.
**Extrusion box prediction.** We first apply a fully connected layer to predict the parameters of the extrusion boxes. Taking the feature encoding \(\mathbf{z}\) as input, a decoder, refereed to _sketch box head_, outputs a set of sketch boxes \(\mathcal{B}=\{\mathbf{s}_{i},\mathbf{c}_{i},\mathbf{r}_{i}\mid i\in N\}\), where \(\mathbf{s}_{i}\in\mathbb{R}^{3}\) describes the 2D size (_i.e._, length and width) of the box, \(\mathbf{c}_{i}\in\mathbb{R}^{3}\) represents the predicted position of the box's center, and \(\mathbf{r}_{i}\in\mathbb{R}^{4}\) is the rotation quaternion. The positive z-axis of the extrusion box determines the axial direction \(\mathbf{e}_{i}\) of the sketch plane, and the height of the extrusion box is twice the height of the extrude operation (see Fig. 3).
**2D sketch inference.** The sketches in each sketch plane depict the shape contained within the sketch box. Inspired by the recent neural implicit shapes [2, 46], we encode the shape of each sketch into a sketch latent space. To this end, we first project the 3D sampling points _w.r.t_ the corresponding occupancy value onto the sketch plane along the axis \(\mathbf{e}_{i}\). A _sketch head network_ (SK-head) then computes the signed distance from each sampling point to the sketch contour. The distance is negative for points in the sketch and positive for points outside. Each SK-head contains \(N_{lay}\) layers of fully connected layers, with softplus activation functions used between layers, and we clamp the output distance to [-1,1] in the last layer. Each 2D point is concatenated with the feature encoding \(\mathbf{z}\) as a global condition before being fed into the SK-head. Regarding the \(i\)-th SK-head as an implicit function \(f_{i}\), then formally we get:
\[\hat{\mathcal{S}}_{\text{sk}}^{i}=f_{i}(\mathbf{x}_{i}^{t},\mathbf{z}), \tag{1}\]
where \(\mathbf{x}_{i}^{t}\) is the result of a linear transformation of the sampling points contained in the \(i\)-th extrusion box, which can be expressed as \(\mathbf{r}_{i}^{-1}(\mathbf{x}_{i}-\mathbf{c}_{i})\). \(\hat{\mathcal{S}}_{\text{sk}}^{i}\) represents the signed distance field of the \(i\)-th sketch plane.
**Differentiable extrusion.** Next, we calculate the SDF of a cylinder based on the 2D distance field and the extrusion height \(h\). We denote \(\Omega\) as the volume between two hyperplanes \(p^{u}\) and \(p^{l}\), where \(p^{u}\) and \(p^{l}\) are the upper and lower surfaces on which the cylinder is located. Similarly, we define \(\Psi\) as the volume inside the infinite cylinder where the side of the cylinder is located. The implicit field of the \(i\)-th cylinder, \(\hat{\mathcal{S}}_{\text{cyl}}^{i}\), is equal to one of the following cases: (1) the distance from a point \(\mathbf{x}_{i}\) to \(p^{u}\) or \(p^{l}\), when \(\mathbf{x}_{i}\in\Omega\cap\Psi^{\complement}\), where superscript \(\complement\) stands for complement; (2) \(\hat{\mathcal{S}}_{\text{sk}}^{i}\), when \(\mathbf{x}_{i}\in\Omega^{\complement}\cap\Psi\); (3) the distance from \(\mathbf{x}_{i}\) to the intersection curves of the cylinder and hyperplanes, when \(\mathbf{x}_{i}\in\Omega^{\complement}\cap\Psi^{\complement}\); (4) the maximum distance between \(\hat{\mathcal{S}}_{\text{sk}}^{i}\) and the point to \(p^{u}\) or \(p^{l}\), when \(\mathbf{x}_{i}\in\Omega^{\complement}\cap\Psi\). The sub-formulas for each case are as follows:
\[\hat{\mathcal{S}}_{\text{cyl}}^{i}=\begin{cases}max(\hat{\mathcal{S}}_{\text{ sk}}^{i},|\mathbf{x}_{i_{x}}|-h_{i})&,(\hat{\mathcal{S}}_{\text{sk}}^{i}\leq 0) \land(|\mathbf{x}_{i_{x}}|\leq h_{i})\\ |\mathbf{x}_{i_{x}}|-h_{i}&,(\hat{\mathcal{S}}_{\text{sk}}^{i}\leq 0)\land(| \mathbf{x}_{i_{x}}|>h_{i})\\ \hat{\mathcal{S}}_{\text{sk}}^{i}&,(\hat{\mathcal{S}}_{\text{sk}}^{i}>0)\land(| \mathbf{x}_{i_{x}}|\leq h_{i})\\ \left\|\hat{\mathcal{S}}_{\text{sk}}^{i},(|\mathbf{x}_{i_{x}}|-h_{i})\right\| _{2}&,(\hat{\mathcal{S}}_{\text{sk}}^{i}>0)\land(|\mathbf{x}_{i_{x}}|>h_{i}) \end{cases} \tag{2}\]
Combining the above four sub-formulas with the \(max\) and \(min\) operations, the following result is obtained:
\[\hat{\mathcal{S}}_{\text{cyl}}^{i} = min(max(\hat{\mathcal{S}}_{\text{sk}}^{i},|\mathbf{x}_{i_{x}}|-h _{i}),0) \tag{3}\] \[+ \left\|max(\hat{\mathcal{S}}_{\text{sk}}^{i},0),max(|\mathbf{x}_{ i_{x}}|-h_{i},0)\right\|_{2}\]
**Occupancy conversion and assembly.** The occupancy function represents points inside the shape as 1 and points outside the shape as 0, which can be transformed by SDF. Following [13, 33], we use the Sigmoid function to perform differentiable transformation operations:
\[\hat{\mathcal{O}}_{i}=Sigmoid(-\eta\cdot\hat{\mathcal{S}}_{\text{cyl}}^{i}). \tag{4}\]
We finally assemble the occupancy \(\hat{\mathcal{O}}_{i}\) of each cylinder to obtain the reconstructed shape. In order to express complex shapes, many works use intersection, union, and difference operations in CSG in the assembly stage [8, 19, 33, 59]. In contrast to them, we only use the union operation, because the extrusion cylinders can naturally represent concave shapes. This helps us avoid designing intricate loss functions or employing multi-stage training strategies without losing the flexibility of reconstructing shape representations. We adopt the Softmax to compute the union operation as it is shown to be effective in avoiding vanishing gradients [33]:
\[\hat{\mathcal{O}}_{total}=\sum_{i}^{N}Softmax(\varphi\cdot\hat{\mathcal{O}}_{i })\cdot\hat{\mathcal{O}}_{i}, \tag{5}\]
where \(\varphi\) is the modulating coefficient and \(\hat{\mathcal{O}}_{total}\) is the occupancy representation of the final reconstructed shape.
### Loss Function
We train SECAD-Net in a self-supervised fashion through the minimization of the sum of two objective terms. The supervision signal is mainly quantified by the reconstruction loss, which measures the mean squared error between the predicted shape occupancy \(\hat{\mathcal{O}}_{total}\) and the ground truth \(\mathcal{O}_{total}^{*}\):
\[\mathcal{L}_{\textit{recon}}=\mathbb{E}_{x\in\mathbf{X}}\left[(\hat{\mathcal{ O}}_{total}-\mathcal{O}_{total}^{*})^{2}\right], \tag{6}\]
where \(x\) is a randomly sampled point in the shape volume.
However, we find that applying only \(\mathcal{L}_{\textit{recon}}\) makes the network always learn fragmented cylinders. To tackle this problem, we design a 2D sketch loss to facilitate the network to learn the axis of the sketch plane and the complete profile. Specifically, each sketch plane cuts the voxel model to form an occupancy cross-section \(\mathcal{O}_{cs}^{i^{*}}\). We project the 3D sampling points inside the \(i\)-th extrusion box \(\mathcal{B}^{i}\) onto the sketch plane along the axial direction, and calculate the difference between the occupancy value of the projected points \(\hat{\mathcal{O}}_{proj}\) and ground truth \(\mathcal{O}_{cs}^{i^{*}}\) :
\[\mathcal{L}_{\textit{sketch}}=\sum_{i=1}^{N}\mathbb{E}_{x\in\mathcal{B}^{i}} \left[(\hat{\mathcal{O}}_{proj}^{i}-\mathcal{O}_{cs}^{i^{*}})^{2}\right]. \tag{7}\]
The overall objective of SECAD-Net is defined as the combination of the above two terms:
\[\mathcal{L}_{total}=\mathcal{L}_{\textit{recon}}+\lambda\mathcal{L}_{\textit{ sketch}}, \tag{8}\]
where \(\lambda\) is a balance factor.
### CAD Reconstruction
The output of SECAD-Net during the training phase is an implicit occupancy function of the 3D shape. In the prediction stage, we reconstruct CAD models by using sketch-extrude operations instead of the marching cubes method.
**Sketch and extrusion.** To convert a 2D implicit field (Fig. 4 (a)) in the sketch latent space into an editable sketch, we input uniform 2D sampling points to the SK-head, and attach the implicit value to the position of the sampling point to obtain an explicit image-like 2D profile (Fig. 4 (b)). We then use the Teh-Chin chain approximation [43] to extract the contours of the profiles and the hierarchical relationships between them. We further apply Dierckx's fitting [14] to convert the contours into closed B-splines (Fig. 4 (c)).
After extruding each sketch to get the cylinder primitives according to half the height of \(\mathcal{B}^{i}\), we assemble cylinder primitives into cylinders by alternately performing union or difference operations according to the hierarchical relationship between contours (primitive at hierarchy 0 _difference_ primitives at hierarchy 1 in the case of Fig. 4). Finally, we take the union of all cylinders to obtain the CAD model.
**Post-processing.** We take two post-processing operations to clean up overlapping and shredded shapes in the result. First, for any two cylinders, when their overlapping coefficient is greater than 0.95, the smaller of them is discarded. Second, we delete all cylinders whose height is less than 0.01 in the reconstruction result. We demonstrate our final reconstructions in Fig. 5 and Fig. 6.
### Implementation Details
SECAD-Net is implemented in PyTorch and trained on a TITAN RTX GPU from NVIDIA(r). We train our model using an Adam optimizer [20] with learning rate \(1\times 10^{-4}\) and beta parameters (0.5, 0.99). We set both the number of MLP layers in the sketch head network and the number of output cylinders to 4. For hyper-parameters in Eq. 4, Eq. 5 and Eq. 8, we set \(\eta=150\), \(\varphi=25\) and \(\lambda=0.01\) in default, which generally works well in our experiments. Employing a similar training strategy to [59], we first pre-train SECAD-Net on the training datasets for 1,000 epochs using batch size 24, which takes about 8 hours, and fine-tuning on each test shape for 300 epochs, which takes about 3 minutes per shape.
## 5 Experimental Results
In this section, we examine the performance of SECAD-Net on the ABC dataset [21] and Fusion 360 Gallery [50]. Through extensive comparisons and ablation studies, we demonstrate the effectiveness of our approach and show its superiority over state-of-the-art reference approaches for CAD reconstruction.
### Setup
**Dataset preparation.** For the ABC dataset, the voxel grids and sampling point data are provided by [59]. We use 5,000 groups of data for training and 1,000 for testing. For Fusion 360, which does not contain available voxels, we first randomly select 6,000 meshes, then discretize them into internally filled voxels. The train-test split is the same as ABC. We obtain sampling points with the corresponding occu
Figure 4: Illustration of converting 2D implicit sketches into the closed B-splines.
pancy value following [9]. The resolution of voxel shapes is \(64^{3}\) for both datasets, and the number of sampling points is 8,192. Considering that fine-tuning each method and generating high-accuracy meshes is time-consuming, we take 50 shapes from each dataset to form 100 shapes for quantitative evaluation.
**Evaluation metrics.** For quantitative evaluations, we follow the metrics that are commonly used in previous methods [33, 59], including symmetric Chamfer Distance (\(CD\)), Normal Consistency (\(NC\)), Edge Chamfer Distance (\(ECD\)). Details of computing these metrics are given in the supplemental materials. Additionally, we also report the number of generated primitives, #\(p\), as a measure of how easy the output CAD results are to edit.
### Comparison on CAD Reconstruction
We thoroughly compare our method with two types of primitive-based CAD reconstruction methods that output editable CAD models, including two CSG-like methods (, UCSG-Net [19], CSG-Stump [33]) and two cylinder decomposition counterpart (, point2Cyl [46]), ExtrudeNet [34]. For each method, we adopt the implementation provided by the corresponding authors, and use the same training strategy for training and fine-tuning. For CSG-Stump, we set the number of intersection nodes to 64, making it output a comparable number of primitives to other methods. Those methods provide a plethora of comparisons to other techniques and establish themselves as state-of-the-art. Note that for point2Cyl, we only report its results on Fusion 360 dataset, as the ABC dataset does not provide the labels needed to train point2Cyl.
Quantitative results on the ABC and Fusion 360 datasets are reported in Table 1 and Table 2, respectively. It can be seen that the proposed SECAD-Net outperforms both kinds of methods on all evaluation metrics while still generating a relatively small number of primitives. Fig. 5 and Fig. 6 display several qualitative comparison results. For fairness, all the reconstructed CAD models are visualized using marching cubes (MC) with 256 resolution. Visually as shown in the figures, our method achieves much better geometry and topological fidelity with more accurate structures (, holes, junctions) and sharper features.
### CAD Generation via Sketch Interpolation
Although without ground truth labels as guidance, SECAD-Net can learn plausible 2D sketches from raw 3D shapes. Thanks to the implicit sketch representation, we are able to generate different CAD variations when a pair of shapes is interpolated in the complete and continuous sketch latent space, as shown in Fig. 7. The results suggest that the generated sketch is gradually transformed even if the pair of shapes have significantly different structures, and we draw two further conclusions: (1) the predicted position of each extrusion box is relatively deterministic, although the input shape is different (see the left and right column sketches in the leftmost group); (2) when an extrusion box does not contain a shape, our SK-head does not generate a sketch, making the network output an adaptive number of cylinders (see the middle column sketches of the leftmost and the rightmost groups).
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Methods & CD\(\downarrow\) & ECD\(\downarrow\) & NC\(\uparrow\) & \#P\(\downarrow\) \\ \hline \hline UCSG-Net [19] & 1.849 & 1.255 & 0.820 & 12.84 \\ \hline CSG-Stump [33] & 4.031 & 0.754 & 0.828 & 17.18 \\ \hline ExtrudeNet [34] & 0.471 & 0.914 & 0.852 & 14.46 \\ \hline Ours & **0.330** & **0.724** & **0.863** & **4.30** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison between reconstruction results on ABC dataset.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Methods & CD\(\downarrow\) & ECD\(\downarrow\) & NC\(\uparrow\) & \#P\(\downarrow\) \\ \hline \hline UCSG-Net [19] & 2.950 & 5.277 & 0.770 & 10.84 \\ \hline CSG-Stump [33] & 2.781 & 4.590 & 0.744 & 12.08 \\ \hline Point2Cyl [46] & 13.889 & 14.657 & 0.669 & **2.76** \\ \hline ExtrudeNet [34] & 2.263 & 3.558 & **0.819** & 15.72 \\ \hline Ours & **2.052** & **3.282** & 0.803 & 5.44 \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison between reconstruction results on Fusion 360 dataset.
Figure 5: Visual comparison between reconstruction results on ABC dataset.
### Ablations
We perform ablation studies to carefully analyze the efficiency of major components of our designed model. All quantitative metrics are measured on the ABC dataset.
**Effect of network design and sketch loss.** We first examine the effect of the number of components/parameters in SECAD-Net, including the number of SK-heads (\(N_{sh}\)), the number of fully connected layers (\(N_{lay}\)) in each SK-head, and the number of output cylinders (\(N_{cyl}\)). Then we show the necessity of the sketch loss by deactivating it to train the network. The quantified results are presented in Table 3. Settings (a), (b), and (c) show that reducing the number of SK-heads or increasing the number of cylinder outputs will damage the model prediction accuracy. Settings (b), (d), and (e) show that increasing the number of MLP layers in SK-head or enabling \(\mathcal{L}_{sketch}\) will improve the prediction accuracy.
**Effect of implicit sketch representation.** To assess the efficiency of neural implicit sketch representation, we adopt two other classical shape representations, namely binary space partitioning (BSP [8]) and box-like primitives, to compare with our SK-head in SECAD-Net. For BSP, we set the number of output convex shapes to 8, each containing 12 partitions. The assembly method is consistent with [8] to
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Representation & CD\(\downarrow\) & ECD\(\downarrow\) & NC\(\uparrow\) & \#P\(\downarrow\) \\ \hline \hline Box primitives & 0.523 & 0.982 & 0.825 & 5.38 \\ \hline BSP & 0.612 & 0.838 & 0.852 & 5.84 \\ \hline SK-head (Ours) & **0.330** & **0.724** & **0.863** & **4.30** \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study on sketch representation.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Settings & (a) & (b) & (c) & (d) & (e) \\ \hline \hline \(N_{sh}\) & 1 & 4 & 4 & 4 & 4 \\ \hline \(N_{lay}\) & 2 & 2 & 2 & 4 & 4 \\ \hline \(N_{cyl}\) & 4 & 4 & 8 & 4 & 4 \\ \hline \(\mathcal{L}_{sketch}\) & ✓ & ✓ & ✓ & ✗ & ✓ \\ \hline CD\(\downarrow\) & 2.627 & 0.993 & 1.504 & 0.336 & **0.330** \\ \hline ECD\(\downarrow\) & 1.754 & 0.882 & 1.098 & 0.772 & **0.724** \\ \hline NC\(\uparrow\) & 0.713 & 0.835 & 0.761 & 0.863 & **0.863** \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study on network design and sketch loss. We adopted setting (e) in the final model.
Figure 6: Visual comparison between reconstruction results on Fusion 360 dataset.
Figure 7: For each example, we encode the sketches of top and bottom shapes in latent vector space and then linearly interpolate the corresponding latent codes.
represent 2D sketches. For box-like primitives, 24 rectangles are predicted. We divide them into two subsets in half, take the union operation separately, and subtract the other from one of the union results. The numerical and visual comparison results are shown in Table 4 and Fig. 8, respectively. It can be seen that our implicit field can represent the smoothest shape while obtaining the best reconstruction results.
### Other Applications
By replacing the voxel encoder, SECAD-Net is flexible to reconstruct CAD models from other input shape representations, _e.g_., images and point clouds. Fig. 9 shows the results of SECAD-Net in solving single-view reconstruction (SVR) task. Following the training strategy of previous work [8, 9], we first use voxel data to complete the training of the 3D auto-encoding task, and then train an image encoder with the feature encoding of each shape as the target. The voxels and input images used for the SVR task are obtained directly from Fusion 360. Replacing more input representations, while feasible and meaningful, is not the focus of this paper and we leave it to future research.
Finally, the parameters of both 2D sketches and 3D cylinders are available, thus the CAD results output from SECAD-Net can be directly loaded into existing CAD software for further editing. As shown in the right side of Fig. 9, interpretable CAD variations can be produced via specific editing operations, such as sketch-level curve editing, primitive-level displacement, rotation, scaling, and Boolean operations between primitives.
## 6 Conclusion and Future Work
We have presented a novel neural network that successively learns shape sketch and extrusion without any expensive annotations of shape segmentation and labels as the supervision. Our approach is able to learn smooth sketches, followed by the differentiable extrusion to reconstruct CAD models that are close to the ground truth. We evaluate SECAD-Net using diverse CAD datasets and demonstrate the advantages of our approach by ablation studies and comparing it to the state-of-the-art methods. We further demonstrate our method's applicability in single-image CAD reconstruction. Additionally, the CAD shapes generated by our approach can be directly fed into off-the-shelf CAD software for sketch-level or cylinder primitive-level editing.
In future work, we plan to extend our approach to learn more CAD-related operations such as _revolve, level, and sweep_. Besides, we find that current deep learning models perform poorly on datasets with large differences in shape geometry and structure. Therefore, another promising direction is to explore how to improve the generalization of neural networks and enhance the realism of the generated shapes by learning structural and topological information.
**Acknowledgments.** We thank the anonymous reviewer for their valuable suggestions. This work is partially funded by the National Natural Science Foundation of China (U22B2034, 62172416, U21A20515, 62172415), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2022131).
Figure 8: Visual comparison results for ablation study on sketch representation.
Figure 9: SECAD-Net can aid in more applications. Left: the results of single-view reconstruction. Right: a subsequent CAD editing by changing the predicted cylinder primitives. |
2301.06608 | Counting of level crossings for inertial random processes:
Generalization of the Rice formula | We address the counting of level crossings for inertial stochastic processes.
We review Rice's approach to the problem and generalize the classical Rice
formula to include all Gaussian processes in their most general form. We apply
the results to some second-order (i.e., inertial) processes of physical
interest, such as Brownian motion, random acceleration and noisy harmonic
oscillators. For all models we obtain the exact crossing intensities and
discuss their long- and short-time dependence. We illustrate these results with
numerical simulations. | Jaume Masoliver, Matteo Palassini | 2023-01-16T21:05:20Z | http://arxiv.org/abs/2301.06608v2 | # Counting of level crossings for inertial random processes: Generalization of the Rice formula
###### Abstract
We address the counting of level crossings for inertial stochastic processes. We review Rice's approach to the problem and generalize the classical Rice formula to include all Gaussian processes in their most general form. We apply the results to some second-order (i.e., inertial) processes of physical interest, such as Brownian motion, random acceleration and noisy harmonic oscillators. For all models we obtain the exact crossing intensities and discuss their long- and short-time dependence. We illustrate these results with numerical simulations.
pacs: 02.50.Ey, 89.65.Gh, 05.40.Jc, 05.45.Tp
## I Introduction
Level-crossing problems -and related issues such as hitting, extreme-value, first-passage and exit times problems, among others- are not only of deep physical and theoretical interest, but also of considerable practical importance, with countless applications ranging from chemical physics, meteorology, seismology, reliability theory, structural and electrical engineering, and even economics and finance, just to name a few [1; 2; 3; 4; 5; 6]. In a rather general form we may say that the level-crossing problem consists in gathering information on the interval between crossing points to some given level or mark -usually critical- with the ultimate objective of obtaining the probability density of the time intervals between consecutive crossings, a problem which, unfortunately, has no known exact solution [7]. What is however known (at least to some extent) is the counting of level crossings.
The problem of level-crossing counting was first thoroughly discussed during the mid nineteen forties by S. O. Rice [8; 9] within statistical communication theory and it was restricted to stationary Gaussian processes. The main result was the classical Rice formula for the average number of occasions, per unit time, that these processes cross a given level. While Rice was primarily concerned with applications to electrical and radio engineering, the matter has deep and far-reaching effects on other fields of knowledge such as ocean and mechanical engineering, chemical physics, material sciences, laser physics and optics, and many more (see the review [10]). After Rice the problem was first put on firmer mathematical basis by Ito [11], Ylvisaker [12], and particularly by the Scandinavian school of statistics led by Harald Cramer and collaborators [3; 13; 14; 15; 16], among others (see [17; 18; 19] for a small sample).
One of the main achievable goals in the theory of level crossings is provided by the crossing intensity, or average crossing frequency, which is the average number of times (per unit time) that a random process crosses some given level. The inverse of such a quantity has dimensions of time and is called the return period. In mechanical engineering this is a key quantity since it measures the severity of the load on a given structure. For instance, in ocean engineering, in designing walls for the protection against high sea levels the sea surface is generally modeled by stationary Gaussian fields with random excursions from an average height [3].
As we will recall in the next section, in order to develop Rice's approach to a given stochastic process, it is necessary to know the joint probability density of the process and its time derivative, which in many cases is not known. For example, first-order processes driven by white noise are not differentiable, thus this joint density does not exist. One of the objectives of this work is to extend Rice theory and obtain exact expressions of the crossing intensity for linear second-order (i.e., inertial) random processes.
As far as we know, most applications and generalizations of Rice theory are restricted to Gaussian processes and extensions thereof. This is for instance the case of the Slepian model for Gaussian and stationary processes after crossings of the average level [20]. Another extension is addressed to quadratic sums of, again, Gaussian processes (the so-called \(\chi^{2}\) processes [3]), which are important in modeling the response of a given structure to a wind load. In both extensions, solutions are usually numerical and essentially focused on engineering applications. Rice's formula can also be derived from the Kac counting formula [21] for the roots of functions with continuous first derivative, and
for this reason it is sometimes called Kac-Rice formula [22; 23]. The Kac formula has been generalized to scalar-valued random fields [24] and vector-valued random fields [22; 23].
Rice's theory has been widely studied in mathematics and engineering but, to our knowledge, it seems to be less known in physics. Our main goals here are to review the theory using simple arguments and, as mentioned above, to apply it to inertial random process which naturally arise in many physical applications. Previous physical applications of Rice's theory include persistence and first-passage properties (see review [25] and references therein). The number of crossings of the order parameter at a given level has been used to analyze metastable states in the stochastic evolution of spin systems [26; 27], but in this case the evolution is not inertial. Rice's theory was also generalized to determine the number of critical points in stochastic processes and random fields, such as those arising in the statistical physics of disordered systems [28; 29; 30].
The paper is organized as follows. In Sect. II we review the classical Rice formula of the crossing intensity. In Sect. III we obtain the most general expression of the crossing intensity for any Gaussian process. In Sect. IV we apply the results to some particular but relevant Gaussian inertial processes such as Brownian motion and random acceleration process. Sect. V is devoted to random oscillators either damped and undamped with a thorough discussion on different time scales. Concluding remarks are in Sect. VI and some technical details in three appendices.
## II The level-crossing problem and Rice formula
Historically, the level-crossing problem stemmed from Rice zero-crossing problem [8; 9] which in turn originated in Kac's search of the zeros of random polynomials [21]. Rice studied the case in which the random process was given by the explicit form \(X(t)=f(a_{1},\cdots,a_{n};t)\) where \(f(\cdot)\) is any given function and \(a_{1},\cdots,a_{n}\) are random variables. He then obtained an explicit expression for the average number of zeros per unit time when \(X(t)\) is a stationary Gaussian process. The result was latter extended to wider classes of random processes, including non-stationary ones [16]. We will next review the general formula for the counting of level crossings using intuitive arguments rather than a more rigorous mathematical reasoning. We essentially follow Rice original approach [8] as well as Blake and Lindsey excellent review [1], and refer the interested reader to Lindgren's textbook [16] for more rigorous derivations.
### Level-crossing intensity
Let \(X(t)\) be a random process and denote by \(Y(t)=\dot{X}(t)\) its time derivative (also called velocity) which is supposed to exist, at least in the sense of generalized functions, and let \(p(x,y,t)\) be the joint probability density function (PDF) of \(X(t)\) and \(Y(t)\). In a first step, the level-crossing problem consists in counting the number of times that \(X(t)\) attains a certain level or mark \(u\) (which can be time dependent), that is to say, in obtaining statistical information on the random quantity:
\[N_{u}(t_{0},t)=\mbox{ no. of times }X(\tau)=u,\quad(t_{0}\leq\tau\leq t).\]
In some applications it is important to distinguish whether the crossing of level \(u\) occurred while "going up" or "going down", we thus have the number of _upcrossings_,
\[N_{u}^{(+)}(t_{0},t)=\mbox{ no. of times }X(\tau)=u,\ \dot{X}(\tau)>0,\quad(t_{0} \leq\tau\leq t),\]
and we can analogously define the number of _downcrossings_\(N_{u}^{(-)}(t_{0},t)\) in which \(\dot{X}(\tau)<0\). These quantities are obviously random variables depending on the particular realization of the process \(X(t)\).
We will now obtain the probability of having a crossing event to any level \(u\) during a time interval \((t,t+\Delta t)\). Let us first observe that the probability of having more than one crossing during the interval is negligible as long as \(\Delta t\) is small. Therefore, during small time intervals, the probability of having a crossing event equals the probability that \(N_{u}(t,t+\Delta t)=1\). Let us also note that the crossing of any level \(u\) for the process \(X(t)\) during a small time interval \((t,t+\Delta t)\), will take place either (i) if \(X(t)\) is between the positions \(u-Y(t)\Delta t\) and \(u\) while the velocity \(Y(t)\) is positive (upcrossing), as illustrated in Fig. 1, or (ii) if \(X(t)\) is between \(u\) and \(u+|Y(t)|\Delta t\) while \(Y(t)\) is negative (downcrossing).
Consequently, the probability of a crossing during \((t,t+\Delta t)\), either down or up, is
\[\mbox{Prob}\Big{\{}N_{u}(t,t+\Delta t)=1\Big{\}}\] \[=\mbox{Prob}\Big{\{}u-Y(t)\Delta t\leq X(t)\leq u,Y(t)>0\Big{\}}+ \mbox{Prob}\Big{\{}u\leq X(t)\leq u+|Y(t)|\Delta t,Y(t)<0\Big{\}}\]
or, in terms of the joint PDF \(p(x,y,t)\),
\[\mathrm{Prob}\Big{\{}N_{u}(t,t+\Delta t)=1\Big{\}} = \int_{0}^{\infty}dy\int_{u-y\Delta t}^{u}p(x,y,t)dx+\int_{-\infty}^ {0}dy\int_{u}^{u+|y|\Delta t}p(x,y,t)dx\] \[= \Delta t\left[\int_{0}^{\infty}yp(u,y,t)dy+\int_{-\infty}^{0}|y| p(u,y,t)dy\right]+O(\Delta t^{2}),\]
that is,
\[\mathrm{Prob}\Big{\{}N_{u}(t,t+\Delta t)=1\Big{\}}=\Delta t\int_{-\infty}^{ \infty}|y|p(u,y,t)dy+O(\Delta t^{2}). \tag{1}\]
The average number of crossings in \((t,t+\Delta t)\) is thus
\[\big{\langle}N_{u}(t,t+\Delta t)\big{\rangle}=1\times\mathrm{Prob}\big{\{}N_{u }(t,t+\Delta t)=1\big{\}}+0\times\mathrm{Prob}\big{\{}N_{u}(t,t+\Delta t)=0 \big{\}},\]
and by virtue of Eq. (1) we write
\[\big{\langle}N_{u}(t,t+\Delta t)\big{\rangle}=\Delta t\int_{-\infty}^{\infty} |y|p(u,y,t)dy+O(\Delta t^{2}). \tag{2}\]
We define the _intensity (or frequency) of crossings_, \(\mu_{u}(t)\), as the expected number of crossings per unit time, that is
\[\mu_{u}(t)\equiv\lim_{\Delta t\to 0}\frac{\big{\langle}N_{u}(t,t+\Delta t) \big{\rangle}}{\Delta t}, \tag{3}\]
and from Eq. (2) we obtain the generalized Rice formula:
\[\mu_{u}(t)=\int_{-\infty}^{\infty}|y|p(u,y,t)dy \tag{4}\]
Figure 1: Illustration of an upcrossing event. The irregular (black) line represents a simulated random trajectory \(X(t)\), the straight oblique (purple) line has slope \(\dot{X}(t)\). If \(\Delta t\) is small enough, \(X(t)\) will cross the level \(u\), represented by the horizontal solid (green) line in the interval \((t,t+\Delta t)\) if \(\dot{X}(t)>0\) and \(u-\dot{X}(t)\Delta t\leq X(t)\leq u\).
valid for general non-stationary random processes.1 We also see from Eqs. (2)-(4) that the average \(\left\langle N_{u}(t_{0},t)\right\rangle\) of the total number of crossings during a finite time interval \((t_{0},t)\) is
Footnote 1: As we will see below (see Eq. (16)), the term “Rice formula” is usually applied to the case when \(X(t)\) and \(Y(t)\) are independent and stationary Gaussian processes with zero mean. In any case the expression (4) is also termed as Rice formula.
\[\left\langle N_{u}(t_{0},t)\right\rangle=\int_{t_{0}}^{t}\mu_{u}(t^{\prime})dt ^{\prime}=\int_{t_{0}}^{t}dt^{\prime}\int_{-\infty}^{\infty}|y|p(u,y,t^{\prime })dy. \tag{5}\]
Considering that the average of the total number crossings is the sum of the average number of upcrossings plus downcrossings, i.e., \(\left\langle N_{u}(t_{0},t)\right\rangle=\left\langle N_{u}^{(+)}(t_{0},t) \right\rangle+\left\langle N_{u}^{(-)}(t_{0},t)\right\rangle\) (tangencies are supposed to be a set of zero measure [16]), the expressions above can be easily modified to define the intensity of upcrossings \(\mu_{u}^{(+)}(t)\) or downcrossings \(\mu_{u}^{(-)}(t)\) as
\[\mu_{u}^{(+)}(t)=\int_{0}^{\infty}yp(u,y,t)dy, \tag{6}\]
and
\[\mu_{u}^{(-)}(t)=\int_{-\infty}^{0}|y|p(u,y,t)dy=\int_{0}^{\infty}yp(u,-y,t)dy. \tag{7}\]
Obviously,
\[\mu_{u}(t)=\mu_{u}^{(+)}(t)+\mu_{u}^{(-)}(t). \tag{8}\]
An alternative way to deduce the above results is via the Kac counting formula [21]. In order to derive this formula, following Ref.[31], let \(s_{1},s_{2},\dots\) be the crossing times of the \(N_{u}(t,t_{0})\) crossings of level \(u\) in the interval \([t_{0},t]\). Consider a sufficiently small interval \(I_{i}\) around the crossing time \(s_{i}\), so that no other crossings occur in this interval. Then, applying the change of variables \(z=X(t)\) to the identity
\[1=\int_{-\infty}^{\infty}\delta(z-u)dz\,,\]
we obtain
\[1=\int_{I_{i}}\delta(X(t)-u)|\dot{X}(t)|dt\,,\]
and summing over all the crossings we obtain the celebrated Kac counting formula [21] (in physicists' notation):
\[N_{u}(t,t_{0})=\int_{t_{0}}^{t}\delta(X(t^{\prime})-u)|\dot{X}(t^{\prime})|dt ^{\prime}\,.\]
The expectation value of \(N_{u}(t,t_{0})\) is thus
\[\left\langle N_{u}(t,t_{0})\right\rangle=\int_{t_{0}}^{t}dt^{\prime}\int_{- \infty}^{\infty}dx\int_{-\infty}^{\infty}dyp(x,y,t^{\prime})\delta(x-u)|y|= \int_{t_{0}}^{t}dt^{\prime}\int_{-\infty}^{\infty}p(u,y,t^{\prime})|y|dy\,.\]
For a rigorous derivation, we refer to [31] (p.265). Generalizations of the Kac formula (sometimes called Kac-Rice formula) were later obtained for scalar-valued random fields (\(X(t)\in\mathbb{R}\) and \(t\in\mathbb{R}^{d}\) with \(d>1\)) [24], as well as vector-valued random fields (\(X(t)\in\mathbb{R}^{d^{\prime}}\) and \(t\in\mathbb{R}^{d}\), generally with \(d^{\prime}<d\)). Moreover, extensions to the counting of critical points were also obtained. For rigorous recent reviews of these developments, we refer to the books [22; 23]. In this work, we will only be concerned with one-dimendional random processes (\(d=d^{\prime}=1\)). The extension of our results to higher dimensions appears rather difficult due to the increasing complexity of the geometry.
### Stationary processes. Return time and maximum distribution
We now suppose that \(X(t)\) is a stationary random process, which means that it is time homogeneous and that there exists a time-independent stationary distribution defined as [5]
\[p_{\rm st}(x,y)=\lim_{t\to\infty}p(x,y,t).\]
This leads us to define the stationary intensity of crossings by
\[\mu_{u}\equiv\lim_{t\to\infty}\mu_{u}(t).\]
Taking the limit \(t\to\infty\) in Eq. (4), Rice formula now reads
\[\mu_{u}=\int_{-\infty}^{\infty}|y|p_{\rm st}(u,y)dy, \tag{9}\]
and the average for the total number of crossings over a finite time interval \(\Delta t=t-t_{0}\) is given by (cf. Eq. (5))
\[\left\langle N_{u}(t_{0},t_{0}+\Delta t)\right\rangle=\mu_{u}\Delta t=\Delta t \int_{-\infty}^{\infty}|y|p_{\rm st}(u,y)dy. \tag{10}\]
These expressions can be trivially extended to upcrossings and downcrossings. We thus have
\[\mu_{u}^{(+)}=\int_{0}^{\infty}yp_{\rm st}(u,y)dy,\quad\quad\quad\mu_{u}^{(-)} =\int_{0}^{\infty}yp_{\rm st}(u,-y)dy.\]
Related to the stationary intensity of upcrossings is the _return period_\(T_{u}\) to a level \(u\), defined as
\[T_{u}=\frac{1}{\mu_{u}^{(+)}}, \tag{11}\]
which provides the mean time interval between successive upcrossings of the level \(u\).
Let us next briefly explain the connection between crossing counting and the distribution of the maximum value taken by a random process \(X(\tau)\) on a given time interval \(\tau\in(t_{0},t)\). We introduce such a connection through an engineering example. The return period is a key quantity in engineering for designing the maximal load that a mechanical structure can withstand before suffering structural damage, as well as for knowing its operative life [3]. Designers want to know the probability that the structure will suffer a load surpassing the design load \(u\) during a certain service time \(t\). Thus, if \(X(\tau)\) represents the load at time \(\tau\) and
\[M(t_{0},t)=\max\{X(\tau),\ t_{0}\leq\tau\leq t\}\]
is the maximum load within the service time, \((t_{0},t)\), we want to know \(\mbox{Prob}\{M(t_{0},t)>u\}\). There is a very close relation between this probability and the probability \(\mbox{Prob}\{N_{u}^{(+)}(t)>0\}\) that there has been at least one upcrossing to level \(u\) during the interval \((t_{0},t)\). Indeed, assuming that the process starts below the critical value, \(X_{0}(t_{0})=x_{0}<u\), we have
\[\mbox{Prob}\{M(t_{0},t)>u\}=\mbox{Prob}\{N_{u}^{(+)}(t)>0\}, \tag{12}\]
which connects two aspects of the level-crossing problem as are extreme values and level-crossing counting.
Such a connection can be further enhanced in the following way. Let us first note that
\[\mbox{Prob}\{M(t_{0},t)>u\}=1-\mbox{Prob}\{M(t_{0},t)\leq u\},\]
but \(\mbox{Prob}\{M(t_{0},t)\leq u\}\) is the distribution function of the maximum, that is
\[F(u,t|x_{0},t_{0})=\mbox{Prob}\{M(t_{0},t)\leq u|X(t_{0})=x_{0}\}.\]
However, \(F(u,t|x_{0},t_{0})\) is related to the survival (or non-hitting) probability \(S\) at time \(t\) of the process \(X(\tau)\),
\[S(u,t|x_{0},t_{0})=\mbox{Prob}\{X(\tau)\neq u;\ \forall\tau\in(t_{0},t)|\ X(t_{0})= x_{0}\},\]
which is instrumental in first-passage problems. Indeed, as we have shown (see, for instance, [5; 32; 33])
\[F(u,t|x_{0},t_{0})=S(u,t|x_{0},t_{0})\Theta(u-x_{0}),\]
(\(\Theta(\cdot)\) is the Heaviside step function) and since we have assumed that \(x_{0}<u\) we simply write
\[F(u,t|x_{0},t_{0})=S(u,t|x_{0},t_{0}).\]
In other words
\[\mathrm{Prob}\{M(t_{0},t)\leq u\}=S(u,t|x_{0},t_{0}),\]
and from Eq. (12) we write
\[\mathrm{Prob}\{N_{u}^{(+)}(t)>0\}=1-S(u,t|x_{0},t_{0}) \tag{13}\]
which clearly shows the relationship between first-passage (via survival probability) and level-crossing counting. For diffusion processes the survival probability can be obtained by solving the Fokker-Planck equation with initial and absorbing boundary conditions [5] and this can provide a way of obtaining the exact expression of the probability \(\mathrm{Prob}\{N_{u}^{(+)}(t)>0\}\) which is, in general, rather difficult to get [3].
Let us finally obtain a practical bound for \(\mathrm{Prob}\{M(t_{0},t)>u\}\) which may be relevant in applications. From the Markov inequality we have
\[\mathrm{Prob}\{N_{u}^{(+)}(t)>0\}\leq\langle N_{u}^{(+)}(t)\rangle\qquad \Rightarrow\qquad\mathrm{Prob}\{M(t_{0},t)>u\}\leq\langle N_{u}^{(+)}(t)\rangle,\]
and for stationary processes we write (cf. Eqs. (10))
\[\langle N_{u}^{(+)}(t)\rangle=\mu_{u}^{(+)}\Delta t\qquad\Rightarrow\qquad \mathrm{Prob}\{M(t_{0},t)>u\}\leq\mu_{u}^{(+)}\Delta t\]
(\(\Delta t=t-t_{0}\)) and using Eq. (11) we have
\[\mathrm{Prob}\{M(t_{0},t)>u\}\leq\frac{\Delta t}{T_{u}},\]
which is a useful bound for the probability that the maximum load exceeds the critical level during the time interval \(\Delta t\).
### The original Rice formula
As mentioned in the introduction, Rice's formula for level crossings was first obtained for stationary Gaussian processes, assuming that the process \(X(t)\) and its derivative \(Y(t)=\dot{X}(t)\) are uncorrelated and, hence, independent.2 In such a case the joint PDF will be given by \(p(x,y,t)=p(x)p(y)\), that is,
Footnote 2: Recall that stationarity means that the joint PDF, \(p(x,y,t)=p(x,y)\), does not depend of time, which in particular implies that the averages \(\langle X(t)\rangle=m_{x}\) and \(\langle Y(t)\rangle=m_{y}\) do not depend on time either and that \(\langle X(t+\tau)\dot{X}(t)\rangle=\langle X(\tau)\dot{X}(0)\rangle\) for all \(\tau\) and \(\langle X(t)\dot{X}(t)\rangle=\langle X(0)\dot{X}(0)\rangle\). On the other hand, uncorrelated implies that \(\langle X(\tau)\dot{X}(0)\rangle=\langle X(\tau)\rangle\langle\dot{X}(0)\rangle\) and, in particular \(\sigma_{xy}^{2}=\langle X(0)\dot{X}(0)\rangle-\langle X(0)\dot{X}(0)\rangle=0\). Since Gaussian processes are determined by the first two moments, then uncorrelated (i.e., \(\sigma_{xy}^{2}=0\)) it also means being independent.
\[p(x,y)=\frac{1}{2\pi\sigma_{x}\sigma_{y}}\exp\left\{-\frac{(x-m_{x})^{2}}{2 \sigma_{x}^{2}}-\frac{(y-m_{y})^{2}}{2\sigma_{y}^{2}}\right\}, \tag{14}\]
where \(m_{x}\), \(m_{y}\) are the stationary averages and \(\sigma_{x}^{2}\), \(\sigma_{y}^{2}\) the stationary variances of \(X(t)\) and \(\dot{X}(t)\) respectively.
In the original formulation it is also assumed that velocity has zero mean, i.e., \(m_{y}=0\), then substituting Eq. (14) into Eq. (4) we readily obtain the classical Rice formula for the intensity of crossing the level \(u\):
\[\mu_{u}=\frac{\sigma_{y}}{\pi\sigma_{x}}e^{-(u-m_{x})^{2}/2\sigma_{x}^{2}}. \tag{15}\]
When we set \(u=m_{x}\) -corresponding to the crossing of the mean value- we get
\[\mu_{m}=\frac{\sigma_{y}}{\pi\sigma_{x}}, \tag{16}\]
which agrees with the zero-crossing intensity originally devised by Rice [8].
## III Level-crossing counting for general Gaussian processes
We have seen that Rice formula is usually written for stationary Gaussian processes \(X(t)\) and when \(\dot{X}(t)\) has zero mean and is independent of \(X(t)\) (cf. Eq. (15)). Before specifically addressing inertial processes we will present Rice formula for any general Gaussian process with no restrictions. Let us thus suppose that \(X(t)\) is a Gaussian process, then its derivative, \(\dot{X}(t)=Y(t)\), is also Gaussian since the derivative is a linear operation on \(X(t)\) and keeps the Gaussian character. In its more general form the joint PDF of the bidimensional process \((X(t),Y(t))\) is explicitly given by the Gaussian function [5]
\[p(x,y,t)=\frac{1}{2\pi\Delta(t)}\exp\biggl{\{}-\frac{1}{2\Delta^{2}(t)}\Bigl{[} \sigma_{y}^{2}(t)(x-m_{x}(t))^{2}-2\sigma_{xy}(t)(x-m_{x}(t))(y-m_{y}(t))+ \sigma_{x}^{2}(t)(y-m_{y}(t))^{2}\Bigr{]}\biggr{\}}, \tag{17}\]
where
\[m_{x}(t)=\langle X(t)\rangle,\qquad m_{y}(t)=\langle Y(t)\rangle, \tag{18}\]
\[\sigma_{x}^{2}(t)=\Bigl{\langle}\left[X(t)-m_{x}(t)\right]^{2}\Bigr{\rangle},\quad\sigma_{xy}(t)=\Bigl{\langle}\left[X(t)-m_{x}(t)\right]\!\left[Y(t)-m_{ y}(t)\right]\Bigr{\rangle},\quad\sigma_{y}^{2}(t)=\Bigl{\langle}\left[Y(t)-m_{y}(t) \right]^{2}\Bigr{\rangle}, \tag{19}\]
are mean values and variances, and the discriminant \(\Delta(t)\) (not to be confused with the time increment \(\Delta t\) used earlier) is
\[\Delta(t)=\sqrt{\sigma_{x}^{2}(t)\sigma_{y}^{2}(t)-\sigma_{xy}^{2}(t)}. \tag{20}\]
The total crossing intensity \(\mu_{u}(t)\) will be given by Rice formula after substituting Eq. (17) into Eq. (4). We will first evaluate the intensities of upcrossings and downcrossings, \(\mu_{u}^{(+)}(t)\) and \(\mu_{u}^{(-)}(t)\) respectively and then obtain the total frequency \(\mu_{u}(t)\). From Eqs. (6) and (17) we write
\[\mu_{u}^{(+)}(t) = \int_{0}^{\infty}yp(u,y,t)dy \tag{21}\] \[= \frac{1}{2\pi\Delta}e^{-\sigma_{y}^{2}(u-m_{x})/2\Delta^{2}} \int_{0}^{\infty}y\exp\left\{-\frac{\sigma_{x}^{2}}{2\Delta^{2}}(y-m_{y})^{2 }+\frac{\sigma_{xy}(u-m_{x})}{\Delta^{2}}(y-m_{y})\right\}dy,\]
which, after performing the Gaussian integral and simple manipulations, yields
\[\mu_{u}^{(+)}(t)=\frac{\Delta(t)}{2\pi\sigma_{x}^{2}(t)}e^{-(u-m_{x}(t))^{2} /2\sigma_{x}^{2}(t)}\left[e^{-\eta_{u}^{2}(t)}+\sqrt{\pi}\eta_{u}(t)\mbox{ Erfc}\bigl{[}-\eta_{u}(t)\bigr{]}\right], \tag{22}\]
where
\[\eta_{u}(t)\equiv\frac{m_{y}(t)\sigma_{x}(t)}{\sqrt{2}\Delta(t)}+\frac{ \sigma_{xy}(t)}{\sqrt{2}\Delta(t)\sigma_{x}(t)}[u-m_{x}(t)], \tag{23}\]
and
\[\mbox{Erfc}(z)=\frac{2}{\sqrt{\pi}}\int_{z}^{\infty}e^{-t^{2}}dt\]
is the complementary error function.
As to downcrossings, from Eqs. (7) and (17) we have
\[\mu_{u}^{(-)}(t) = \int_{0}^{\infty}yp(x,-y,t)dy \tag{24}\] \[= \frac{1}{2\pi\Delta}e^{-\sigma_{y}^{2}(u-m_{x})/2\Delta^{2}} \int_{0}^{\infty}y\exp\left\{-\frac{\sigma_{x}^{2}}{2\Delta^{2}}(y+m_{y})^{2} -\frac{\sigma_{xy}(u-m_{x})}{\Delta^{2}}(y+m_{y})\right\}dy,\]
and by comparing Eq. (21) with Eq. (24) we see that, knowing \(\mu_{u}^{(+)}(t)\) we can recover \(\mu_{u}^{(-)}(t)\) after making the replacements
\[m_{y}(t)\longrightarrow-m_{y}(t),\qquad\sigma_{xy}(t)\longrightarrow-\sigma_ {xy}(t).\]
As a result from Eq. (22) we get
\[\mu_{u}^{(-)}(t)=\frac{\Delta(t)}{2\pi\sigma_{x}^{2}(t)}e^{-(u-m_{x}(t))^{2}/2 \sigma_{x}^{2}(t)}\left[e^{-\eta_{u}^{2}(t)}-\sqrt{\pi}\eta_{u}(t)\mathrm{Erfc} \big{[}\eta_{u}(t)\big{]}\right], \tag{25}\]
with \(\eta_{u}(t)\) given in Eq.(23).
The total number of crossings is given by the sum (cf. Eq. (8))
\[\mu_{u}(t)=\mu_{u}^{(+)}(t)+\mu_{u}^{(-)}(t).\]
Adding Eqs. (22) and (25) and taking into account that
\[\mathrm{Erfc}(-z)-\mathrm{Erfc}(z)=2\mathrm{Erf}(z),\]
where
\[\mathrm{Erf}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{-x^{2}}dx,\]
is the error function, we obtain
\[\mu_{u}(t)=\frac{\Delta(t)}{\pi\sigma_{x}^{2}(t)}e^{-(u-m_{x}(t))^{2}/2\sigma_ {x}^{2}(t)}\left[e^{-\eta_{u}^{2}(t)}+\sqrt{\pi}\eta_{u}(t)\mathrm{Erf}\big{[} \eta_{u}(t)\big{]}\right]\,. \tag{26}\]
Equations (22), (25) and (26) constitute the most general forms of Rice formula for any Gaussian process.
Let us finish this section by presenting two particular but important cases.
(i) In the first case we suppose that \(X(t)\) and \(Y(t)\) are independent, in which case
\[\sigma_{xy}(t)=0\qquad\Rightarrow\qquad\Delta(t)=\sigma_{x}(t)\sigma_{y}(t)\]
and Eq. (26) reads
\[\mu_{u}(t)=\frac{\sigma_{y}(t)}{\pi\sigma_{x}(t)}e^{-(u-m_{x}(t))^{2}/2\sigma _{x}^{2}(t)}\left\{e^{-m_{y}^{2}(t)/2\sigma_{y}^{2}(t)}+\left(\frac{\pi}{2} \right)^{1/2}\frac{m_{y}(t)}{\sigma_{y}(t)}\mathrm{Erf}\left[\frac{m_{y}(t)}{ \sqrt{2}\sigma_{y}(t)}\right]\right\}. \tag{27}\]
If, in addition, \(m_{y}(t)=0\), we have
\[\mu_{u}(t)=\frac{\sigma_{y}(t)}{\pi\sigma_{x}(t)}e^{-(u-m_{x}(t))^{2}/2\sigma _{x}^{2}(t)}, \tag{28}\]
which coincides with the Rice original formula (15) in the stationary case when \(\sigma_{x}\), \(\sigma_{y}\) and \(m_{x}\) are time-independent.
(ii) A second and more relevant case consists in counting the crossing of the mean value of the process, regardless whether \(X(t)\) and \(\dot{X}(t)\) are correlated or not. In such a case (which is, in fact, equivalent to the zero-crossing problem and will be referred to as mean-crossing problem from now on) we have
\[u=m_{x}(t)\qquad\Rightarrow\qquad\eta_{u}(t)=\frac{m_{y}(t)\sigma_{x}(t)}{ \sqrt{2}\Delta(t)}\]
and Eq. (26) reads
\[\mu_{m}(t)=\frac{\Delta(t)}{\pi\sigma_{x}^{2}(t)}\left[e^{-m_{y}^{2}(t)\sigma _{x}^{2}(t)/2\Delta^{2}(t)}+\left(\frac{\pi}{2}\right)^{1/2}\frac{m_{y}(t) \sigma_{x}(t)}{\Delta(t)}\mathrm{Erf}\left[\frac{m_{y}(t)\sigma_{x}(t)}{\sqrt {2}\Delta(t)}\right]\right], \tag{29}\]
where we use the notation
\[\mu_{m}(t)=\mu_{m_{x}(t)}(t), \tag{30}\]
for the crossing of the mean value. Finally, if the average velocity is zero, \(m_{y}(t)=0\), we get
\[\mu_{m}(t)=\frac{\Delta(t)}{\pi\sigma_{x}^{2}(t)}, \tag{31}\]
or more explicitly (cf. Eq. (20))
\[\mu_{m}(t)=\frac{\sigma_{y}(t)}{\pi\sigma_{x}(t)}\sqrt{1-\left[\sigma_{xy}(t)/ \sigma_{x}(t)\sigma_{y}(t)\right]^{2}}, \tag{32}\]
which can be regarded as the generalization of the original Rice formula (16) for the zero-crossing problem in the case when \(X(t)\) and \(\dot{X}(t)\) are correlated (i.e., \(\sigma_{xy}(t)\neq 0\)).
Gaussian inertial processes. First examples
In many physical applications one frequently runs into random processes whose time evolution is given by a second-order differential equation with the appearance of inertial terms represented by second-order derivatives. For one-dimensional processes \(X(t)\) a rather general form is given by
\[\ddot{X}=F\Big{(}t,X,\dot{X},\xi(t)\Big{)}, \tag{33}\]
where \(F\) is an arbitrary function and \(\xi(t)\) is the input noise, a given random process which is usually modeled as Gaussian white noise. The origin of such equations typically stems from Newton's second law of motion, where \(X(t)\) represents the position of a particle moving under the effects of deterministic and random forces embodied by the function \(F\). A paradigmatic example is the "noisy oscillator", a linear (or non-linear) oscillator perturbed by random influences, either in the frequency (Kubo oscillator) or with an external random force or even with a random damping [34]. A simpler, yet very relevant case, is provided by the inertial Brownian motion in which \(F\) is a linear function independent of \(t\) and \(X\). An even simpler but highly nontrivial case is given by the random acceleration process where \(F=k\xi(t)\). By applying the results of the previous section we will obtain exact expressions of the crossing intensity for these linear inertial cases. In this section we address the examples of Brownian motion and random acceleration, while in the next section we deal with the noisy oscillator.3
Footnote 3: We note that any random process \(X(t)\) described by a second-order differential equation such as Eq.(33) is necessarily non Markovian [5]. However if we define \(Y(t)=\dot{X}(t)\), then the two dimensional random process \((X(t),Y(t))\) obeys a first-order equation (see for example the discussion after Eq.(37)), and is thus Markovian.
Before proceeding further let us note that all examples studied are linear. That is, \(F\) is a linear function and the evolution equation (33) can be written as
\[\ddot{X}+\beta\dot{X}+\alpha X+\gamma=k\xi(t), \tag{34}\]
where \(\alpha\), \(\beta\), \(\gamma\) and \(k\) are usually constant parameters, although they may be functions of time as in aging processes. In any case when the input noise \(\xi(t)\) is Gaussian, the linearity of Eq. (34) ensures that the output process \(X(t)\) is also Gaussian.
As is well known, in second-order equations inertial influences decay faster than damping effects, so that, as time increases (\(\beta t\gg 1\)) we have \(|\ddot{X}(t)|\ll|\beta\dot{X}(t)|\)[35]. In the asymptotic regime \(\beta t\to\infty\), Eq. (34) reduces to a first-order equation
\[\beta\dot{X}=-\alpha X-\gamma+k\xi(t), \tag{35}\]
which is the well known Ornstein-Uhlenbeck process. Let us finally remark that Rice's approach is not applicable to first-order processes driven by white noise. Indeed, in such a case the variance of \(\xi(t)\) is infinite and restricting ourselves to linear processes Eq. (35) implies that the variance of \(\dot{X}(t)\) is also infinite. As a result the joint density \(p(x,y,t)\) does not exists and Rice's approach is meaningless.4
Footnote 4: This can be directly seen below (cf. Eq. (48)) where the limit \(\beta\to\infty\) results in an infinite crossing intensity, which is absurd.
### Brownian motion
Suppose that \(X(t)\) represents the position of a Brownian particle moving inside a medium of damping constant \(\beta>0\) and external random force \(\xi(t)\), whose evolution equation is given by
\[\ddot{X}+\beta\dot{X}=k\xi(t), \tag{36}\]
where \(\xi(t)\) is zero-mean Gaussian white noise,
\[\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime}), \tag{37}\]
and \(k>0\) is the noise intensity. The initial conditions are \(X(0)=x_{0}\) and \(\dot{X}(0)=y_{0}\).
The second-order equation (36) is equivalent to the first-order system
\[\dot{X} = Y\] \[\dot{Y} = -\beta Y+k\xi(t),\]
whose solution reads
\[X(t) = x_{0}+\frac{y_{0}}{\beta}\left(1-e^{-\beta t}\right)+\frac{k}{ \beta}\int_{0}^{t}\left[1-e^{-\beta(t-t^{\prime})}\right]\xi(t^{\prime})dt^{\prime} \tag{38}\] \[Y(t) = y_{0}e^{-\beta t}+k\int_{0}^{t}e^{-\beta(t-t^{\prime})}\xi(t^{ \prime})dt^{\prime}, \tag{39}\]
from which we see (using \(\langle\xi(t)\rangle=0\)) that
\[m_{x}(t)=\langle X(t)\rangle=x_{0}+\frac{y_{0}}{\beta}\left(1-e^{-\beta t} \right),\qquad m_{y}(t)=\langle Y(t)\rangle=y_{0}e^{-\beta t}. \tag{40}\]
Let us observe that the Gaussian character of the input noise \(\xi(t)\) and the linearity of Eqs. (38) and (39) (or, alternatively, the linearity of Eq. (36)) show that \(X(t)\) and \(Y(t)\) are Gaussian processes as well. Therefore, in order to obtain the crossing intensity \(\mu_{u}(t)\) for the Brownian particle to cross some position \(u\), we may apply the results of the previous section which, as we have seen, need the knowledge of the variances \(\sigma_{x}^{2}(t)\), \(\sigma_{y}^{2}(t)\) and \(\sigma_{xy}(t)\).
In Appendix A we obtain
\[\sigma_{x}^{2}(t)=\frac{k^{2}}{\beta^{3}}\left(\beta t-\frac{3}{2}+2e^{-\beta t }-\frac{1}{2}e^{-2\beta t}\right), \tag{41}\]
\[\sigma_{y}^{2}(t)=\frac{k^{2}}{2\beta}\left(1-e^{-2\beta t}\right), \tag{42}\]
and
\[\sigma_{xy}(t)=\frac{k^{2}}{\beta^{2}}\left(\frac{1}{2}-e^{-\beta t}+\frac{1} {2}e^{-2\beta t}\right). \tag{43}\]
The exact expression for the crossing intensity \(\mu_{u}(t)\) is obtained by substituting Eqs. (40)-(43) into Eq. (26), along with the expressions for \(\Delta(t)\) and \(\eta_{u}(t)\) given by Eqs. (20) and (23) respectively. This ends in a rather cumbersome expression which we will not write.
As \(t\to\infty\), specifically for \(\beta t\gg 1\), we see that
\[m_{x}(t)\simeq x_{0}+\frac{y_{0}}{\beta},\qquad m_{y}(t)\simeq 0, \tag{44}\]
and
\[\sigma_{x}^{2}(t)\simeq\frac{k^{2}t}{\beta^{2}},\qquad\sigma_{y}^{2}(t)\simeq \frac{k^{2}}{2\beta},\qquad\sigma_{xy}(t)\simeq\frac{k^{2}}{2\beta^{2}},\qquad \qquad(\beta t\gg 1). \tag{45}\]
The fact that \(\sigma_{x}^{2}(t)\) grows linearly with time clearly shows the well-known fact that Brownian motion is not stationary. In this asymptotic case we have
\[\Delta(t)\simeq\frac{k^{2}t^{1/2}}{\sqrt{2}\beta^{3/2}},\qquad\frac{\Delta(t) }{\sigma_{x}^{2}(t)}\simeq\left(\frac{\beta}{2t}\right)^{1/2},\qquad\eta_{u}( t)\simeq\frac{\beta^{1/2}}{4kt}(u-m_{x}),\]
and Eq. (26) becomes
\[\mu_{u}(t)\simeq\frac{1}{\pi}\left(\frac{\beta}{2t}\right)^{1/2}e^{-\beta^{2} (u-m_{x})^{2}/2k^{2}t}\biggl{\{}e^{-\beta(u-m_{x})^{2}/(4kt)^{2}}+\sqrt{\pi} \frac{\beta^{1/2}}{4kt}(u-m_{x})\mbox{Erf}\Bigl{[}\frac{\beta^{1/2}}{4kt}(u-m_ {x})\Bigr{]}\biggr{\}},\quad(\beta t\gg 1). \tag{46}\]
Note that when \(u=m_{x}(t)\) the mean-crossing intensity is simply given by (cf. Eq. (30))
\[\mu_{m}(t)\simeq\frac{1}{\pi}\left(\frac{\beta}{2t}\right)^{1/2},\qquad(\beta t \gg 1).\]
This asymptotic behavior is nonetheless extensible to any crossing level. Indeed, recalling that [36]
\[\mathrm{Erf}(z)=\frac{2}{\sqrt{\pi}}e^{-z^{2}}[z+O(z^{2})], \tag{47}\]
and expanding the exponentials in (46) as \(\beta t\gg 1\) we easily see that
\[\mu_{u}(t)\simeq\frac{1}{\pi}\left(\frac{\beta}{2t}\right)^{1/2},\qquad\qquad( \beta t\gg 1), \tag{48}\]
which is valid for any crossing level \(u\). Let us note that while the crossing intensity decreases with time, the total number of crossings actually increases with time. Indeed, from Eqs. (5) and (48) we see that the average number of crossings within the interval \((t_{0},t)\) is given by (\(t_{0}\) and \(t\) large)
\[\langle N_{u}(t)\rangle=\int_{t_{0}}^{t}\mu_{u}(t^{\prime})dt^{\prime}\simeq \frac{1}{\pi}(2\beta t)^{1/2}\left[1-\sqrt{t_{0}/t}\right]. \tag{49}\]
We validate the analytical results presented above by Monte Carlo simulation of the evolution equation Eqs. (36). The simulations are carried out using the algorithm of Ref.[37], that we describe in Appendix B. Fig. 2 shows examples of random trajectories with \(\beta=1,k=1\) (see also Appendix B for the definition of the units of the simulation parameters) and \(x_{0}=y_{0}=0\). For each time interval \([t,t+\delta t)\) we measure \(\mu_{u}(t)\) by averaging over a large number (typically \(10^{6}\)) of trajectories. Fig. 3 shows the results corresponding to the above choice of parameters, for different values of \(u\), together with the analytical expression obtained by substituting Eqs. (40)-(43) into Eq. (26).
### Random acceleration
Let \(X(t)\) be the position of an unbounded particle subject to a random acceleration represented by zero-mean Gaussian white noise \(\xi(t)\). The dynamical equation of the process is now given by
\[\ddot{X}(t)=k\xi(t). \tag{50}\]
This apparently simple case represents nonetheless a nontrivial example of a non-Markovian process and it has been the object of research in the literature related to first-exit times [38], polymers [39], maxima statistics [40] and resettings [41] just to name a small sample.
Figure 2: Examples of random trajectories \(X(t)\) for the Brownian motion, with \(k=1,\beta=1,x_{0}=0,y_{0}=0\). Simulations are performed with a variable time step \(dt=0.01\sqrt{t}\).
Denoting again \(Y(t)=\dot{X}(t)\), and assuming \(X(0)=x_{0}\) and \(\dot{X}(0)=y_{0}\), the process, after integrating Eq. (50), is explicitly given by
\[X(t) = x_{0}+y_{0}t+k\int_{0}^{t}(t-t^{\prime})\xi(t^{\prime})dt^{\prime}, \tag{51}\] \[Y(t) = y_{0}+k\int_{0}^{t}\xi(t^{\prime})dt^{\prime}, \tag{52}\]
and
\[m_{x}(t)=x_{0}+y_{0}t,\qquad m_{y}(t)=y_{0}. \tag{53}\]
The bidimensional process \((X(t),Y(t))\) is evidently Gaussian and proceeding as in Appendix A we can obtain the variances. However, since this model is a particular case of the Brownian motion after setting \(\beta\to 0\), we can also obtain the variances by taking the limit \(\beta\to 0\) in Eqs. (41), (42) and (43). In either way, we get
\[\sigma_{x}^{2}(t)=\frac{1}{3}k^{2}t^{3},\qquad\sigma_{y}^{2}(t)=k^{2}t,\qquad \sigma_{xy}(t)=\frac{1}{2}k^{2}t^{2}, \tag{54}\]
and (cf. Eqs. (20))
\[\Delta(t)=\frac{1}{2\sqrt{3}}k^{2}t^{2}.\]
In this case the exact expression for the crossing intensity, Eq. (26), reads
\[\mu_{u}(t)=\frac{\sqrt{3}}{2\pi t}e^{-3(u-m_{x}(t))^{2}/2k^{2}t^{3}}\left[e^{ -\eta_{u}^{2}(t)}+\sqrt{\pi}\eta_{u}(t)\mathrm{Erf}(\eta_{u}(t))\right], \tag{55}\]
where (cf. Eq. (23))
\[\eta_{u}(t)=\frac{\sqrt{2}}{kt^{1/2}}\left[y_{0}+\frac{3}{2t}(u-m_{x}(t)) \right]. \tag{56}\]
The mean-crossing intensity -i.e., the crossing of the mean value \(u=m_{x}(t)=x_{0}+y_{0}t\)- is simpler and reads
\[\mu_{m}(t)=\frac{\sqrt{3}}{2\pi t}\left[e^{-2y_{0}^{2}/(k^{2}t)}+\frac{\sqrt{ 2\pi}y_{0}}{kt^{1/2}}\mathrm{Erf}\left(\frac{\sqrt{2}y_{0}}{kt^{1/2}}\right) \right]. \tag{57}\]
Figure 3: Crossing intensity \(\mu_{u}(t)\) for different values of \(u\) obtained from simulation (noisy colored lines), compared with the analytical prediction (smooth black lines). All simulation parameters are the same as in Fig. 2. The values of \(u\) correspond, from top to bottom, to the lines from top to bottom.
When \(y_{0}=0\) we simply have
\[\mu_{m}(t)=\frac{\sqrt{3}}{2\pi t}. \tag{58}\]
Let us see next that the exact expression (58) for the mean-crossing with zero initial velocity is precisely the asymptotic expression as \(t\to\infty\) of the crossing intensity for any level \(u\) and any \(y_{0}\). Indeed, from Eq. (56) we have
\[\eta_{u}(t)=\frac{\sqrt{2}}{kt^{1/2}}\left[y_{0}+O\left(\frac{1}{t}\right) \right]\qquad\Rightarrow\qquad e^{-\eta_{u}^{2}(t)}=1+O\left(\frac{1}{t} \right).\]
Collecting results into Eq. (55), bearing in mind that
\[e^{-3(u-m_{x}(t))^{2}/2k^{2}t^{3}}=1+O\left(\frac{1}{t^{3}}\right),\]
and recalling Eq. (47), we finally get
\[\mu_{u}(t)\simeq\frac{\sqrt{3}}{2\pi t},\qquad(t\to\infty), \tag{59}\]
valid for any level \(u\) and any initial velocity. As in the Brownian motion the crossing intensity also decreases with time, although with a different law (cf. Eq. (48)), while the average number of crossings in a time interval \((t_{0},t)\) increases logarithmically (\(t_{0}\) and \(t\) large),
\[\langle N_{u}(t)\rangle=\int_{t_{0}}^{t}\mu_{u}(t^{\prime})dt^{\prime}\simeq \frac{\sqrt{3}}{2\pi}\ln(t/t_{0}). \tag{60}\]
### Scaling and asymptotic regimes of the mean-crossing intensity
We now analyze in more detail the different short- and long-time limits of the mean-crossing intensity, for both Brownian motion and random acceleration. We can identify two characteristic time scales in Brownian motion, namely
\[\tau_{1}=\left(\frac{y_{0}}{k}\right)^{2}\quad\text{and}\quad\tau_{2}=\beta^{ -1}, \tag{61}\]
and depending on their relative value, we will obtain a different short-time behavior.
#### Random acceleration
In this case \(\beta=0\) and \(\tau_{2}=\infty\), therefore the only relevant time scale is \(\tau_{1}\), which is related to the initial velocity. Hence, we see from Eq. (57) that in this case the following scaling relation holds:
\[\mu_{m}(t)=\frac{1}{\tau_{1}}f(t/\tau_{1})\,, \tag{62}\]
where the function \(f\) is given by
\[f(s)=\frac{\sqrt{3}}{2\pi s}\left[e^{-2/s}+\sqrt{\frac{2\pi}{s}}\operatorname {Erf}\left(\sqrt{2/s}\right)\right]\,. \tag{63}\]
The following asymptotic limits result:
\[f(s)\sim\begin{cases}\sqrt{\frac{3}{2\pi}}s^{-3/2}&s\ll 1\\ \\ \frac{\sqrt{3}}{2\pi}s^{-1}&s\gg 1\,.\end{cases} \tag{64}\]
This scaling behavior is illustrated in Fig. 4 where, in order to better appreciate the different asymptotic limits, we plot \(t\mu_{m}(t)\), obtained from simulations at several values of \(y_{0}\), as a function of \(s=t/\tau_{1}\), together with the function \(sf(s)\) and its asymptotic limits. The simulation data agree perfectly with the analytical results. An enlarged view of the crossover region at \(t/\tau_{1}\) of order one is shown in Fig. 5.
#### Brownian motion
In this case \(\beta\neq 0\) and we will distinguish the cases when the initial velocity \(y_{0}\) is zero or different from zero.
(i) If \(y_{0}=0\) we have \(\tau_{1}=0\) and the only relevant time scale is \(\tau_{2}\). We thus see from Eqs. (32),(41),(42), and (43) that \(\mu_{m}(t)\) satisfies a different scaling relation
\[\mu_{m}(t)=\frac{1}{\tau_{2}}g(t/\tau_{2}) \tag{65}\]
Figure 4: Scaling plot of the mean-crossing intensity for random acceleration (\(\beta=0\)). The colored noisy lines correspond to simulation results for \(k=1\) and \(y_{0}=5,1,0.1\) (corresponding to \(\tau_{1}=25,1,0.01\)), obtained with a time step \(dt=\alpha\sqrt{t}\) with \(\alpha=0.001\) for \(t<1\) and \(\alpha=0.01\) for \(t>1\), and averaged over \(10^{6}\) trajectories. The non-monotonic behavior at small time is an artifacts of the time discretization, which disappears by decreasing the time step \(dt\). The solid (red) curved line corresponds to the analytical result in Eq. (63), and is in perfect agreement with the simulations. The straight solid and dashed (black) lines correspond, respectively, to the short-time and long-time asymptotics in Eq. (64).
Figure 5: Same as Fig. 4, but in linear scale and zooming in on the crossover region between the short- and long-time limits.
where
\[g(s)=\frac{e^{s}}{\pi}\frac{\left[e^{2s}\left(\frac{s}{2}-1\right)+2e^{s}-\frac{s }{2}-1\right]^{\frac{1}{2}}}{e^{2s}\left(s-\frac{3}{2}\right)+2e^{s}-\frac{1}{2}} \tag{66}\]
and the following asymptotic limits hold:
\[g(s)\sim\begin{cases}\frac{\sqrt{3}}{2\pi}s^{-1}&s\ll 1\\ \frac{1}{\pi\sqrt{2}}s^{-1/2}&s\gg 1\,.\end{cases} \tag{67}\]
The scaling behavior is illustrated in Figs. 6 and 7, where we plot \((t/\beta)^{1/2}\mu_{m}(t)\), with \(\mu_{m}(t)\) obtained from simulations at several values of \(\beta\) and with \(y_{0}=0\), as a function of \(s=t/\tau_{2}\), together with the function \(\sqrt{s}g(s)\) and its asymptotic limits. Also in this case the simulations agree perfectly with the analytical results.
(ii) For a non-vanishing initial velocity, \(y_{0}\neq 0\), we have the two time scales \(\tau_{1}\) and \(\tau_{2}\) defined in Eq. (61) and from Eqs. (32),(41),(42), and (43), we see that the crossing intensity can be written as
\[\mu_{m}(t)=\frac{1}{\tau_{2}}h(t/\tau_{2},\tau_{2}/\tau_{1}) \tag{68}\]
where
\[h(s,r)=g(s)\,\frac{2\pi}{\sqrt{3}}r\,q(s)\,f[rq(s)]\,,\quad s=t/\tau_{2},\;r= \tau_{2}/\tau_{1} \tag{69}\]
Here, \(f\) and \(g\) are the functions defined in Eqs. (63) and (66), respectively, and \(q(s)\) is the function
\[q(s)=\frac{2e^{2s}\left(s-2\right)+8\,e^{s}-4-2s}{2e^{-s}-\frac{1}{2}e^{-2s}+s -\frac{3}{2}}\,. \tag{70}\]
Eq. (68) defines a family of scaling relations parametrized by the ratio \(\tau_{2}/\tau_{1}\). In the limits \(\tau_{1}\neq 0,\tau_{2}\to\infty\) and \(\tau_{1}\to 0,0<\tau_{2}<\infty\), Eq. (68) reduces to, respectively, the aforementioned cases of random acceleration and Brownian motion with zero initial velocity (case (i)).
Figure 6: Scaling plot of the mean-crossing intensity for \(\beta\neq 0\) and \(y_{0}=0\). The noisy colored lines represent simulations obtained with \(\beta=0.02,0.1,0.5,1\) (\(\tau_{2}=50,10,2,1\)). See caption of Fig. 4 for details on the simulations. The solid (red) curved line corresponds to the analytical result in Eq. (66), and is in perfect agreement with the simulations. The straight solid and dashed straight (black) lines correspond, respectively, to the short-time and long-time asymptotics in Eq. (67).
Using \(q(s)\sim s\) for \(s\to 0\) and Eq. (64), we obtain
\[\mu_{m}(t)\sim\begin{cases}\sqrt{\frac{3}{2\pi}}\frac{1}{\tau_{2}}\left(\frac{\tau _{2}}{t}\right)^{3/2}\left(\frac{\tau_{1}}{\tau_{2}}\right)^{1/2},\qquad t\ll \tau_{2},\\ \frac{\sqrt{3}}{2\pi t^{1/2}}\qquad t\gg(\tau_{1},\tau_{2})\end{cases}\]
or equivalently
\[h(s,r)\sim\begin{cases}\sqrt{\frac{3}{2\pi}}r^{-1/2}s^{-3/2} \qquad s\to 0,\\ \frac{1}{\pi\sqrt{2}}s^{-1/2}\qquad s\to\infty\,.\end{cases} \tag{71}\]
In particular, for a given ratio \(\tau_{2}/\tau_{1}\) and when \(t\) is small enough we are in the "ballistic" regime \(\mu_{m}(t)\sim t^{-3/2}\). Let us note that the case \(\tau_{2}/\tau_{1}=2\) is especially relevant since it corresponds to choosing an initial velocity equal to the asymptotic value of the mean-squared velocity. That is,
\[y_{0}^{2}=\lim_{t\to\infty}\langle Y^{2}(t)\rangle=\frac{k^{2}}{2 \beta}, \tag{72}\]
where we have used Eq. (42). As we will see below this is a natural choice for the initial velocity for the Brownian motion of a particle 5. Figures 8, 9, and 10 show scaling plots for \(\tau_{2}/\tau_{1}=2,1/9,\) and \(5000\), respectively. Notice that if the two time scales are amply separated (i.e., \(1\ll\tau_{1}\ll\tau_{2}\)) we will have three power-law regimes, namely ballistic, random-acceleration, and diffusive:
Footnote 5: Let us remark that with this choice the mean-squared displacement \(\langle\Delta^{2}X(t)\rangle\), where \(\Delta X(t)=X(t)-X(0)\), scales as \(\langle\Delta X^{2}(t)\rangle\simeq k^{2}t^{2}/2\beta\) in the ballistic regime where \(t\ll\tau_{2}\)[42]. This can be easily checked using Eq. (38) and proceeding in the same way we obtained Eq. (41). In the diffusive regime, \(t\gg\tau_{2}\), we have the expected diffusive behavior \(\langle\Delta X^{2}(t)\rangle=\sigma_{x}^{2}(t)\simeq k^{2}t/\beta^{2}\).
\[\mu_{m}(t)\sim\begin{cases}\frac{\sqrt{3}}{2\pi}\frac{w_{0}}{k \delta^{3/2}}&1\ll t\ll\tau_{1}\\ \frac{\sqrt{3}}{2\pi i}&\tau_{1}\ll t\ll\tau_{2}\\ \frac{1}{\pi}\sqrt{\frac{\beta}{2t}}&t\gg\tau_{2}\end{cases} \tag{73}\]
Figure 7: Same as Fig. 6, but in linear scale and zooming in on the crossover region between the short- and long-time limits.
as can be seen in Fig. 10.
It is interesting to interpret the above results in the case of the Brownian motion of a particle of mass \(m\) under a viscous drag. The movement of the particle in one dimension is described by the equation
\[m\ddot{X}(t)=-\gamma\dot{X}(t)+\xi(t) \tag{74}\]
where \(\gamma\) is the drag coefficient (for example, \(\gamma=6\pi\eta r\) for a spherical particle, where \(\eta\) is the fluid viscosity and \(r\) is the particle radius), and \(\xi\) is zero-mean Gaussian white noise satisfying the fluctuation-dissipation theorem,
\[\langle\xi(t)\xi(t^{\prime})\rangle=2\gamma k_{B}T\delta(t-t^{\prime}), \tag{75}\]
where \(k_{B}\) is the Boltzmann constant and \(T\) is the temperature. By comparison with Eq. (36), we see that \(k=\sqrt{2\gamma k_{B}T}/m\) and \(\beta=\gamma/m\). Thus the duration of the ballistic regime is \(\tau_{2}=m/\gamma\) (a result obtained long ago by
Figure 8: Scaling plot of the mean-crossing intensity for \(\tau_{2}/\tau_{1}=2\). The noisy colored lines correspond to the simulation results for different values of \(\tau_{2}\). The curved (red) solid line corresponds to the analytical result in Eq. (68). The straight solid and dashed (black) lines correspond, respectively, to the short-time and long-time asymptotic regimes in Eq. (71).
Einstein [43]), and
\[\frac{\tau_{2}}{\tau_{1}}=\frac{k_{B}T}{my_{0}^{2}/2} \tag{76}\]
which is the ratio between twice the thermal energy and the initial kinetic energy. In an experiment tracking the motion of an individual particle, it is natural to assume that, when we start observing the particle, its velocity is already thermalized, namely \(y_{0}^{2}=\lim_{t\rightarrow\infty}\langle Y^{2}(t)\rangle=k^{2}/2\beta\). Thus we have \(my_{0}^{2}/2=k_{B}T/2\) and \(\tau_{2}/\tau_{1}=2\). Furthermore, as discussed above, the mean-squared displacement behaves as \(\langle\Delta X^{2}(t)\rangle\simeq(k_{B}T/m)t^{2}\) in the ballistic regime \(t\ll m/\gamma\), and \(\langle\Delta X^{2}(t)\rangle\simeq(2k_{B}T/\gamma)t\) in the diffusive regime \(t\gg m/\gamma\). The crossover between the two regimes, albeit more complex due to hydrodynamic interactions, has been observed experimentally [44].
## V Noisy oscillators
We now apply the results of Sect. III to harmonic oscillators driven by Gaussian white noise. The linearity of such systems ensures the Gaussian character of the oscillator response. We first focus on the damped case, which is stationary, and latter address the undamped oscillator, a non-stationary process presenting some distinctive and interesting features.
### Noisy oscillators with damping
We consider a linear oscillator subject to damping and driven by an external force assumed to be zero-mean Gaussian white noise. The time evolution is given by the second-order linear equation
\[\ddot{X}+\beta\dot{X}+\omega_{0}^{2}X=k\xi(t), \tag{77}\]
where \(\beta>0\) is the damping constant, \(\omega_{0}\) is the natural frequency of the deterministic oscillator without damping, and \(\xi(t)\) is Gaussian white noise with \(\langle\xi(t)\rangle=0\) and \(\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime})\). Again, due to the linearity of Eq. (77), both \(X(t)\) and \(\dot{X}(t)\) are Gaussian processes.
As we can see by direct substitution, the solution to Eq. (77) is
\[X(t)=Ae^{-\beta(t-t_{0})/2}\cos\bigl{[}\omega(t-t_{0})+\delta\bigr{]}+\frac{k} {\omega}\int_{t_{0}}^{t}e^{-\beta(t-t^{\prime})/2}\sin\bigl{[}\omega(t-t^{ \prime})\bigr{]}\xi(t^{\prime})dt^{\prime}. \tag{78}\]
Hence
\[Y(t)=\dot{X}(t)=-\frac{\beta}{2}X(t)-A\omega e^{-\beta(t-t_{0})/2}\sin\bigl{[} \omega(t-t_{0})+\delta\bigr{]}+k\int_{t_{0}}^{t}e^{-\beta(t-t^{\prime})/2}\cos \bigl{[}\omega(t-t^{\prime})\bigr{]}\xi(t^{\prime})dt^{\prime}, \tag{79}\]
where
\[\omega=\sqrt{\omega_{0}^{2}-\beta^{2}/4}. \tag{80}\]
In what follows we will assume that the oscillator works within the underdamped regime, i.e., \(\beta<2\omega_{0}\), so that \(\omega\) is real. The constants \(A\) and \(\delta\) are related to the initial conditions, \(X(t_{0})=x_{0}\) and \(\dot{X}(t_{0})=y_{0}\), by
\[A=\sqrt{x_{0}^{2}+\frac{1}{\omega^{2}}(y_{0}+\beta x_{0}/2)^{2}},\qquad\quad \delta=-\arctan\left[\frac{1}{\omega}\left(\frac{y_{0}}{x_{0}}+\frac{\beta}{ 2}\right)\right]\,. \tag{81}\]
From Eqs. (78)-(79) we see that the average values of position and velocity are
\[m_{x}(t)=Ae^{-\beta(t-t_{0})/2}\cos\bigl{[}\omega(t-t_{0})+\delta\bigr{]}, \qquad m_{y}(t)=-\frac{\beta}{2}m_{x}(t)-A\omega e^{-\beta(t-t_{0})/2}\sin \bigl{[}\omega(t-t_{0})+\delta\bigr{]}. \tag{82}\]
Let us incidentally note that these average values correspond to the response of the deterministic oscillator.
In Appendix C we show that the variances are
\[\sigma_{x}^{2}(t)=\frac{k^{2}}{\omega^{2}(\beta^{2}+4\omega^{2})}\left\{ \frac{2\omega^{2}}{\beta}-e^{-\beta(t-t_{0})}\left[\beta\sin^{2}\omega(t-t_{0 })+\omega\sin 2\omega(t-t_{0})+\frac{2\omega^{2}}{\beta}\right]\right\}, \tag{83}\]
\[\sigma_{y}^{2}(t)=\frac{k^{2}}{\beta^{2}+4\omega^{2}}\left\{\frac{1}{\beta}( \beta^{2}+2\omega^{2})-e^{-\beta(t-t_{0})}\left[\beta\cos^{2}\omega(t-t_{0}) -\omega\sin 2\omega(t-t_{0})+\frac{2\omega^{2}}{\beta}\right]\right\}, \tag{84}\]
and
\[\sigma_{xy}(t)=\frac{k^{2}}{2\omega(\beta^{2}+4\omega^{2})}\left\{2\omega-e^{ -\beta(t-t_{0})}\left[\beta\sin 2\omega(t-t_{0})+2\omega\cos 2\omega(t-t_{0}) \right]\right\}. \tag{85}\]
Knowing mean values and variances the exact expression for the crossing intensity of the oscillator to any level \(u\) is attained from Eq. (26) after using Eqs. (20) and (23). As in Brownian motion the resulting expression is clumsy and we will not write it explicitly. In any case the exact expression is mostly useful when the oscillator is in the transient state which may be useful in some specific applications. However, the behavior of the oscillator at longer times, when it enters into the stationary regime, turns out to be more relevant.
Contrary to the two cases developed in the previous section which are not stationary, the noisy oscillator (77) achieves the stationary regime at long times which exclude transient effects depending on the initial conditions. This is easily seen by taking the limit \(t_{0}\rightarrow-\infty\) in Eqs. (78)-(79), that is 6
Footnote 6: Let us recall that the stationary state is achieved when \(t-t_{0}\rightarrow\infty\). Such a limit may be taken by two different but equivalent ways: (i) either \(t_{0}\) is finite (for instance \(t_{0}=0\)) and \(t\rightarrow\infty\), or (ii) \(t\) is finite but the process started in the infinite past, so that \(t_{0}\rightarrow-\infty\). In writing Eqs. (86) and (87) we have taken the second interpretation.
\[X(t) = \frac{k}{\omega}\int_{-\infty}^{t}e^{-\beta(t-t^{\prime})/2}\sin \bigl{[}\omega(t-t^{\prime})\bigr{]}\xi(t^{\prime})dt^{\prime}, \tag{86}\] \[Y(t) = \frac{k}{\omega}\int_{-\infty}^{t}e^{-\beta(t-t^{\prime})/2}\bigl{[} -(\beta/2)\sin\omega(t-t^{\prime})+\omega\cos\omega(t-t^{\prime})\bigr{]}\xi( t^{\prime})dt^{\prime}. \tag{87}\]
In this regime (cf Eq. (82))
\[m_{x}(t)=m_{y}(t)=0, \tag{88}\]
and taking the limit \(t-t_{0}\rightarrow\infty\) in Eqs. (83)-(85) we get the stationary variances
\[\sigma_{x}^{2}=\frac{2k^{2}}{\beta(\beta^{2}+4\omega^{2})},\qquad\sigma_{y}^{2 }=\frac{k^{2}(\beta^{2}+2\omega^{2})}{\beta(\beta^{2}+4\omega^{2})},\qquad \sigma_{xy}=\frac{k^{2}}{\beta^{2}+4\omega^{2}},\]
which, in terms of the natural frequency \(\omega_{0}\) (cf. Eq. (80)), can be written as
\[\sigma_{x}^{2}=\frac{k^{2}}{2\beta\omega_{0}^{2}},\qquad\sigma_{y}^{2}=\frac{k^{2 }(\beta^{2}/2+2\omega_{0}^{2})}{4\beta\omega_{0}^{2}},\qquad\sigma_{xy}=\frac{k ^{2}}{4\omega_{0}^{2}}, \tag{89}\]
so that (cf. Eq. (20))
\[\Delta=\frac{k^{2}}{2\beta\omega_{0}}. \tag{90}\]
Substituting Eqs. (88), (89) and (90) into Eqs. (23) and (26), after simple manipulations, result in the stationary crossing intensity of the noisy oscillator:
\[\mu_{u}=\frac{\omega_{0}}{\pi}e^{-\beta\omega_{0}^{2}u^{2}/k^{2}}\left[e^{- \beta^{3}u^{2}/4k^{2}}+\sqrt{\pi}\frac{\beta^{3/2}u}{2k}\mathrm{Erf}\left( \frac{\beta^{3/2}u}{2k}\right)\right], \tag{91}\]
Note that in this case crossing the mean value corresponds to setting \(u=0\), which gives
\[\mu_{m}=\frac{\omega_{0}}{\pi}, \tag{92}\]
and we see that this crossing frequency (i.e., intensity) doubles the natural frequency of the deterministic oscillator.
Following the same procedure for the upcrossing and downcrossing intensities given in Eqs. (22) and (25) we easily find
\[\mu_{u}^{(\pm)}=\frac{\omega_{0}}{2\pi}e^{-\beta\omega_{0}^{2}u^{2}/k^{2}} \left[e^{-\beta^{3}u^{2}/4k^{2}}\pm\sqrt{\pi}\frac{\beta^{3/2}u}{2k}\mathrm{ Erfc}\left(\mp\frac{\beta^{3/2}u}{2k}\right)\right], \tag{93}\]
For the mean-crossing problem \(u=m_{x}=0\) and both intensities are equal
\[\mu_{m}^{(+)}=\mu_{m}^{(-)}=\mu_{m}/2,\]
and the frequencies of up and down crossings equal the natural frequency of the deterministic oscillator \(\omega_{0}/2\pi\).
### The undamped oscillator
When no damping is present, the evolution equation of the noisy linear oscillator is simply given by
\[\ddot{X}+\omega_{0}^{2}X=k\xi(t). \tag{94}\]
The formal solution to this equation with the initial conditions \(X(0)=x_{0}\) and \(\dot{X}(0)=y_{0}\) reads (see Eqs. (78) and (79))
\[X(t)=A\cos\left(\omega_{0}t+\delta\right)+\frac{k}{\omega_{0}}\int_{0}^{t}\sin \omega_{0}(t-t^{\prime})\xi(t^{\prime})dt^{\prime}, \tag{95}\]
and
\[Y(t)=-A\omega_{0}\sin(\omega_{0}t+\delta)+k\int_{0}^{t}\cos\omega_{0}(t-t^{ \prime})\xi(t^{\prime})dt^{\prime}, \tag{96}\]
where
\[A=\sqrt{x_{0}^{2}+y_{0}^{2}/\omega_{0}^{2}},\qquad\delta=-\arctan\left(\frac{ y_{0}}{\omega x_{0}}\right), \tag{97}\]
and we have set \(t_{0}=0\) without loss of generality because the process is time homogeneous, although not stationary, but obviously Gaussian.
The average values are
\[m_{x}(t)=A\cos(\omega_{0}t+\delta),\qquad m_{y}(t)=-A\omega_{0}\sin(\omega_{0} t+\delta), \tag{98}\]
and variances are now given by (cf. Appendix C)
\[\sigma_{x}^{2}(t)=\frac{k^{2}t}{2\omega_{0}^{2}}\left(1-\frac{1}{2\omega_{0}t} \sin 2\omega_{0}t\right),\quad\sigma_{y}^{2}(t)=\frac{k^{2}t}{2}\left(1+\frac{1}{ 2\omega_{0}t}\sin 2\omega_{0}t\right),\quad\sigma_{xy}(t)=\frac{k^{2}}{4\omega_{0}^{2} }\left(1-\cos 2\omega_{0}t\right), \tag{99}\]
and (cf. Eq. (20))
\[\Delta(t)=\frac{k^{2}t}{2\omega_{0}}\sqrt{1-\left(\frac{\sin\omega_{0}t}{ \omega_{0}t}\right)^{2}}. \tag{100}\]
Substituting these expressions into Eqs. (23) and (26) we get the exact expression of the crossing intensity \(\mu_{u}(t)\) for the undamped linear oscillator. Let us, however, focus on the behavior for large times -specifically when several periods, \(T_{0}=2\pi/\omega_{0}\), of the deterministic oscillator have elapsed- that is, when \(\omega_{0}t\gg 1\). In such a case one can easily check that
\[\frac{\Delta(t)}{\sigma_{x}^{2}(t)}=\omega_{0}\frac{\sqrt{1-\sin^{2}\omega_{0} t/\omega_{0}^{2}t^{2}}}{1-\sin 2\omega_{0}t/2\omega_{0}t}=\omega_{0}\left[1+O\left( \frac{1}{\omega_{0}t}\right)\right], \tag{101}\]
and
\[\eta_{u}(t)=\frac{1}{kt^{1/2}}\left[m_{y}(t)+\frac{1}{2t}(u-m_{x}(t)(1-\cos 2 \omega_{0}t))\right]\left[1+O\left(\frac{1}{\omega_{0}t}\right)\right]. \tag{102}\]
Let us incidentally note that within the same degree of approximation the function \(\eta_{u}(t)\) is independent of the crossing level \(u\) for sufficiently large values of \(t\). Indeed, from the above expression we see that
\[\eta_{u}(t)=\frac{m_{y}(t)}{kt^{1/2}}\left[1+O\left(\frac{1}{\omega_{0}t} \right)\right], \tag{103}\]
which is valid for all finite values of the crossing level \(u\). Finally, substituting (101) and (103) into (26) and taking into account (cf. Eq. (99)) that
\[\sigma_{x}^{2}(t)=\frac{k^{2}t}{2\omega_{0}^{2}}\left[1+O\left(\frac{1}{\omega _{0}t}\right)\right],\]
we have
\[\mu_{u}(t)=\frac{\omega_{0}}{\pi}e^{-\omega_{0}^{2}[u-m_{x}(t)]^{2}/k^{2}t} \left[e^{-m_{y}^{2}(t)/k^{2}t}+\sqrt{\pi}\frac{m_{y}(t)}{kt^{1/2}}{\rm Erf} \left(\frac{m_{y}(t)}{kt^{1/2}}\right)+O\left(\frac{1}{\omega_{0}t}\right) \right].\]
Recalling the asymptotic expression (47)
\[{\rm Erf}(z)=\frac{2z}{\sqrt{\pi}}e^{-z^{2}}\big{[}1+O\big{(}z^{2}\big{)} \big{]},\]
we obtain for sufficiently long times7
Footnote 7: Specifically for \(t\gg\omega_{0}^{-1}\) and \(t\gg m_{y}^{2}(t)/k^{2}\). Note that by virtue of Eqs. (97) and (98) \(m_{y}^{2}(t)/k^{2}=O(y_{0}^{2}/k^{2})\).
\[\mu_{u}(t)\simeq\frac{\omega_{0}}{\pi}\exp\left\{-[\omega_{0}^{2}(u-m_{x}(t)) ^{2}+m_{y}^{2}(t)]/k^{2}t\right\}. \tag{104}\]
Let us finally point out that, although the undamped noisy oscillator is not stationary, its crossing intensity tends as \(t\to\infty\) to a finite value independent of any finite crossing level \(u\),
\[\lim_{t\to\infty}\mu_{u}(t)=\frac{\omega_{0}}{\pi}, \tag{105}\]
a crossing frequency which doubles the natural frequency of the deterministic oscillator.
### Simulation results
We have simulated Eq. (77) for \(\beta\neq 0\) and \(\beta=0\) using the algorithm described in Appendix B. Examples of random trajectories for different values of \(\beta\) are shown in Fig. 11, together with the average \(m_{x}(t)=\langle X(t)\rangle\).
The mean crossing intensities \(\mu_{m}^{(\pm)}(t)\) and \(\mu_{m}(t)\) as a function of \(t\) are shown in Figs. 12 (\(\beta=0\)) and 13 (\(\beta\neq 0\)). Note that at large times \(\mu_{m}^{(\pm)}(t)\) tend to \(\omega_{0}/(2\pi)\) and \(\mu_{m}(t)\) tends to \(\omega_{0}/\pi\). In these figures, as well as in the subsequent ones, the smooth black lines show the analytical results which, as they should, are in all c
Figure 11: Example of random trajectories \(X(t)\) for the noisy oscillators with \(k=1,\omega_{0}=1,x_{0}=0,y_{0}=1\), shown by the thin (black) lines. From top to bottom, \(\beta=0.5\), \(\beta=0.1\) and \(\beta=0\). Data for \(\beta=0.5\) and \(\beta=0.1\) have been shifted upwards by 20 and 10, respectively, for better viewing. Simulations are performed with a fixed time step \(dt=0.01\). The thick (green) lines represent the average value \(\langle X(t)\rangle\) given in Eq. (82).
Figure 12: The mean-crossing intensity for the damped oscillator with \(\beta=0.1,k=1,\omega_{0}=1,x_{0}=0,y_{0}=1,dt=0.01\). The noisy colored lines show the simulations results for upcrossing, downcrossing, and total crossing intensities, obtained by averaging over \(10^{6}\) trajectories. The smooth (black) curves are the analytical results, obtained substituting Eqs. (82), (83), (84), (85) into Eqs. (22), (25), (26) for upcrossing, downcrossing, and total intensities, respectively. The horizontal (black) lines show the asymptotic limit for large times.
with the simulation results.
In comparison with the Brownian motion and random acceleration cases, the noisy oscillator presents an additional time scale \(\omega^{-1}\) (or \(\omega_{0}^{-1}\) in the undamped case). When this is much larger than the scales \(\beta^{-1}\) and \((y_{0}/k)^{2}\), the short-time behavior of \(\mu_{m}(t)\) is the same as that of the Brownian motion. In particular, if \(y_{0}=0\), then \(\mu_{m}(t)\sim\sqrt{3}/(2\pi t)\) as \((\beta t,\omega t)\ll 1\), and if \(y_{0}\neq 0\) then \(\mu_{m}\sim(y_{0}\sqrt{3}/2\pi k)t^{-3/2}\) as \(t\to 0\). These limits are well verified in the numerical simulations, and scaling plots similar to those for the Brownian motion and random acceleration are obtained (although we do not show them here).
Figure 14: Total crossing intensity \(\mu_{u}(t)\) for the damped oscillator with \(\beta=0.1,k=1,\omega_{0}=1,x_{0}=0,y_{0}=0,dt=0.01\), for different levels \(u\). The noisy colored lines show the simulations results averaged over \(10^{6}\) trajectories (labels for different \(u\) are, from top to bottom, in the same order as the lines). The smooth (black) curves are the analytical results, obtained substituting Eqs. (82), (83), (84), (85) into Eq. (26).
Figure 13: The mean-crossing intensity for the undamped oscillator (\(\beta=0\)) with \(k=1,\omega_{0}=1,x_{0}=0,y_{0}=1,dt=0.01\). The noisy colored lines show the simulations results for upcrossing, downcrossing, and total crossing intensities, obtained by averaging over \(10^{6}\) trajectories. The smooth (black) curves are the analytical results, obtained substituting Eqs. (98), (99), (100) into Eqs. (22), (25),(26) for upcrossing, downcrossing, and total intensities, respectively. The horizontal (black) lines show the asymptotic limit for large times.
For applications, it is more interesting to examine the behavior of the crossing intensity at a fixed level \(u\). This is shown in Figs. 14 and 15 for \(\beta\neq 0\) and \(\beta=0\), respectively. In both cases we choose zero initial velocity \(y_{0}=0\), so that the symmetry \(\mu_{u}(t)=\mu_{-u}(t)\) holds. Note that in the undamped case \(\mu_{u}(t)\) becomes independent of \(u\) at large enough times, as predicted analytically. A detailed view of the behavior at short times for \(\beta=0\) is shown in Fig. 16 in linear scale.
Analogous plots for \(y_{0}\neq 0\) show a qualitatively similar behavior, except that the symmetry in \(u\) is lost at short times.
Figure 16: Same as Fig. 15 but in linear scale and short times.
Figure 15: Total crossing intensity \(\mu_{u}(t)\) for the undamped oscillator (\(\beta=0\)) with \(k=1,\omega_{0}=1,x_{0}=0,y_{0}=0,dt=0.01\), for different levels \(u\). The colored lines show the simulations results averaged over \(10^{6}\) trajectories (labels for different \(u\) are, from top to bottom, in the same order as the lines). The smooth (black) curves are the analytical results, obtained substituting Eqs. (98), (99), (100) into Eq. (26).
Concluding remarks
We have analyzed the counting of crossing events to some preassigned level carried out by inertial random processes. The models studied are described by linear stochastic differential equations of second order driven by Gaussian white noise. The linearity of the equations of motion along with the Gaussian character of the input noise ensure that output processes are Gaussian as well.
We have firstly reviewed Rice formula for the crossing intensity and generalized it to embrace the most comprehensive kind of Gaussian process. The crossing intensity is an important quantity in many applications. In particular, as we discussed in Section II, its inverse is the return period, which in turn provides an upper bound on the distribution of the maximum of a stochastic process over a given time interval. One key result is the exact expression (26) of the crossing intensity for Gaussian processes in their most general form and the simpler version (32) for the zero crossing, that is, the crossing of the mean value:
\[\mu_{m}(t)=\frac{\sigma_{y}(t)}{\pi\sigma_{x}(t)}\sqrt{1-\left[\sigma_{xy}(t) /\sigma_{x}(t)\sigma_{y}(t)\right]^{2}}.\]
We have next specialized on some particular cases of physical interest whose dynamical evolution is described by linear stochastic equations of second oder. In all cases studied we have been able to obtain the exact form for the intensity of up, down and total crossings.
The simplest example is provided by the random acceleration process, a non-stationary process for which the crossing intensity is time dependent. At long times the crossing intensity to any level \(u\) decreases with time as
\[\mu_{u}(t)\sim t^{-1},\hskip 28.452756pt(t\to\infty),\]
and thus the average number of crossings increases as
\[\langle N_{u}(t)\rangle\sim\ln t,\hskip 28.452756pt(t\to\infty).\]
At short times we find
\[\mu_{u}(t)\sim t^{-3/2},\hskip 28.452756pt\langle N_{u}(t)\rangle\sim t^{-1/2}, \hskip 28.452756pt(t\to 0).\]
The next example is Brownian motion, which is also not stationary. In this case, at long times (i.e. in the diffusive regime) we obtain a slower decay than that of random acceleration:
\[\mu_{u}(t)\sim t^{-1/2},\hskip 28.452756pt\langle N_{u}(t)\rangle\sim t^{1/2}, \hskip 28.452756pt(t\to\infty)\,.\]
For short times (i.e. in the ballistic regime), if the initial velocity is zero (\(y_{0}=0\)) we have
\[\mu_{u}(t)\sim t^{-1},\hskip 28.452756pt\langle N_{u}(t)\rangle\sim\ln t, \hskip 28.452756pt(t\to 0)\,,\]
which is the same scaling as in random acceleration at long times.
The most general case is Brownian motion with non-zero initial velocity (\(y_{0}\neq 0\)). This has a more complex time structure since there are now two characteristic time scales. When these scales are well separated we observe three regimes: \(\mu_{u}(t)\sim t^{-3/2}\) at short times (random acceleration regime), \(t^{-1}\) at intermediate times (ballistic regime), and \(t^{-1/2}\) at long times (diffusive regime).
The third process studied has been the damped linear oscillator driven by Gaussian white noise. Due to damping, the oscillator reaches a stationary state as time increases, which implies a time-independent crossing intensity that for the mean-crossing problem has the simple expression:
\[\mu_{m}=\frac{\omega_{0}}{\pi},\]
which doubles the natural frequency of the deterministic oscillator. Let us note that in the stationary state, when transient effects have faded away, the average number of mean crossings during a time interval \(\Delta t\) follows the linear law:
\[\langle N_{m}(\Delta t)\rangle=\frac{\omega_{0}}{\pi}\Delta t.\]
The last example addressed has been the undamped oscillator. This case is not stationary and the crossing intensity depends on time but tends to a finite and non-zero value as \(t\to\infty\) that is independent of the crossing level:
\[\lim_{t\to\infty}\mu_{u}(t)=\frac{\omega_{0}}{\pi},\]
which again doubles the frequency of the deterministic oscillator.
Let us finally remark that Rice's approach can be extended to include random processes (whether inertial or not) driven by colored noise as well as to study the counting of maxima and minima. These works are under present investigation and some results will be presented soon.
## Appendix A Variances of the Brownian motion
From Eqs. (37)-(40) we have
\[\sigma_{x}^{2}(t) = \left<\left[X(t)-m_{x}(t)\right]^{2}\right>\] \[= \frac{k^{2}}{\beta^{2}}\int_{0}^{t}dt_{1}\int_{0}^{t}\left[1-e^{- \beta(t-t_{1})}\right]\left[1-e^{-\beta(t-t_{2})}\right]\delta(t_{1}-t_{2})dt_ {2}\] \[= \frac{k^{2}}{\beta^{2}}\int_{0}^{t}\left[1-e^{-\beta(t-t_{1})} \right]^{2}dt_{1},\]
and hence
\[\sigma_{x}^{2}(t)=\frac{k^{2}}{\beta^{3}}\left(\beta t-\frac{3}{2}+2e^{-\beta t }-\frac{1}{2}e^{-2\beta t}\right),\]
which agrees with Eq. (41). Proceeding in an analogous way, we have
\[\sigma_{y}^{2}(t) = \left<\left[Y(t)-m_{y}(t)\right]^{2}\right>\] \[= k^{2}\int_{0}^{t}dt_{1}\int_{0}^{t}e^{-\beta(t-t_{1})}e^{-\beta( t-t_{2})}\delta(t_{1}-t_{2})dt_{2}\] \[= k^{2}\int_{0}^{t}e^{-2\beta(t-t_{1})}dt_{1},\]
and we obtain Eq. (42):
\[\sigma_{y}^{2}(t)=\frac{k^{2}}{2\beta}\left(1-e^{-2\beta t}\right).\]
Finally,
\[\sigma_{xy}(t) = \left<\left[X(t)-m_{x}(t)\right]\left[Y(t)-m_{y}(t)\right]\right>\] \[= \frac{k^{2}}{\beta}\int_{0}^{t}dt_{1}\int_{0}^{t}e^{-\beta(t-t_{1 })}\left[1-e^{-\beta(t-t_{2})}\right]\delta(t_{1}-t_{2})dt_{2}\] \[= \frac{k^{2}}{\beta}\int_{0}^{t}e^{-\beta(t-t_{1})}\left[1-e^{- \beta(t-t_{1})}\right]dt_{1},\]
that is,
\[\sigma_{xy}(t)=\frac{k^{2}}{\beta^{2}}\left(\frac{1}{2}-e^{-\beta t}+\frac{1} {2}e^{-2\beta t}\right).\]
which is Eq. (43).
## Appendix B Simulation method
We will use the algorithm presented in Ref.[37] to simulate the Langevin equation
\[\ddot{X}+\beta\dot{X}=F(X,t)+k\xi(t),\]
where \(\xi(t)\) is Gaussian white noise satisfying \(\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime})\) and \(F(X(t),t)\) is a deterministic force.
Discretizing time as \(t_{n+1}=t_{n}+dt\), a random trajectory \((t_{n},X_{n})\), starting from the initial condition \(X_{0}=x_{0}\), \(Y_{0}=y_{0}\), where \(Y=\dot{X}\), is generated by iterating the following recursive equations (in our notation):
\[X_{n+1} = X_{n}+bdtY_{n}+\frac{b}{2}(dt)^{2}F_{n}+\frac{b}{2}k(dt)^{3/2}g_{n+1} \tag{12}\] \[Y_{n+1} = Y_{n}+\frac{b}{2}dt(F_{n}+F_{n+1})-\beta(X_{n+1}-X_{n})+k(dt)^{1/ 2}g_{n+1} \tag{13}\]
where \(F_{n}=F(X_{n},t_{n})\), \(b=(1+\beta/2)^{-1}\), and the \(g_{n}\) are i.i.d. Gaussian random variables with \(\langle g_{n}\rangle=0,\langle g_{n}^{2}\rangle=1\).
Averaging over \(R\) trajectories, we measure the up- and down-crossing intensities \(\mu_{u}^{+}(t_{n}),\mu_{u}^{-}(t_{n})\) at each \(t_{n}\), where for example \(\mu_{u}^{+}\) is total number of upcrossings taking place in \((t_{n},t_{n+1})\) (we say an upcrossing has taken place if \(X_{n}<u\) and \(X_{n+1}>u\)), divided by \(R\,t_{n}\). The total crossing intensity is \(\mu_{u}(t_{n})=\mu_{u}^{+}(t_{n})+\mu_{u}^{-}(t_{n})\).
In our numerical results, \(X\) and \(t\) are in arbitrary units. It is helpful to think of \(X\) as a length expressed in meters, and \(t\) expressed in seconds. Then, the units of the parameters are as follows: \([\beta]\)\(\Rightarrow\)s\({}^{-1}\), \([k]\)\(=\)m s\({}^{-3/2}\), \([y_{0}]\)\(=\)m s\({}^{-1}\), \([\omega_{0},\omega]\)\(=\)s\({}^{-1}\). For Brownian motion and random acceleration, we typically use a time step \(dt=\alpha\sqrt{t}\) with \(\alpha=10^{-3}\) for \(t<1\) and \(\alpha=10^{-2}\) for \(t>1\), except for large \(\beta\) for which we choose \(\alpha\) to be ten times smaller.
In all cases, we set \(R=10^{6}\).
## Appendix C Variances of the noisy oscillator
From Eqs. (78), (79) and (82), we have
\[\sigma_{x}^{2}(t) = \left\langle\left[X(t)-m_{x}(t)\right]^{2}\right\rangle\] \[= \frac{k^{2}}{\omega^{2}}e^{-\beta t}\int_{t_{0}}^{t}dt_{1}\int_{t _{0}}^{t}e^{\beta(t_{1}+t_{2})/2}\sin\omega(t-t_{1})\sin\omega(t-t_{2})\delta (t_{1}-t_{2})dt_{2}\] \[= \frac{k^{2}}{\omega^{2}}e^{-\beta t}\int_{t_{0}}^{t}e^{\beta t_{ 1}}\sin^{2}\omega(t-t_{1})dt_{1}=\frac{k^{2}}{\omega^{2}}\int_{0}^{t-t_{0}}e^ {-\beta t^{\prime}}\sin^{2}\omega t^{\prime}dt^{\prime},\]
hence
\[\sigma_{x}^{2}(t)=\frac{k^{2}}{\omega^{2}(\beta^{2}+4\omega^{2})}\left\{\frac {2\omega^{2}}{\beta}-e^{-\beta(t-t_{0})}\left[\beta\sin^{2}\omega(t-t_{0})+ \omega\sin 2\omega(t-t_{0})+\frac{2\omega^{2}}{\beta}\right]\right\},\]
which is Eq. (83). Proceeding in an analogous way, we have
\[\sigma_{y}^{2}(t) = \left\langle\left[Y(t)-m_{y}(t)\right]^{2}\right\rangle\] \[= k^{2}e^{-\beta t}\int_{t_{0}}^{t}dt_{1}\int_{t_{0}}^{t}e^{\beta( t_{1}+t_{2})/2}\cos\omega(t-t_{1})\cos\omega(t-t_{2})\delta(t_{1}-t_{2})dt_{2}\] \[= k^{2}\int_{0}^{t-t_{0}}e^{-\beta t^{\prime}}\cos^{2}\omega t^{ \prime}dt^{\prime},\]
and
\[\sigma_{y}^{2}(t)=\frac{k^{2}}{\beta^{2}+4\omega^{2}}\left\{\frac{1}{\beta}( \beta^{2}+2\omega^{2})-e^{-\beta(t-t_{0})}\left[\beta\cos^{2}\omega(t-t_{0})- \omega\sin 2\omega(t-t_{0})+\frac{2\omega^{2}}{\beta}\right]\right\},\]
which agrees with Eq. (84). Finally,
\[\sigma_{xy}(t) = \left\langle[X(t)-m_{x}(t)][Y(t)-m_{y}(t)]\right\rangle\] \[= \frac{k^{2}}{\omega}e^{-\beta t}\int_{t_{0}}^{t}dt_{1}\int_{t_{0} }^{t}e^{\beta(t_{1}+t_{2})/2}\sin\omega(t-t_{1})\cos\omega(t-t_{2})\delta(t_{1 }-t_{2})dt_{2}\] \[= \frac{k^{2}}{2\omega}e^{-\beta t}\int_{t_{0}}^{t}e^{\beta t_{1}} \sin 2\omega(t-t_{1})dt_{1}=\frac{k^{2}}{2\omega}\int_{0}^{t-t_{0}}e^{-\beta t ^{\prime}}\sin 2\omega t^{\prime}dt^{\prime},\]
that is,
\[\sigma_{xy}(t)=\frac{k^{2}}{2\omega(\beta^{2}+4\omega^{2})}\left\{2\omega-e^{- \beta(t-t_{0})}\left[\beta\sin 2\omega(t-t_{0})+2\omega\cos 2\omega(t-t_{0}) \right]\right\},\]
which is Eq. (85).
For the undamped oscillator \(\beta=0\) and from the above equations we have (recall we have set \(t_{0}=0\))
\[\sigma_{x}^{2}(t)=\frac{k^{2}}{\omega_{0}^{2}}\int_{0}^{t}\sin^{2}\omega_{0}t^{ \prime}dt^{\prime}=\frac{k^{2}t}{2\omega_{0}^{2}}\left(1-\frac{1}{2\omega_{0}t }\sin 2\omega_{0}t\right),\]
\[\sigma_{y}^{2}(t)=k^{2}\int_{0}^{t}\cos^{2}\omega_{0}t^{\prime}dt^{\prime}=\frac{k ^{2}t}{2}\left(1+\frac{1}{2\omega_{0}t}\sin 2\omega_{0}t\right),\]
\[\sigma_{xy}(t)=\frac{k^{2}}{2\omega_{0}}\int_{0}^{t}\sin 2\omega_{0}t^{\prime} dt^{\prime}=\frac{k^{2}}{4\omega_{0}^{2}}\left(1-\cos 2\omega_{0}t\right),\]
which agree with Eq. (99).
## Acknowledgments
This work has been partially funded by MINECO (Spain), Agencia Estatal de Investigacion (AEI) grant numbers PID2019-106811GB-C33 (AEI/10.13039/501100011033) (J.M.), PGC2018-094754-B-C22 (M.P.), and by Generalitat de Catalunya grant numbers 2017SGR608 (J.M.), 2017SGR1614 (M.P.). M.P. thanks Marco Palassini Vidal for inspiration.
|
2310.00538 | On Cayley algorithm for double partition | A double partition problem asks for a number of nonnegative integer solutions
to a system of two linear Diophantine equations with integer coefficients.
Artur Cayley suggested a reduction of a double partition to a sum of scalar
partitions with an algorithm subject to a set of conditions. We show that when
these conditions are not satisfied and the original algorithm fails its
modification solves the reduction problem. | Boris Rubinstein | 2023-10-01T01:36:19Z | http://arxiv.org/abs/2310.00538v1 | # On Cayley algorithm for double partition
###### Abstract
A double partition problem asks for a number of nonnegative integer solutions to a system of two linear Diophantine equations with integer coefficients. Artur Cayley suggested [2] a reduction of a double partition to a sum of scalar partitions with an algorithm subject to a set of conditions. We show that when these conditions are not satisfied and the original algorithm fails its modification solves the reduction problem.
**Keywords**: double partition.
**2010 Mathematics Subject Classification**: 11P82.
## 1 Scalar and vector restricted partition functions
### Scalar partitions
The problem of integer partition into a set of integers is equivalent to a problem of number of nonnegative integer solutions of the Diophantine equation
\[\sum_{i=1}^{m}x_{i}d_{i}={\bf x}\cdot{\bf d}=s. \tag{1}\]
A scalar partition function \(W(s,{\bf d})\equiv W(s,\{d_{1},d_{2},\ldots,d_{m}\})\) solving the above problem is a number of partitions of an integer \(s\) into positive integers \(\{d_{1},d_{2},\ldots,d_{m}\}\). The generating function for \(W(s,{\bf d})\) has a form
\[G(t,{\bf d})=\prod_{i=1}^{m}\frac{1}{1-t^{d_{i}}}=\sum_{s=0}^{\infty}W(s,{\bf d })\;t^{s}\;, \tag{2}\]
Introducing notation \(C[t^{s}](f(t))\) for a coefficient of \(t^{s}\) in the expansion of a function \(f(t)\) we have
\[W(s,{\bf d})=C[t^{s}]\left(\prod_{i=1}^{m}(1-t^{d_{i}})^{-1}\right). \tag{3}\]
Sylvester proved [9] a statement about splitting of the scalar partition function (SPF) into periodic and non-periodic parts and showed that it may be presented as a sum of "waves"
\[W(s,{\bf d})=\sum_{j=1}W_{j}(s,{\bf d})\;, \tag{4}\]
where summation runs over all distinct factors of the elements of the generator vector \(\mathbf{d}\). The wave \(W_{j}(s,\mathbf{d})\) is a quasipolynomial in \(s\) closely related to prime roots \(\rho_{j}\) of unity, namely, is a coefficient of \(t^{-1}\) in the series expansion in ascending powers of \(t\) of a function
\[F_{j}(s,t)=\sum_{\rho_{j}}\frac{\rho_{j}^{-s}e^{st}}{\prod_{k=1}^{m}\left(1- \rho_{j}^{d_{k}}e^{-d_{k}t}\right)}\,. \tag{5}\]
The summation is made over all prime roots of unity \(\rho_{j}=\exp(2\pi in/j)\) for \(n\) relatively prime to \(j\) (including unity) and smaller than \(j\). It was shown [4] that it is possible to express the wave \(W_{j}(s,\mathbf{d})\) as a finite sum of the Bernoulli polynomials of higher order.
### Vector partitions
Consider a function \(W(\mathbf{s},\mathbf{D})\) counting the number of integer nonnegative solutions \(\mathbf{x}\geq 0\) to a linear system \(\mathbf{D}\cdot\mathbf{x}=\mathbf{s}\), where \(\mathbf{D}\) is a nonnegative integer \(l\times m\) generator matrix. The function \(W(\mathbf{s},\mathbf{D})\) is called _vector partition function_ (VPF) as a natural generalization of SPF to the vector argument.
The generating function for the VPF reads
\[G(\mathbf{t},\mathbf{D})=\prod_{i=1}^{m}\frac{1}{1-\mathbf{t}^{\mathbf{c}_{i} }}=\sum_{\mathbf{s}}W(\mathbf{s},\mathbf{D})\mathbf{t}^{\mathbf{s}},\quad \mathbf{t}^{\mathbf{s}}=\prod_{k=1}^{l}t_{k}^{s_{k}},\quad\mathbf{t}^{ \mathbf{c}_{i}}=\prod_{k=1}^{l}t_{k}^{c_{ik}}, \tag{6}\]
where \(\mathbf{c}_{i}=\{c_{i1},c_{i2},\ldots,c_{il}\}\), \((1\leq i\leq m)\) denotes the \(i\)-th column of the matrix \(\mathbf{D}=\{\mathbf{c}_{1},\mathbf{c}_{2},\ldots,\mathbf{c}_{m}\}\). Note that some elements \(c_{ik}\) might equal zero. Generalizing the coefficient notation (3) to the case of function of several variables find for \(W(\mathbf{s},\mathbf{D})\)
\[W(\mathbf{s},\mathbf{D})=C[\mathbf{t}^{\mathbf{s}}]\left(\prod_{i=1}^{m}\frac {1}{1-\mathbf{t}^{\mathbf{c}_{i}}}\right). \tag{7}\]
Several approaches were suggested for VPF computation including method of residues [1, 10] and geometric decomposition into so called _chambers_[7] - regions in \(m\)-dimensional space where the partition function is characterized by a specific expression. A method of VPF computation was suggested [5] being a direct generalization of the approach developed in [4]. To this end _vector_ Bernoulli and Eulerian polynomials of higher order were introduced to find explicit expression for \(W(\mathbf{s},\mathbf{D})\). A drawback of this approach is that it does not provide any mechanism to define chamber boundaries. As SPF computation requires just well-known functions [4] it is promising to obtain a reduction method expressing vector partition through scalar ones.
### Sylvester-Cayley method of vector partition reduction
The problem of scalar and vector integer partitions has a long history and J.J. Sylvester made a significant contribution to its solution. In addition to the splitting algorithm for SPF [9] he suggested [8] an idea to reduce VPF into a sum of scalar partitions. The reduction is an iterative process based on the variable elimination in the generating function (6). Sylvester considered a specific double partition problem as an illustration of his method and determined regions of a plane \(\{s_{1},s_{2}\}\) each having a unique expression for VPF valid in this region only. He showed that the expressions in the adjacent chambers coincide at their common boundary (see also [7]).
This approach was successfully applied by A. Cayley [2] to double partitions subject to some restrictions on the elements of matrix \(\mathbf{D}\) - the vectors \(\mathbf{c}_{i}\) are noncollinear, the elements of every
column \(\mathbf{c}_{i}\) are relatively prime and for all elements \(c_{i2}\) of the second row the inequality \(c_{i2}<s_{2}+2,1\leq i\leq m\) holds. It should be noted that Cayley mentions that the elements of the matrix \(\mathbf{D}\) as well as \(s_{i}\) "being all positive integer numbers, not excluding zero" [2]. Direct computation shows that when the number of columns containing zero is larger than two, with zeros appearing in both rows of the matrix \(\mathbf{D}\), and also nonzero elements in such columns are larger than unity, the Cayley method fails. It also fails when the column elements have the greatest common divisor (GCD) larger than unity. These deficiencies call for a search of alternative approach of double partition reduction to scalar partitions.
### Partial fraction expansion algorithm
Computation of vector partition \(W(\mathbf{s},\mathbf{D})\) in (7) can be performed by iterative elimination [1] of \((l-1)\) variables \(t_{k},(2\leq k\leq l)\). Each elimination step includes partial fraction expansion (PFE) w.r.t. the eliminated variable with subsequent coefficient evaluation. This step is equivalent to elimination of \(k\)-th row of augmented \(l\times(m+1)\) matrix \(\mathbf{E}=\{\mathbf{s}=\mathbf{c}_{0},\mathbf{c}_{1},\mathbf{c}_{2},\ldots, \mathbf{c}_{m}\}\) made of the generator matrix \(\mathbf{D}\) and argument vector \(\mathbf{s}\); the same time one of the columns of \(\mathbf{D}\) is eliminated too. The number of newly generated \((l-1)\times m\) matrices is equal to \(m\). This algorithm was employed in [2] for a two-row positive matrix (see also [8]).
## 2 Cayley algorithm of double partition reduction
Consider the simplest vector partition case \(l=2\) following the algorithm described in [2]. Denote matrix \(\mathbf{E}\) columns as \(\mathbf{s}=\mathbf{c}_{0}=\{r,\rho\}^{T}\) and \(\mathbf{c}_{i}=\{b_{i},\beta_{i}\}^{T},\ (1\leq i\leq m)\), where \(T\) stands for transposition. Cayley specified following conditions that should be met in order to apply the algorithm [2]. First, all fractions \(b_{i}/\beta_{i}\) should be unequal, in other words, the columns \(\mathbf{c}_{i}\) must be pairwise linearly independent. It is also required that the elements of each column \(\mathbf{c}_{i}\) are relatively prime \(\gcd(b_{i},\beta_{i})=1\). Finally, all elements of the second row should satisfy a condition \(\beta_{i}<\rho+2\).
Assuming that all \(b_{i},\beta_{i}>0\) perform PFE step to present \(G(\mathbf{t},\mathbf{D})\) as sum of \(m\) fractions (\(\mathbf{t}=\{x,y\}\))
\[G(\mathbf{t},\mathbf{D})=\prod_{i=1}^{m}(1-x^{b_{i}}y^{\beta_{i}})^{-1}=\sum_ {i=1}^{m}T_{i}(x,y),\quad T_{i}=\frac{A_{i}(x,y)}{1-x^{b_{i}}y^{\beta_{i}}}, \tag{8}\]
where the functions \(A_{i}\) are rational in \(x\) and rational and integral (of degree \(\beta_{i}-1\)) in \(y\).
### Partial fraction expansion
Consider \(A_{1}(x,y)\) - we have for \(y=y_{0}=x^{-b_{1}/\beta_{1}}\),
\[A_{1}(x,y)=\prod_{i\neq 1}^{m}(1-x^{b_{i}}y^{\beta_{i}})^{-1}, \tag{9}\]
that is
\[A_{1}(x,x^{-b_{1}/\beta_{1}})=\prod_{i\neq 1}^{m}(1-x^{b_{i}-b_{1}\beta_{i}/ \beta_{1}})^{-1}. \tag{10}\]
Introduce a set of complex quantities \(\omega_{1j}=\omega_{1}^{j},\ 1\leq j\leq\beta_{1}-1\), where \(\omega_{1}=\exp(2\pi i/\beta_{1})\). Multiply both the numerator and denominator of the fraction in r.h.s. in (10) by
\[S_{1}(x,y_{0})=\prod_{i\neq 1}^{m}\prod_{j=1}^{\beta_{1}-1}(1-\omega_{1j}x^{b_{ i}}y_{0}^{\beta_{i}})=\prod_{i\neq 1}^{m}\prod_{j=1}^{\beta_{1}-1}(1-\omega_{1j}x^{b _{i}-b_{1}\beta_{i}/\beta_{1}}).\]
The denominator turns into
\[\Pi_{1}(x)=\prod_{i\neq 1}^{m}(1-x^{\beta_{1}b_{i}-b_{1}\beta_{i}}), \tag{11}\]
while the numerator reads
\[S_{1}(x,y_{0})=\Pi_{1}(x)A_{1}(x,y_{0})=\sum_{k=0}^{\beta_{1}-1}A_{1k}(x)x^{-kb _{1}/\beta_{1}}=\sum_{k=0}^{\beta_{1}-1}A_{1k}(x)y_{0}^{k}, \tag{12}\]
where \(A_{1k}\) are rational functions in \(x\). This leads to
\[A_{1}(x,y_{0})=\frac{1}{\Pi_{1}(x)}\sum_{k=0}^{\beta_{1}-1}A_{1k}(x)y_{0}^{k},\]
and we obtain for \(T_{1}(x,y)\)
\[T_{1}(x,y)=\frac{A_{1}(x,y)}{1-x^{b_{1}}y^{\beta_{1}}}=\frac{1}{\Pi_{1}(x)(1-x ^{b_{1}}y^{\beta_{1}})}\sum_{k=0}^{\beta_{1}-1}A_{1k}(x)y^{k}\;. \tag{13}\]
### Contribution evaluation
Our goal is to find a contribution \(C_{r,\rho}^{1}=C[x^{r}y^{\rho}](T_{1}(x,y)).\) Cayley employs a relation [2]
\[C[x^{r}y^{\rho}]\left(\frac{A_{1}(x,y)}{1-x^{b_{1}}y^{\beta_{1}}}\right)=C[x^ {r}y^{\rho}]\left(\frac{A_{1}(x,y)}{1-x^{b_{1}/\beta_{1}}y}\right), \tag{14}\]
that allows to write
\[\frac{A_{1}(x,y)}{1-x^{b_{1}/\beta_{1}}y}=U_{\beta_{1}-2}+\frac{A_{1}(x,x^{-b_ {1}/\beta_{1}})}{1-x^{b_{1}/\beta_{1}}y}, \tag{15}\]
where \(U_{\beta_{1}-2}\) is "a rational and integral function of the degree \(\beta_{1}-2\) in \(y\)" [2]. Introduce \(\delta y=y-y_{0}\), and perform a sequence of transformations
\[S_{1}(x,y) = \sum_{k=0}^{\beta_{1}-1}A_{1k}(x)y^{k}=\sum_{k=0}^{\beta_{1}-1}A_ {1k}(x)(y_{0}+\delta y)^{k} \tag{16}\] \[= \sum_{k=0}^{\beta_{1}-1}\sum_{m=0}^{k}A_{1k}(x)C(k,m)y_{0}^{m}( \delta y)^{k-m}\] \[= \sum_{k=0}^{\beta_{1}-1}A_{1k}(x)y_{0}^{k}+\sum_{k=1}^{\beta_{1} -1}\sum_{m=0}^{k-1}A_{1k}(x)C(k,m)y_{0}^{m}(\delta y)^{k-m}\] \[= S_{1}(x,y_{0})+\sum_{k=1}^{\beta_{1}-1}\sum_{m=0}^{k-1}A_{1k}(x) C(k,m)y_{0}^{m}(y-x^{-b_{1}/\beta_{1}})^{k-m}=\] \[= S_{1}(x,x^{-b_{1}/\beta_{1}})+\hat{U},\]
where \(C(k,m)\) denotes binomial coefficient and \(\hat{U}\) is an integral function of the degree \(\beta_{1}-1\) in \(y\)
\[\hat{U} = \sum_{k=1}^{\beta_{1}-1}\sum_{m=0}^{k-1}A_{1k}(x)C(k,m)y_{0}^{m}( y-x^{-b_{1}/\beta_{1}})^{k-m} \tag{17}\] \[= \sum_{k=1}^{\beta_{1}-1}\sum_{m=0}^{k-1}(-1)^{m-k}A_{1k}(x)C(k,m) x^{-kb_{1}/\beta_{1}}(1-x^{b_{1}/\beta_{1}}y)^{k-m}.\]
Now return to \(A_{1}(x,y)\) in (14) and find for \(U_{\beta_{1}-2}\) in (15)
\[U_{\beta_{1}-2}=\frac{1}{\Pi_{1}(x)}\cdot\frac{\hat{U}}{1-x^{b_{1}/\beta_{1}}y}= \frac{1}{\Pi_{1}(x)}\sum_{k=1}^{\beta_{1}-1}\sum_{m=0}^{k-1}(-1)^{m-k}A_{1k}(x)C (k,m)x^{-kb_{1}/\beta_{1}}(1-x^{b_{1}/\beta_{1}}y)^{k-1-m}\;,\]
so that \(U_{\beta_{1}-2}\) is an integral function of the degree \(\beta_{1}-2\) in \(y\) and the assumption \(\beta_{i}<\rho+2\) allows to drop this term from further consideration. Then using (15) we have
\[C^{1}_{r,\rho} = C[x^{r}y^{\rho}]\left(\frac{A_{1}(x,y)}{1-x^{b_{1}/\beta_{1}}y} \right)=C[x^{r}y^{\rho}]\left(\frac{A_{1}(x,x^{-b_{1}/\beta_{1}})}{1-x^{b_{1}/ \beta_{1}}y}\right)\] \[= C[x^{r}]\left(x^{\rho b_{1}/\beta_{1}}A_{1}(x,x^{-b_{1}/\beta_{1 }})\right)=C[x^{r-\rho b_{1}/\beta_{1}}]\left(A_{1}(x,x^{-b_{1}/\beta_{1}})\right)\] \[= C[x^{r\beta_{1}-\rho b_{1}}]\left(A_{1}(x^{\beta_{1}},x^{-b_{1}} )\right).\]
From (10) we obtain
\[A_{1}(x^{\beta_{1}},x^{-b_{1}})=\prod_{i\neq 1}^{m}(1-x^{b_{i}\beta_{1}-b_{1} \beta_{i}})^{-1},\]
and arrive at
\[C^{1}_{r,\rho}=C[x^{r\beta_{1}-\rho b_{1}}]\left(\prod_{i\neq 1}^{m}(1-x^{b_{i} \beta_{1}-b_{1}\beta_{i}})^{-1}\right)=C[x^{r\beta_{1}-\rho b_{1}}]\left(1/ \Pi_{1}(x)\right). \tag{19}\]
### Double partition as a sum of scalar partitions
Repeating computation in Sections 2.1 and 2.2 for each \(T_{i}(x,y),\;1\leq i\leq m\), we obtain for double partition function using (19)
\[W(\mathbf{s},\mathbf{D})=\sum_{k=1}^{m}C^{k}_{r,\rho}=\sum_{k=1}^{m}C[x^{r\beta_{ k}-\rho b_{k}}]\left(\prod_{i\neq k}^{m}(1-x^{b_{i}\beta_{k}-b_{k}\beta_{i}})^{ -1}\right).\]
Recall the SPF definition (3) and rewrite it as
\[W(s,\mathbf{d})=C[t^{s}]\left(\prod_{i=1}^{m}(1-t^{d_{i}})^{-1}\right).\]
It allows to obtain an expression of double partition as a sum of scalar partitions
\[W(\mathbf{s},\mathbf{D})=\sum_{i=1}^{m}W_{i}^{2}(\mathbf{s})=\sum_{i=1}^{m}W(L _{i},\mathbf{d}_{i}),\quad L_{i}=r\beta_{i}-b_{i}\rho,\quad d_{ij}=b_{j}\beta _{i}-b_{i}\beta_{j},\;j\neq i. \tag{20}\]
This compact expression is the main result of Cayley algorithm presented in [2]. Introducing \(2\times 2\) matrices made of \(\mathbf{c}_{i}\) and the columns of augmented matrix \(\mathbf{E}_{i}=\{\mathbf{c}_{0},\mathbf{c}_{1},\ldots,\mathbf{c}_{i-1}, \mathbf{c}_{i+1},\ldots,\mathbf{c}_{m}\}\) obtained from \(\mathbf{E}\) by removal of \(\mathbf{c}_{i}\)
\[\mathbf{D}_{i0}=\{\mathbf{c}_{0}=\mathbf{s},\mathbf{c}_{i}\},\quad\mathbf{D}_ {ij}=\{\mathbf{c}_{j},\mathbf{c}_{i}\},\quad j\neq i, \tag{21}\]
we observe that \(L_{i}=\mathcal{D}_{i0}\) and elements of \(\mathbf{d}_{i}\) are given by \(d_{ij}=\mathcal{D}_{ij}\), where \(\mathcal{D}_{ij}=\det\mathbf{D}_{ij}\).
Introduce an operation \(\mathscr{R}_{i}\) acting on \(2\times(m+1)\) augmented matrix \(\mathbf{E}\) as follows - first \(\mathbf{E}\) is split into the column \(\mathbf{c}_{i}\) and the matrix \(\mathbf{E}_{i}\), and then \(m\) determinants \(\mathcal{D}_{ij}\) are computed to form vector argument \(\mathcal{E}_{i}=\mathscr{R}_{i}[\mathbf{E}]\) of the scalar partition
\[W_{i}^{2}(\mathbf{s})=W(\mathcal{E}_{i}),\quad\mathcal{E}_{i}=\mathscr{R}_{i}[ \mathbf{E}]=\{\mathcal{D}_{ij}\},\quad 0\leq j\leq m,\ j\neq i. \tag{22}\]
Linear independence of columns \(\mathbf{c}_{i}\) implies that all elements in the generator sets \(\mathbf{d}_{i}\) are nonzero, but some of these might be negative (say, \(d_{ij_{k}}<0\) for \(1\leq k\leq K\)). Noting that \((1-t^{-a})^{-1}=-t^{a}(1-t^{a})^{-1}\) we find from (20)
\[W(L_{i},\mathbf{d}_{i})=(-1)^{K}W(L_{i}+\sum_{k=1}^{K}d_{ij_{k}},|\mathbf{d}_ {i}|),\quad|\mathbf{d}_{i}|=\{|d_{ij}|\}. \tag{23}\]
The solution (20) is not unique as we can eliminate the first row of \(\mathbf{E}\) and obtain
\[W(\mathbf{s},\mathbf{D})=\sum_{i=1}^{m}W(L_{i}^{\prime},\mathbf{d}_{i}^{ \prime}),\quad L_{i}^{\prime}=-L_{i},\quad\mathbf{d}^{\prime}=-\mathbf{d}. \tag{24}\]
As the term \(W(L_{i},\mathbf{d}_{i})\) in (20) and its counterpart \(W(L_{i}^{\prime},\mathbf{d}_{i}^{\prime})\) in (24) contribute only for nonnegative \(L_{i}\) and \(L_{i}^{\prime}\) respectively, we observe that the terms \(W(L_{i},\mathbf{d}_{i})\) and \(W(L_{i}^{\prime},\mathbf{d}_{i}^{\prime})\) belong to two adjacent chambers separated by the line \(L_{i}=0\) on which they coincide.
## 3 General case of double partition reduction
It was underlined above that reduction of double partition into a sum of scalar partitions (20) is possible when several conditions on the elements of generator matrix are met. In this Section we present an alternative approach that allows to drop these conditions and obtain less compact but equivalent reduction to scalar partitions.
### Partial fraction expansion
Consider expansion of \(S_{1}\) in (12) into rational functions \(A_{1k}(x)\). To find these functions write
\[S_{1}(x,y_{0})=\prod_{i\neq 1}^{m}\frac{(1-x^{\beta_{1}b_{1}}y_{0}^{\beta_{1} \beta_{i}})}{(1-x^{b_{i}}y_{0}^{\beta_{i}})}=\prod_{i\neq 1}^{m}\left(\sum_{k_{i}=0 }^{\beta_{1}-1}x^{k_{i}b_{i}}y_{0}^{k_{i}\beta_{i}}\right). \tag{25}\]
Introduce \((m-1)\)-dimensional vectors
\[\boldsymbol{b}_{1}^{\prime}=\{b_{2},b_{3},\ldots,b_{m}\},\ \boldsymbol{\beta}_{1}^{ \prime}=\{\beta_{2},\beta_{3},\ldots,\beta_{m}\},\ \boldsymbol{K}_{p}^{\prime}=\{k_{2},k_{3},\ldots,k_{m}\}, \tag{26}\]
with \(0\leq k_{i}\leq\beta_{1}-1\). Expanding r.h.s. of (25) we obtain a sum
\[S_{1}(x,y_{0})=\sum_{p=1}^{n_{1}}x^{\boldsymbol{K}_{p}^{\prime}\cdot \boldsymbol{b}_{1}^{\prime}}y_{0}^{\boldsymbol{K}_{p}^{\prime}\cdot \boldsymbol{\beta}_{1}^{\prime}},\quad n_{1}=\beta_{1}^{m-1},\quad y_{0}=x^{-b _{1}/\beta_{1}}. \tag{27}\]
For each \(\boldsymbol{K}_{p}^{\prime}\) we have
\[x^{\boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{b}_{1}^{\prime}}y_{0}^{ \boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{\beta}_{1}^{\prime}}=x^{ \boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{b}_{1}^{\prime}-b_{1}(\boldsymbol {K}_{p}^{\prime}\cdot\boldsymbol{\beta}_{1}^{\prime})/\beta_{1}}=x^{\nu}=x^{j _{x}}y_{0}^{j_{y}},\]
where the rational exponent \(\nu\) of \(x\) is split into an integer \(j_{x}\) and a fractional \((\nu-j_{x})\) parts. The fractional part of \(\nu\) gives an exponent \(j_{y}\) of \(y_{0}\), while \(x^{j_{x}}\) contributes to \(A_{1k}(x)\) in (12). We observe that
\[j_{y}=(\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})\bmod\beta_{1}. \tag{28}\]
Introduce \(L_{1}\)-norm of \(m\)-component vector \(|\mathbf{a}|=\sum_{i=1}^{m}a_{i}\) and use (28) to obtain
\[\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime}=\beta_{1}t+j_{y},\ t=\lfloor( \mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})/\beta_{1}\rfloor,\ 0\leq t=t(j_{y})\leq\lfloor(B_{1}-j_{y})/\beta_{1}\rfloor,\ B_{1}=(\beta_{1}-1 )|\mathbf{\beta}_{1}^{\prime}|, \tag{29}\]
where \(\lfloor\cdot\rfloor\) denotes the greatest integer less than or equal to real number. This leads to
\[x^{\mathbf{K}_{p}^{\prime}\cdot\mathbf{b}_{1}^{\prime}}y_{0}^{\mathbf{K}_{p}^{\prime}\cdot \mathbf{\beta}_{1}^{\prime}}=x^{\mathbf{K}_{p}^{\prime}\cdot\mathbf{b}_{1}^{\prime}-b_{1} \lfloor(\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})/\beta_{1}\rfloor}y_{ 0}^{(\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})\bmod\beta_{1}}. \tag{30}\]
and we have for \(A_{1}(x,y)\)
\[A_{1}(x,y)=\frac{1}{\Pi_{1}(x)}\sum_{j_{y}=0}^{\beta_{1}-1}A_{1,j_{y}}(x)y^{j _{y}}\, \tag{31}\]
with
\[A_{1,j_{y}}(x)=\sum_{j_{x}=N_{1}^{-}}^{N_{1}^{+}}a_{1,j_{x},j_{y}}x^{j_{x}}, \quad N_{1}^{-}=\min(\mathbf{K}_{p}^{\prime}\cdot\mathbf{b}_{1}^{\prime}-b_{1}\lfloor (\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})/\beta_{1}\rfloor),\quad N_{ 1}^{+}=(\beta_{1}-1)|\mathbf{b}_{1}^{\prime}|. \tag{32}\]
The integer coefficients \(a_{1,j_{x},j_{y}}\) are computed from the relation
\[\sum_{j_{y}=0}^{\beta_{1}-1}\sum_{j_{x}=N_{1}^{-}}^{N_{1}^{+}}a_{1,j_{x},j_{y }}x^{j_{x}}y^{j_{y}}=\sum_{p=1}^{n_{1}}x^{\mathbf{K}_{p}^{\prime}\cdot\mathbf{b}_{1}^{ \prime}-b_{1}\lfloor(\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})/\beta_{ 1}\rfloor}y^{(\mathbf{K}_{p}^{\prime}\cdot\mathbf{\beta}_{1}^{\prime})\bmod\beta_{1}},\quad n_{1}=\beta_{1}^{m-1}, \tag{33}\]
and the details of the computation are presented in Appendix A. The relation (31) leads to
\[T_{1}(x,y)=\frac{A_{1}(x,y)}{1-x^{b_{1}}y^{\beta_{1}}}=\frac{1}{\Pi_{1}(x)(1-x ^{b_{1}}y^{\beta_{1}})}\sum_{j_{y}=0}^{\beta_{1}-1}A_{1,j_{y}}(x)y^{j_{y}}. \tag{34}\]
### Contribution evaluation
Drop the assumption (14) and find \(C_{r,\rho}^{1}\)
\[C_{r,\rho}^{1}=C[x^{r}]\left(\Pi_{1}^{-1}(x)C[y^{\rho}]\left(S_{1}(x,y)/(1-x^{ b_{1}}y^{\beta_{1}})\right)\right). \tag{35}\]
Consider the inner term in (35) using (12)
\[C[y^{\rho}]\left(\frac{S_{1}(x,y)}{1-x^{b_{1}}y^{\beta_{1}}}\right)=C[y^{\rho} ]\left(\sum_{j_{y}=0}^{\beta_{1}-1}\sum_{p=0}^{\infty}A_{1,j_{y}}(x)x^{pb_{1} }y^{j_{y}+p\beta_{1}}\right)=\sum_{j_{y}=0}^{\beta_{1}-1}\sum_{p=0}^{\infty}A_ {1,j_{y}}(x)x^{pb_{1}}C[y^{\rho-j_{y}}]\left(y^{p\beta_{1}}\right).\]
The condition \(p\beta_{1}=\rho-j_{y}\) on integer value of \(p\) reduces the inner sum to a single term and we find
\[C[y^{\rho}]\left(\frac{S_{1}(x,y)}{1-x^{b_{1}}y^{\beta_{1}}}\right)=\sum_{j_{y }=0}^{\beta_{1}-1}A_{1,j_{y}}(x)x^{pb_{1}}=\sum_{j_{y}=0}^{\beta_{1}-1}A_{1,j_ {y}}(x)x^{(\rho-j_{y})b_{1}/\beta_{1}}=\sum_{j_{y}=0}^{\beta_{1}-1}A_{1,j_{y} }(x)x^{\rho b_{1}/\beta_{1}}y_{0}^{j_{y}}. \tag{36}\]
Note that the same condition on \(p\) implies that the sum (36) is equivalent to a single term
\[C[y^{\rho}]\left(S_{1}(x,y)/(1-x^{b_{1}}y^{\beta_{1}})\right)=A_{1,j_{y}}(x)x^{pb _{1}}=A_{1,j_{y}}(x)x^{(\rho-j_{y})b_{1}/\beta_{1}},\;j_{y}=\rho\bmod\beta_{1}, \tag{37}\]
where the polynomial \(A_{1,j_{y}}(x)\) given by (32) has integer coefficients \(a_{1,j_{x},j_{y}}\) determined by (33). It provides an expression for the contribution \(C^{1}_{r,\rho}\) of the column \({\bf c}_{1}\)
\[C^{1}_{r,\rho}=C[x^{r}]\left(\Pi_{1}^{-1}(x){\sum_{j=N_{1}^{-}}^{N_{1}^{+}}}a_{ 1,j_{x},j_{y}}x^{j+p}\right)={\sum_{j_{x}=N_{1}^{-}}^{N_{1}^{+}}}a_{1,j_{x},j_{ y}}C[x^{r-j_{x}-(\rho-j_{y})b_{1}/\beta_{1}}]\left(\Pi_{1}^{-1}(x)\right). \tag{38}\]
On the other hand using (36) in (35) we obtain
\[C^{1}_{r,\rho} = C[x^{r}]\left(\Pi_{1}^{-1}\sum_{j_{y}=0}^{\beta_{1}-1}A_{1,j_{y} }(x)x^{\rho b_{1}/\beta_{1}}y_{0}^{j_{y}}\right)=C[x^{r-\rho b_{1}/\beta_{1}} ]\left(\Pi_{1}^{-1}\sum_{j_{y}=0}^{\beta_{1}-1}A_{1,j_{y}}(x)y_{0}^{j_{y}}\right) \tag{39}\] \[= C[x^{r-\rho b_{1}/\beta_{1}}]\left(S_{1}(x,y_{0})/\Pi_{1}\right) =C[x^{r-\rho b_{1}/\beta_{1}}]\left(A_{1}(x,y_{0})\right).\]
Recalling that \(y_{0}=x^{-b_{1}/\beta_{1}}\) and comparing (39) to (18) we observe that both approaches produce the same result. Note however that the transformation
\[C[x^{r-\rho b_{1}/\beta_{1}}]\left(A_{1}(x,x^{-b_{1}/\beta_{1}})\right)=C[x^{ r\beta_{1}-\rho b_{1}}]\left(A_{1}(x^{\beta_{1}},x^{-b_{1}})\right),\]
used in (18) allowing to obtain (19) and (20) fails when \(b_{1}=0,\beta_{1}>1\). It is important to underline that the presented algorithm does not impose any restrictions on the value of \(\rho\).
Now using the reasoning in Section 2.3 we write
\[C^{1}_{r,\rho}=\bar{W}_{1}^{2}({\bf s})={\sum_{j_{x}=N_{1}^{-}}^{N_{1}^{+}}}a _{1,j_{x},j_{y}}W(r-j_{x}-(\rho-j_{y})b_{1}/\beta_{1},{\bf d}_{1}), \tag{40}\]
\[j_{y}=\rho\bmod\beta_{1},\quad N_{1}^{+}=(\beta_{1}-1)|\boldsymbol{b}_{1}^{ \prime}|,\quad d_{1j}=b_{j}\beta_{1}-b_{1}\beta_{j},\;j\neq 1.\]
The vector \({\bf d}_{1}\) coincides with that of in (20) and its elements can be computed a \(2\times 2\) determinants \({\bf D}_{ik}\) as shown in (21). The only difference is in the first symbolic argument of SPFs in (43). We observe that it also can be written as a determinant of \(\{({\bf s}-{\bf j})/\beta_{1},{\bf c}_{1}\}\), where \({\bf j}=\{j_{x},j_{y}\}^{T}\). Introduce a modified augmented matrix \({\bf E}_{1}({\bf j})\) and write \(W_{1}^{2}\) in (40) as
\[\bar{W}_{1}^{2}({\bf s})={\sum_{j_{x}=N_{1}^{-}}^{N_{1}^{+}}}a_{1,{\bf j}}W( \mathscr{R}_{1}[{\bf E}_{1}({\bf j})]),\quad{\bf E}_{1}({\bf j})=\{({\bf s}-{ \bf j})/\beta_{1},{\bf c}_{1},{\bf c}_{2},\ldots,{\bf c}_{m}\},\]
that can be presented in more general form
\[\bar{W}_{1}^{2}({\bf s})={\sum_{{\bf j}={\bf N}_{1}^{-}}^{{\bf N}_{1}^{+}}}a _{1,{\bf j}}W(\mathscr{R}_{1}[{\bf E}_{1}({\bf j})])\delta_{j_{y},\rho\bmod \beta_{1}},\quad{\bf j}=\{j_{x},j_{y}\}^{T}, \tag{41}\]
\[{\bf N}_{1}^{-}=\{\min(\boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{b}_{1}^{ \prime}-b_{1}\lfloor(\boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{\beta}_{1}^{ \prime})/\beta_{1}\rfloor),0\}^{T},\quad{\bf N}_{1}^{+}=(\beta_{1}-1)\{| \boldsymbol{b}_{1}^{\prime}|,1\}^{T}.\]
Comparing (41) to (22) we present \(\bar{W}_{1}^{2}\) as a weighted sum of \(W_{1}^{2}\) with shifted argument
\[\bar{W}_{1}^{2}({\bf s})={\sum_{{\bf j}={\bf N}_{1}^{-}}^{{\bf N}_{1}^{+}}}a_{1,{\bf j}}W_{1}^{2}(({\bf s}-{\bf j})/\beta_{1})\delta_{j_{y},\rho\bmod\beta_{1 }},\quad{\sum_{{\bf j}={\bf N}_{1}^{-}}^{{\bf N}_{1}^{+}}}a_{1,{\bf j}}=\beta_ {1}^{m-1}. \tag{42}\]
### Column with nonrelatively prime elements
The Cayley reduction method for double partition fails when at least one of the columns, say the first column \(\mathbf{c}_{1}\), has GCD of its elements larger than unity \(\gcd(\mathbf{c}_{1})=\gcd(b_{1},\beta_{1})=g_{1}>1\), and we write \(b_{1}=g_{1}b^{\star},\ \beta_{1}=g_{1}\beta^{\star}\). This case can be treated as discussed above in Sections 3.1 and 3.2 leading to \(C^{1}_{r,\rho}\) in (38). The only difference that this expansion cannot be reduced to a single term as in (39). The reason is that \(y_{0}=x^{-b_{1}/\beta_{1}}=x^{-b^{\star}/\beta^{\star}}\) and thus upper limit in the sum in (39) would equal not \(\beta_{1}\) but \(\beta^{\star}\), thus preventing a desired compactification.
### Double partition as a sum of scalar partitions
The solution (38) extended to other terms in PFE gives an expanded form of double partition equivalent to (20)
\[W(\mathbf{s},\mathbf{D})=\sum_{i=1}^{m}\bar{W}_{i}^{2}(\mathbf{s} ),\quad\bar{W}_{i}^{2}(\mathbf{s})=\sum_{j_{x}=N_{i}^{-}}^{N_{i}^{+}}a_{i,j_{ x},j_{y}}W(r-j_{x}-(\rho-j_{y})b_{i}/\beta_{i},\mathbf{d}_{i}), \tag{43}\] \[N_{i}^{+}=(\beta_{i}-1)|\boldsymbol{b}_{i}^{\prime}|,\quad d_{ ij}=b_{j}\beta_{i}-b_{i}\beta_{j},\ j\neq i,\] \[N_{i}^{-}=\min(\boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{b}_{ i}^{\prime}-b_{i}\lfloor(\boldsymbol{K}_{p}^{\prime}\cdot\boldsymbol{\beta}_{i}^{ \prime})/\beta_{i}\rfloor),\ \boldsymbol{K}_{p}^{\prime}=\{k_{1},k_{2},\ldots,k_{i-1},k_{i+1},\ldots,k_{m}\},\]
where the vectors \(\boldsymbol{b}_{i}^{\prime},\boldsymbol{\beta}_{i}^{\prime}\) are given by
\[\boldsymbol{b}_{i}^{\prime}=\{b_{1},b_{2},\ldots,b_{i-1},b_{i+1},\ldots,b_{m} \},\quad\boldsymbol{\beta}_{i}^{\prime}=\{\beta_{1},\beta_{2},\ldots,\beta_{i -1},\beta_{i+1},\ldots,\beta_{m}\}.\]
Using the notation introduced in (41) present \(\bar{W}_{i}^{2}(\mathbf{s})\) in (43) as
\[\bar{W}_{i}^{2}(\mathbf{s})=\sum_{\mathbf{j}=\mathbf{N}_{i}^{-}}^ {\mathbf{N}_{i}^{+}}a_{i,\mathbf{j}}W_{i}^{2}((\mathbf{s}-\mathbf{j})/\beta_{ i})\delta_{j_{y},\rho\bmod\beta_{i}},\quad\mathbf{j}=\{j_{x},j_{y}\}^{T}, \tag{44}\] \[\mathbf{N}_{i}^{-}=\{\min(\boldsymbol{K}_{p}^{\prime}\cdot \boldsymbol{b}_{i}^{\prime}-b_{i}\lfloor(\boldsymbol{K}_{p}^{\prime}\cdot \boldsymbol{\beta}_{i}^{\prime})/\beta_{i}\rfloor),0\}^{T},\quad\mathbf{N}_{i} ^{+}=(\beta_{i}-1)\{|\boldsymbol{b}_{i}^{\prime}|,1\}^{T}.\]
Note that when _columns \(\mathbf{c}_{i}\) are noncollinear_ the contributions \(C^{i}_{r,\rho}\) can be computed independently of each other and in this case one can write an expression for _double partition as a mixture of terms \(W_{i}^{2}\) and \(\bar{W}_{j}^{2}\)_.
### Double partition with collinear columns
The strongest condition for Cayley algorithm applicability is the linear independence of the generator matrix columns. It appears that in case when it fails a double partition can be reduced to a superposition of scalar partition convolutions.
Assume that vectors corresponding to the first \(n\) columns of the generator matrix \(\mathbf{D}\) are parallel and rewrite the linear system \(\mathbf{D}\cdot\mathbf{x}=\mathbf{s}\) as follows
\[\sum_{i=1}^{m}\mathbf{c}_{i}x_{i}=\mathbf{s},\quad\mathbf{c}_{i}=u_{i}\mathbf{ c},\quad\mathbf{c}=\{b,\beta\},\quad 1\leq i\leq n<m, \tag{45}\]
where \(u_{i}\) are positive integers. The problem (45) is equivalent to
\[\sum_{i=n+1}^{m}\mathbf{c}_{i}x_{i}=\mathbf{s}-l\mathbf{c},\quad\sum_{i=1}^{n }u_{i}x_{i}=l,\quad 0\leq l\leq l_{max}=\min(r/b,\rho/\beta). \tag{46}\]
Introduce a vector \({\bf u}=\{u_{1},u_{2},\ldots,u_{n}\}\) and a new matrix with \((m-n)\) columns \({\bf D}_{n+1}=\{{\bf c}_{n+1},\ldots,{\bf c}_{m}\}\), for which the corresponding double partition \(W({\bf D}_{n+1},{\bf s}-l{\bf c})\) admits a reduction to SPFs either as a sum of \(W_{i}^{2}\) or a mixture of \(W_{i}^{2}\) and \(\bar{W}_{j}^{2}\). The number of solutions of (46) for given value of \(l\) is equal to a number of solutions of the first equation given by double partition \(W({\bf D}_{n+1},{\bf s}-l{\bf c})\) multiplied by a number of solutions of the second equation, _i.e._, scalar partition \(W(l,{\bf u})\). Then the double partition \(W({\bf D},{\bf s})\) is equivalent to a convolution
\[W({\bf D},{\bf s})=\sum_{l=0}^{l_{max}}W(l,{\bf u})W({\bf D}_{n+1},{\bf s}-l{ \bf c}),\quad l_{max}=\min(r/b,\rho/\beta). \tag{47}\]
## 4 Double partitions with zero column
Computation of contribution to double partition related to a column \({\bf c}_{1}\) with zero element can be divided into three separate cases: a) \(b_{1}>0,\ \beta_{1}=0\), b) \(b_{1}=0,\ \beta_{1}=1\), c) \(b_{1}=0,\ \beta_{1}>1\).
### \(b_{1}>0,\ \beta_{1}=0\)
In this case application of (19) gives
\[C^{1}_{r,\rho}=C[x^{-\rho b_{1}}]\left(1/\Pi_{1}(x)\right)=0,\quad\Pi_{1}(x)= \prod_{i\neq 1}^{m}(1-x^{-b_{1}\beta_{i}}),\]
as the coefficient of \(x\) with negative exponent vanishes. The corresponding scalar partition function reads
\[C^{1}_{r,\rho}=W(-\rho b_{1},\{-b_{1}\beta_{2},-b_{1}\beta_{3}, \ldots,-b_{1}\beta_{m}\})=(-1)^{m-1}W(-b_{1}\rho-b_{1}\sum_{i=2}^{m}\beta_{i}, \{b_{1}\beta_{2},b_{1}\beta_{3},\ldots,b_{1}\beta_{m}\})\] \[=(-1)^{m-1}W(-\rho-\sum_{i=2}^{m}\beta_{i},\{\beta_{2},\beta_{3},\ldots,\beta_{m}\})=0, \tag{48}\]
where the last expression is obtained by cancelling the common factor \(b_{1}\) in the elements of the augmented single row matrix. Thus all columns with \(\beta_{i}=0\)_do not_ contribute into the sum in (20) and (43). This result is easy to understand by noticing that for every column with \(\beta_{i}=0\) the corresponding factor in (8) does not depend on \(y\) and thus the procedure of \(y\) elimination does not involve these terms. In other words, the corresponding term \(T_{i}\) in the sum in (8) is just equal to zero.
### \(b_{1}=0,\ \beta_{1}=1\)
In a particular case \(\beta_{1}=1\) we have (31) with \(A_{10}=1\), _i.e._, \(S_{1}(x,y)=1\) and \(\Pi_{1}(x)=\prod_{i\neq 1}^{m}(1-x^{b_{i}})\). Use (39) to find
\[C^{1}_{r,\rho}=C[x^{r}]\left(1/\Pi_{1}(x)\right)=W(r,{\bf d}_{1}),\quad d_{1j} =b_{j},\ j\neq 1. \tag{49}\]
It is worth to underline that (49) can be also obtained by direct implementation of the original Cayley algorithm with \(b_{1}=0,\ \beta_{1}=1\) to find from (19)
\[C^{1}_{r,\rho}=C[x^{r}]\left(\prod_{i\neq 1}^{m}(1-x^{b_{i}})^{-1}\right)=C[x^{ r}]\left(1/\Pi_{1}(x)\right)=W(r,{\bf d}_{1}),\quad d_{1j}=b_{j},\ j\neq 1. \tag{50}\]
### \(b_{1}=0,\ \beta_{1}>1\)
Consider a case when a single element in the first row of matrix \({\bf D}\) equals zero and apply the algorithm presented above in Sections 3.1, 3.2. Without loss of generality we can choose this _zero column_ of \({\bf D}\) to be the first one - \({\bf c}_{1}\), _i.e._, \(b_{1}=0\). In this case (8) turns into
\[G({\bf t},{\bf D})=(1-y^{\beta_{1}})^{-1}\prod_{i=2}^{m}(1-x^{b_{i}}y^{\beta_{i }})^{-1}=G_{1}(x,y)+\sum_{i=2}^{m}T_{i}(x,y),\quad G_{1}(x,y)=\frac{A_{1}(x,y)}{ 1-y^{\beta_{1}}}, \tag{51}\]
where \(T_{i}(x,y)\) is given in (8) and processed as shown above leading to the term \(W(L_{i},{\bf d}_{i})\) in (20).
Repeating the steps discussed in Sections 3.1, 3.2 for \(b_{1}=0\) we obtain a particular case of (40)
\[C^{1}_{r,\rho}=\bar{W}_{1}^{2}=\sum_{j_{x}=0}^{N_{1}^{+}}a_{1,j_{x},j_{y}}W(r -j_{x},\beta_{1}{\bf d}_{1}),\quad j_{y}=\rho\bmod\beta_{1},\quad d_{1i}=b_{i },\ i\neq 1. \tag{52}\]
Note that scalar partition function \(W(s,\beta_{1}{\bf d}_{1})=W(s/\beta_{1},{\bf d}_{1})\) is nonzero only if \(s\) is divisible by \(\beta_{1}\), and thus only terms with \(j_{x}\equiv r\ (\bmod\beta_{1})\) contribute into the sum in (52)
\[\bar{W}_{1}^{2}=\sum_{j_{x}=0}^{N_{1}^{+}}a_{1,j_{x},j_{y}}W((r-j_{x})/\beta_{ 1},{\bf d}_{1}),\ j_{y}=\rho\bmod\beta_{1},\ j_{x}\equiv r\ (\bmod\beta_{1}),\ d_{1i}=b_{i},\ i\neq 1. \tag{53}\]
Computation of coefficients \(a_{1,j_{x},j_{y}}\) is a particular case of the general algorithm discussed in Appendix A; in Appendix B we present an example of the reduction of a double partition with zero column to a set of SPFs.
### Multiple zero columns
Generalization of the result for a single zero column discussed above to a case of multiple such columns with \(b_{i}=0,\ \beta_{i}>0,\ 1\leq i\leq n<m\), presents a particular example of linear dependent columns. Use the results (45-47) presented in Section 3.5 with \({\bf c}=\{0,1\},\ u_{i}=\beta_{i},\ 1\leq i\leq n<m\), and \(l_{max}=\rho\) to obtain a convolution
\[W({\bf D},{\bf s})=\sum_{l=0}^{\rho}W(l,{\bf u})W({\bf D}_{n+1},{\bf s}-l{\bf c }). \tag{54}\]
Note that the necessity to apply this approach arises quite rarely, namely, when both rows of the generator matrix \({\bf D}\) have at least two zero elements.
### Alternative expression for zero column contribution
Performing the elimination of the second row we obtain
\[W({\bf s},{\bf D})=\bar{W}_{1}^{2}+\sum_{i=2}^{m}W_{i}^{2}=\bar{W}_{1}^{2}+ \sum_{i=2}^{m}W(L_{i},{\bf d}_{i}),\quad L_{i}=r\beta_{i}-b_{i}\rho,\quad d_{ ij}=b_{j}\beta_{i}-b_{i}\beta_{j},\ j\neq i. \tag{55}\]
The same time the first row elimination as shown in (24) produces
\[W({\bf s},{\bf D})=\sum_{i=2}^{m}W(L^{\prime}_{i},{\bf d}^{\prime}_{i})=\sum_ {i=2}^{m}W(-L_{i},-{\bf d}_{i}), \tag{56}\]
as the term corresponding to the first column vanishes. From (55,56) we find an alternative expression for the contribution of the column with \(b_{1}=0\) as
\[\bar{W}_{1}^{2}=\sum_{i=2}^{m}\left[W(-L_{i},-{\bf d}_{i})-W(L_{i},{\bf d}_{i}) \right]. \tag{57}\]
## 5 Conclusion
The double partition problem subject to specific conditions considered in [2] admits an elegant compact solution in which the \(i\)-th column of the positive generator matrix \({\bf D}\) leads to a single scalar partition contribution \(W_{i}\). This solution is obtained by elimination of a variable of a corresponding generating function that in its turn requires application of partial fraction expansion to the generating function. The main steps of method are discussed in Section 2; it can be applied when the generator matrix has linearly independent columns, and column elements are positive and relatively prime.
In Section 3 we present a modification of Cayley method that instead of a single term \(W_{i}\) produces its equivalent \(\bar{W}_{i}\) as a weighted sum of \(W_{i}\) with shifted argument. Computation of the coefficients in these expressions can be reduced to a finite sum of double partitions with generator matrix of smaller size (Appendix A). We show that the superposition \(\bar{W}_{i}\) is equivalent to the single term \(W_{i}\) only when the specific conditions on the generator matrix elements are met and such compactification fails when the restrictions are lifted. An example of nonreducible superposition \(\bar{W}_{i}\) is given in Section 3.3 where we consider columns with elements that are not relatively prime. In case of noncollinear columns each column can be processed independently and thus a double partition can be written as a mixture of terms \(W_{i}\) and \(\bar{W}_{i}\). When a few columns are linear dependent a double partition leads to a _convolution_ of scalar partitions derived in Section 3.5.
The case of the generator matrix \({\bf D}\) with zero elements is considered in Section 4. An example of a reduction of double partition with a single zero column is presented in Appendix B. Matrix with multiple zero columns discussed in Section 4.4 is particular case of collinear columns reducible to SPF convolution.
In conclusion we show that any double partition can be expressed through superposition or convolution of scalar partitions. All components of this representation are computable using the same algorithm that makes the double partition problem self-contained. As each scalar partition term \(W(L_{i},{\bf d}_{i})\) has nonzero contribution only for \(L_{i}\geq 0\), reduction of double partition to SPFs allows simple determination of partition chambers. A possibility of extension of this result to multiple partitions corresponding to matrices with more than two rows remains an open question and will be discussed elsewhere.
|
2308.02165 | Diffusion probabilistic models enhance variational autoencoder for
crystal structure generative modeling | The crystal diffusion variational autoencoder (CDVAE) is a machine learning
model that leverages score matching to generate realistic crystal structures
that preserve crystal symmetry. In this study, we leverage novel diffusion
probabilistic (DP) models to denoise atomic coordinates rather than adopting
the standard score matching approach in CDVAE. Our proposed DP-CDVAE model can
reconstruct and generate crystal structures whose qualities are statistically
comparable to those of the original CDVAE. Furthermore, notably, when comparing
the carbon structures generated by the DP-CDVAE model with relaxed structures
obtained from density functional theory calculations, we find that the DP-CDVAE
generated structures are remarkably closer to their respective ground states.
The energy differences between these structures and the true ground states are,
on average, 68.1 meV/atom lower than those generated by the original CDVAE.
This significant improvement in the energy accuracy highlights the
effectiveness of the DP-CDVAE model in generating crystal structures that
better represent their ground-state configurations. | Teerachote Pakornchote, Natthaphon Choomphon-anomakhun, Sorrjit Arrerut, Chayanon Atthapak, Sakarn Khamkaeo, Thiparat Chotibut, Thiti Bovornratanaraks | 2023-08-04T06:53:22Z | http://arxiv.org/abs/2308.02165v1 | Diffusion probabilistic models enhance variational autoencoder for crystal structure generative modeling
###### Abstract
The crystal diffusion variational autoencoder (CDVAE) is a machine learning model that leverages score matching to generate realistic crystal structures that preserve crystal symmetry. In this study, we leverage novel diffusion probabilistic (DP) models to denoise atomic coordinates rather than adopting the standard score matching approach in CDVAE. Our proposed DP-CDVAE model can reconstruct and generate crystal structures whose qualities are statistically comparable to those of the original CDVAE. Furthermore, notably, when comparing the carbon structures generated by the DP-CDVAE model with relaxed structures obtained from density functional theory calculations, we find that the DP-CDVAE generated structures are remarkably closer to their respective ground states. The energy differences between these structures and the true ground states are, on average, 68.1 meV/atom lower than those generated by the original CDVAE. This significant improvement in the energy accuracy highlights the effectiveness of the DP-CDVAE model in generating crystal structures that better represent their ground-state configurations.
**Keywords:** denoising diffusion probabilistic models, variational autoencoder, crystal structures, ground state, density functional theory
**Open data:** The dataset used in this work has been made available at
[https://github.com/trachote/dp-cdvae](https://github.com/trachote/dp-cdvae)
## I Introduction
Advances in computational materials science has enabled the accurate prediction of novel materials possessing exceptional properties. Remarkably, these computational advancements have facilitated the successful experimental synthesis of materials that exhibit the anticipated properties. Some predicted materials, such as near-room-temperature superconductors, have been successfully synthesized under high-pressure conditions, with their superconducting temperatures in accordance with density functional theory (DFT) calculations [1; 2]. To achieve accurate predictions, _a priori_ knowledge of plausible molecular and crystal structures play a vital role in both theoretical and experimental studies. Several algorithms, such as evolutionary algorithms, swarm particle optimization, random sampling method, and etc., have been employed for structure prediction [3; 4; 5]. These algorithms rely on identifying local minima on the potential energy landscape obtained from DFT calculations [6; 7]. In the case of crystal structures, where atoms are arranged in three-dimensional space with periodic boundaries, additional criteria are necessary to enforce crystal symmetry constraints [5].
Recent approach for structure prediction employs denoising diffusion models to perform probabilistic inference. These models sample molecular and crystal structures from a probability distribution of atomic coordinates and types [8; 9; 10; 11], bypassing the computationally intensive DFT calculation to tediously determine the potential energy landscape. By leveraging sufficiently large datasets containing various compounds, this method enables the generation of diverse compositions and combinations of elements simultaneously. Furthermore, the models allow for the control of desired physical properties of the generated structures through conditional probability sampling [12; 13; 14]. These machine learning-based algorithms also hold promise for solving inverse problem efficiently, resolving structures from experimental characterizations, e.g., x-ray absorption spectroscopy and other techniques, a challenging problem in materials science [15; 16; 17].
There are two primary types of denoising diffusion models: score matching approach and denoising diffusion probabilistic models (DDPM) [18; 19; 20]. These two models can denoise (reverse) a normal distribution such that the distribution gradually transforms into the data distribution of interest. The score matching approach estimates the score function of the perturbed data directing the normal distribution toward the data distribution and employing large step sizes of variance. In contrast,
DDPM gradually denoises the random noise through a joint distribution of data perturbed at different scales of variance.
Since atomic positions in crystal structures are periodic and can be invariant under some rotation groups depending on their crystal symmetry, the core neural networks should favourably possess roto-translational equivariance [21; 22; 23]. Xie et al. [8] has proposed a model for crystal prediction by a combination between variational autoencoder (VAE) [24] and the denoising diffusion model, called crystal diffusion VAE (CDVAE). The model employs the score matching approach with (annealed) Langevin dynamics to generate new crystal structures from random coordinates [18]. The neural networks for an encoder and the diffusion model are rototranslationally equivariant. As a result, CDVAE can generate crystal structures with realistic bond lengths and respect crystal symmetry.
Because of the periodic boundary condition imposed on the unit cell, gradually injecting sufficiently strong noises (in the forward process) to the fractional coordinates can lead to the uniform distribution of atomic positions at late times, the consequence of ergodicity in statistical mechanical sense. Rather than beginning with a Gaussian distribution and denoising it as in the original CDVAE formulation, Jiao et al. [25] perturbed and sampled atomic positions beginning with a wrapped normal distribution which satisfies the periodic boundary condition. With this approach, the reconstruction performance has been significantly improved. Other circular (periodic) distributions, e.g., the wrapped normal and von Mises distributions, are not natural for DDPM framework since there is no known analytical method to explicitly incorporate such distributions into the framework. There, one needs to resort to an additional sampling procedure to construct the DDPM [26].
In this work, we introduce a crystal generation framework called diffusion probabilistic CDVAE (DP-CDVAE). Similar to the original CDVAE, our model consists of two parts: the VAE part and the diffusion part. The purpose of the VAE part is to predict the lattice parameters and the number of atoms in the unit cell of crystal structures. On the other hand, the diffusion part utilizes the diffusion probabilistic approach to denoise fractional coordinates and predict atomic coordinates. By employing the DDPM instead of the score matching approach, the DP-CDVAE model shows reconstruction and generation task performances that are statistically comparable to those obtained from original CDVAE. Importantly, we demonstrate the significantly higher ground-state generation performance of DP-CDVAE, through the distance comparison between generated structures and those optimized using the DFT method. We also analyze the changes in energy and volume after relaxation to gain further insights into models' capabilities.
## II Results
The performances of DP-CDVAE models are herein presented. There are four DP-CDVAE models, differed by the choice of the encoder (see Fig. S1). DimeNet\({}^{++}\) has been employed for the main encoder for every DP-CDVAE models [27]. We then modify the encoder of DP-CDVAE to encode the crystal structure by two additional neural networks: a multilayer perceptron that takes the number of atoms in the unit cell (\(N_{a}\)) as an input, and a graph isomorphism network (GINE) [28]. Their latent features are combined with the latent features from DimeNet\({}^{++}\) through another multilayer perceptron. The \(N_{a}\) is encoded such that the model can decode the \(N_{a}\) accurately, and GINE encoder is inspired by GeoDiff [10] whose model is a combination of SchNet [29] and GINE which yields better performance.
Three datasets, **Perov-5**[30; 31], **Carbon-24**[32], and **MP-20**[33], were selected to evaluate the performance of the model. The Perov-5 dataset consists of perovskite materials with cubic structures, but with variations in the combinations of elements within the structures. The Carbon-24 dataset comprises carbon materials, where the data consists of carbon element with various crystal systems obtained from _ab initio_ random structure searching algorithm at pressure of 10 GPa [32]. The MP-20 dataset encompasses a wide range of compounds and structure types.
### Reconstruction performance
The reconstruction performance is determined by the similarity between reconstructed and ground-truth structures. The similarity can be evaluated using Niggli's algorithm implemented in StructureMatcher method from pymatgen library [34]. The reconstructed and ground-truth structures are similar if they pass the criteria of StructureMatcher which are stol=0.5, angle_tol=10, ltol=0.3. _Match rate_ is the percentage of those structures passed the criteria. If the reconstructed and ground-truth structures are similar under the criteria, the root-mean-square distance between their atomic positions is computed and then normalized by \(\sqrt[3]{V/N_{a}}\), where \(V\) is the unit-cell volume, and \(N_{a}\) is the number of atoms in the unit cell. An average of the distances of every pair of structures (\(\langle\delta_{\text{rms}}\rangle\)), computed from Niggli's algorithm, is used as the performance metric.
Table 1 presents the reconstruction performance of different models for three different datasets: Perov-5, Carbon-24, and MP-20. For the Perov-5 dataset, the DP-CDVAE model achieves a match rate of 90.04%, indicating its ability to reconstruct a significant portion of the ground-truth structures. This performance is slightly lower than the CDVAE model but still demonstrates the effectiveness of our model. In terms of \(\langle\delta_{\text{rms}}\rangle\), the DP-CDVAE model achieves a value of 0.0212, comparable
to the FTCP model [35], but slightly higher than the CDVAE model. Similarly, for the Carbon-24 and MP-20 datasets, the DP-CDVAE model performs well in terms of both match rate and \(\langle\delta_{\rm rms}\rangle\). It achieves match rates of 45.57% and 32.42% for Carbon-24 and MP-20, respectively. The corresponding \(\langle\delta_{\rm rms}\rangle\) values for Carbon-24 and MP-20 are 0.1513 and 0.0383, respectively, comparable to the CDVAE model.
Regarding the DP-CDVAE+\(N_{a}\) model, the additional encoding of \(N_{a}\) into the model leads to improved match rates for all datasets, with an increase of 2-5%. This enhancement can be attributed to the accurate prediction of \(N_{a}\). However, in terms of \(\langle\delta_{\rm rms}\rangle\), only the Perov-5 dataset shows an improvement, with a value of 0.0149. On the other hand, for the Carbon-24 and MP-20 datasets, the \(\langle\delta_{\rm rms}\rangle\) values are higher compared to the DP-CDVAE model.
For the DP-CDVAE+GINE and DP-CDVAE+\(N_{a}\)+GINE models, the additional encoding of GINE into the models leads to a substantial drop in match rates compared to the DP-CDVAE model, particularly for the Perov-5 and Carbon-24 datasets. In contrast, there is a moderate increase in the match rates for the MP-20 dataset. The \(\langle\delta_{\rm rms}\rangle\) values for the Perov-5 and Carbon-24 datasets are comparable to those of the DP-CDVAE model. However, for the MP-20 dataset, the \(\langle\delta_{\rm rms}\rangle\) is noticeably higher in the models with GINE encoder compared to the DP-CDVAE model.
Overall, while the reconstruction performance of the DP-CDVAE model may be slightly lower than the CDVAE model in terms of match rate, it still demonstrates competitive performance with relatively low \(\langle\delta_{\rm rms}\rangle\). The match rate can be enhanced by additionally encoding the \(N_{a}\), but the performance is traded off by the increase in \(\langle\delta_{\rm rms}\rangle\).
### Generation performance
We follow the CDVAE model that used three metrics to determine the generation performance of the models [8]. The first metric is _Validity_ percentage. The structures that are valid for structure and composition tests must satisfy two criteria: distances of every pair of atoms are larger than 0.5 A and the total charge in the unit cell is neutral. The second metric is called _coverage_ (COV), which utilizes structure and composition fingerprints to evaluate the similarity between the generated and ground-truth structures. COV-R (Recall) represents the percentage of ground-truth structures covered by the generated structures. COV-P (Precision) represents the percentage of generated structures that are similar to the ground-truth structures, indicating the quality of the generation. The third metric is the Wasserstein distance
Figure 1: The schematic summarizing the architecture for training the DP-CDVAE model. Multiple sub-networks are trained to minimize the total loss function of Eq.(S1). The encoder (\(G_{\phi}(\mathbf{L}\mathbf{r}_{f},Z,N_{a})\)) compresses input pristine crystal structures into the latent feature (\(\mathbf{z}\)). The predicted lattice parameters (\(\mathbf{L_{z}}\)), the predicted number of atoms (\(N_{\mathbf{z}}\)), and \(\mathbf{A_{z}}\) are decoded from \(\mathbf{z}\). Here, \(\mathbf{A_{z}}\) enables the sampling of atomic types (\(Z_{\rm t}\)), and all the decoded features enable the reconstruction of crystal structures. The input fractional coordinates \(\mathbf{r}_{f}\) undergo perturbation (dash-dotted line) at time step \(t\) and then are transformed by \(\mathbf{\pi}(\cdot)\) to satisfy the periodic boundary condition (dotted line), serving as the coordinates for the reconstructed crystal structures. These reconstructed structures, \((\mathbf{L_{z}r}_{f_{t}},Z_{t},\mathbf{z},t)\), are subsequently fed into the diffusion network (\(D_{\theta}(\mathbf{L_{z}r}_{f_{t}},\mathbf{f_{t}})\)), where \(\mathbf{f_{t}}\) is a node feature composing of \(Z_{t}\), \(\mathbf{z}\), and \(t\). The diffusion network predicts the noise added to the fractional coordinates (\(\mathbf{\epsilon}_{\theta}\)) as well as the one-hot vector of atomic types (\(\mathbf{A}_{\theta}\)), see Sec. IV.3. Dashed-line boxes represent the unit cells of the crystal structures.
between property distributions of generated and ground-truth structures. Three property statistics are density (\(\rho\)), which is total atomic mass per volume (unit g/cm\({}^{3}\)), formation energy (\(E_{form}\), unit eV/atom), and the number of elements in the unit cell (# elem.). A separated and pre-trained neural network is employed to predict \(E\) of the structures where the detail of the pre-training can be found in Ref. [8]. The first and second metrics are computed over 10,240 generated structures, and 1000 structures are randomly chosen from the generated structures that pass the validity tests to compute the third metric. The ground-truth structures used to evaluate the generation performance are from the test set.
In Table 2, the DP-CDVAE model achieves a validity rate of 100% for the Perov-5 dataset and close to 100% for the Carbon-24 and MP-20 datasets in terms of structure. The validity rate for composition is comparable to that of the CDVAE model. The DP-CDVAE model also demonstrates comparable COV-R values to the CDVAE model across all three datasets. Furthermore, the DP-CDVAE models with \(N_{a}\) and/or GINE encoders exhibit similar Validity and COV-R metrics to those of the DP-CDVAE model. However, for COV-P, all DP-CDVAE models yield lower values compared to CDVAE.
On the other hand, our models show significant improvements in property statistics. In the case of the MP-20 dataset, the DP-CDVAE models, particularly those with the GINE encoder, yield substantially smaller Wasserstein distances for \(\rho\), \(E_{form}\), and the number of elements compared to other models. For the Carbon-24 dataset, our models also exhibit a smaller Wasserstein distance for \(\rho\) compared to the CDVAE model.
### Ground-state performance
Another objective of the structure generator is to generate novel structures that also are close to the ground state. To verify that, the generated structures are relaxed using the DFT calculation where the _relaxed structures_ exhibit balanced internal stresses with external pressures and reside in local energy minima. These relaxed structures are then compared with the generated structures to evaluate their similarity. In this study, we have chosen a set of 100 generated structures from each of CDVAE, CDVAE+Fourier, and DP-CDVAE models for relaxation where CDVAE+Fourier model is CDVAE model with Fourier embedding features of the perturbed coordinates. However, relaxation procedures for multi-element compounds can be computationally intensive. To address this, we have specifically selected materials composed solely of carbon atoms, using the model trained on Carbon-24 dataset. This selection ensures a convergence of the self-consistent field in DFT calculation. Moreover, in the relaxation, we consider the ground state of the relaxed structures at a temperature of 0 K and a pressure of 10 GPa since the carbon structures in the training set are stable at 10 GPa [32].
We here introduce a ground-state performance presented in Table 3. The StructureMatcher with the same criteria as in the reconstruction performance is used to evaluate the similarity between the generated and relaxed structures. The relaxed structure was used as a based structure to determine if the generated structure can be matched. Four metrics used to determine the similarity are 1) match rate, 2) \(\langle\delta_{\rm rms}\rangle\), 3) \(\Delta V_{\rm rms}\) and 4) \(\Delta E_{\rm rms}\). The \(\Delta V_{\rm rms}\) and \(\Delta E_{\rm rms}\) represent the root mean square differences in volume and energy, respectively, between the generated structures and the relaxed structures in the dataset.
In Table 3, the DP-CDVAE model achieves the highest match rate and the lowest \(\langle\delta_{\rm rms}\rangle\) and \(\Delta E_{\rm rms}\). Although the CDVAE+Fourier model achieves the lowest \(\Delta V_{\rm rms}\), the DP-CDVAE model demonstrates the \(\Delta V_{\rm rms}\) that is comparable to that of the CDVAE+Fourier model.
## III Discussion
The DP-CDVAE models significantly enhance the generation performance, particularly in terms of property statistics, while maintaining comparable COVs to those of CDVAE. Specifically, for Carbon-24 and MP-20 datasets, the density distributions between the generated and ground-truth structures from DP-CDVAE models exhibit substantially smaller Wasserstein distance compared those of the CDVAE model (see Table 2). The \(\Delta V_{\rm rms}\) of the DP-CDVAE model presented in Table 3 is significantly lower than that of the original CDVAE. This is corresponding to smaller Wasserstein distance of \(\rho\) shown in Table 2. The DP-CDVAE model also demonstrates significantly smaller \(\langle\delta_{\rm rms}\rangle\) than the original CD
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Models & \multicolumn{3}{c}{Match rate (\%) \(\uparrow\)} & \multicolumn{3}{c}{\(\langle\delta_{\rm rms}\rangle\downarrow\)} \\ \cline{2-7} & Perov-5 & Carbon-24 & MP-20 & Perov-5 & Carbon-24 & MP-20 \\ \hline FTCP [8] & **99.34** & **62.28** & **69.89** & 0.0259 & 0.2563 & 0.1593 \\ CDVAE [8] & 97.52 & 55.22 & 45.43 & 0.0156 & **0.1251** & **0.0356** \\ DP-CDVAE & 90.04 & 45.57 & 32.42 & 0.0212 & 0.1513 & 0.0383 \\ DP-CDVAE+\(N_{a}\) & 91.86 & 50.99 & 36.17 & **0.0149** & 0.1612 & 0.0560 \\ DP-CDVAE+GINE & 80.50 & 49.02 & 34.08 & 0.0214 & 0.1599 & 0.0455 \\ DP-CDVAE+\(N_{a}\)+GINE & 88.30 & 38.28 & 37.44 & 0.0180 & 0.1921 & 0.0525 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Reconstruction performance.
VAE. These suggest that our lattice generation closely approximates the relaxed lattice, while also achieving atomic positions that closely resemble the ground-state configuration. Additionally, the distribution of the number of elements in the unit cells is relatively similar to that of the data in the test set, particularly in the results from the models with GINE encoder. This could be attributed to the capability of GINE to search for graph isomorphism [36].
Moreover, \(\Delta E\) is the energy difference between the generated structures and their corresponding relaxed structures. The ground-state energy represents a local minimum that the generated structure is relaxed towards. A value of \(\Delta E\) close to zero indicates that the generated structure is in close proximity to the ground state. In Table 3, it can be observed that our model achieves the \(\Delta E_{\text{rms}}\) value of 400.7 meV/atom which is about 68.1 meV/atom lower than the \(\Delta E_{\text{rms}}\) of CDVAE. The mode of \(\Delta E\) of our model is 64 - 128 meV/atom, which is lower than its root-mean-square value (see Fig. S2). Nevertheless, both the \(\Delta E_{\text{rms}}\) and the mode of \(\Delta E\) exhibit relatively high values. In many cases, the formation energy of synthesized compounds is reported to be above the convex hull less than 36 meV/atom [37; 38; 39]. To obviate the need for time-consuming DFT relaxation, it is essential for the generated structures to be even closer to the ground state. Therefore, achieving lower \(\Delta E_{\text{rms}}\) values remains a milestone for future work.
## IV Methods
### Diffusion probabilistic model
In the diffusion probabilistic model, the data distribution is gradually perturbed by noise in the forward process until it becomes a normal distribution at late times. In this study, the distribution of the fractional coordinate (\(\mathbf{r}_{f}\)) is considered since their values of every crystal structures distribute over the same range,i.e., \(\mathbf{r}_{f}\in[0,1)^{3}\). The Markov process is assumed for the forward diffusion such that the joint distribution is a product of the conditional distributions conditioned on the knowledge of the fractional coordinate at the previous time step:
\[\begin{split} q(\mathbf{r}_{1:T}|\mathbf{r}_{0})& =\prod_{t=1}^{T}q(\mathbf{r}_{t}|\mathbf{r}_{t-1}),\\ q(\mathbf{r}_{t}|\mathbf{r}_{t-1})&=\mathcal{N} (\mathbf{r}_{t};\sqrt{\alpha_{t}}\mathbf{r}_{t-1},(1-\alpha_{t})\mathbf{I}), \end{split} \tag{1}\]
\begin{table}
\begin{tabular}{c l c c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{Validity (\%) \(\uparrow\)} & \multicolumn{3}{c}{COV (\%) \(\uparrow\)} & \multicolumn{3}{c}{Property statistics \(\downarrow\)} \\ \cline{3-10} & & Struc. & Comp. & R. & P. & \(\rho\) & \(E_{form}\) & \# elem. \\ \hline \multirow{7}{*}{Perov-5} & G-SchNet [8] & 99.92 & 98.79 & 0.18 & 0.23 & 1.625 & 4.746 & 0.0368 \\ & P-G-SchNet [8] & 79.63 & **99.13** & 0.37 & 0.25 & 0.2755 & 1.388 & 0.4552 \\ & CDVAE [8] & **100** & 98.59 & 99.45 & **98.46** & 0.1258 & **0.0264** & 0.0628 \\ & DP-CDVAE & **100** & 98.07 & 99.52 & 98.39 & 0.1807 & 0.0713 & 0.0767 \\ & DP-CDVAE+\(N_{\text{a}}\) & 99.99 & 97.34 & **99.55** & 97.22 & **0.1027** & 0.0287 & 0.0437 \\ & DP-CDVAE+GINE & **100** & 96.11 & 98.94 & 95.63 & 0.2114 & 0.0832 & 0.0498 \\ & DP-CDVAE+\(N_{\text{a}}\)+GINE & **100** & 97.09 & 99.52 & 96.73 & 0.1368 & 0.0425 & **0.0210** \\ \hline \multirow{7}{*}{Carbon-24} & G-SchNet [8] & 99.94 & – & 0.00 & 0.00 & 0.9427 & 1.320 & – \\ & P-G-SchNet [8] & 48.39 & – & 0.00 & 0.00 & 1.533 & 134.7 & – \\ & CDVAE [8] & **100** & – & 99.80 & **83.08** & 0.1407 & 0.2850 & – \\ & DP-CDVAE & 99.92 & – & 99.56 & 77.98 & 0.1109 & **0.2596** & – \\ & DP-CDVAE+\(N_{\text{a}}\) & 99.73 & – & 99.61 & 72.29 & 0.1080 & 0.3030 & – \\ & DP-CDVAE+GINE & 99.50 & – & **100** & 68.13 & **0.0977** & 0.3623 & – \\ & DP-CDVAE+\(N_{\text{a}}\)+GINE & 98.61 & – & 99.21 & 65.13 & 0.1267 & 0.4136 & – \\ \hline \multirow{7}{*}{MP-20} & G-SchNet [8] & 99.65 & 75.96 & 38.33 & 99.57 & 3.034 & 42.09 & 0.6411 \\ & P-G-SchNet [8] & 77.51 & 76.40 & 41.93 & **99.74** & 4.04 & 2.448 & 0.6234 \\ & CDVAE [8] & **100** & **86.70** & 99.15 & 99.49 & 0.6875 & 0.2778 & 1.432 \\ \cline{1-1} & DP-CDVAE & 99.59 & 85.44 & 98.93 & 98.96 & 0.4037 & 0.1547 & 0.9179 \\ \cline{1-1} & DP-CDVAE+\(N_{\text{a}}\) & 99.81 & 84.95 & 99.36 & 99.33 & 0.4889 & 0.1800 & 1.053 \\ \cline{1-1} & DP-CDVAE+GINE & 99.82 & 81.92 & **99.48** & 99.00 & 0.2785 & 0.0603 & **0.5679** \\ \cline{1-1} & DP-CDVAE+\(N_{\text{a}}\)+GINE & 99.90 & 83.89 & 95.51 & 99.27 & **0.1790** & **0.0522** & 0.6909 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Generation performance
where \(\mathbf{r}_{0}\sim q(\mathbf{r}_{f})\) which is the data distribution of the fractional coordinate, \(t\) is the discretized diffusion time step, \(T\) is the final diffusion time, \(\alpha_{t}\) is a noise schedule with a sigmoid scheduler [40], and the conditional \(q(\cdot|\cdot)\) is a Gaussian kernel due to the Markov diffusion process assumption. Then \(\mathbf{r}_{t}\) can be expressed in the Langevin's form through the reparameterization trick as
\[\mathbf{r}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{r}_{0}+\sqrt{1-\bar{\alpha}_{t}} \boldsymbol{\epsilon}, \tag{2}\]
where \(\boldsymbol{\epsilon}\sim\mathcal{N}(0,\mathbf{I})\), and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\). This update rule does not necessitate \(\mathbf{r}_{t}\) to remain in \([0,1)^{3}\); however, we can impose the periodic boundary condition for the fractional coordinate so that
\[\mathbf{r}_{f_{t}}=\boldsymbol{\pi}(\mathbf{r}_{t})\coloneqq\mathbf{r}_{t}- \lfloor\mathbf{r}_{t}\rfloor. \tag{3}\]
Then, \(\mathbf{r}_{f_{t}}\in[0,1)^{3}\).
In the reverse diffusion process, if the consecutive discretized time step is small compared to the diffusion timescale, the reverse coordinate trajectories can be approximately sampled also from the product of Gaussian diffusion kernels as
\[\begin{split} p_{\theta}(\mathbf{r}_{0:T})&=p( \mathbf{r}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{r}_{t-1}|\mathbf{r}_{t}),\\ p_{\theta}(\mathbf{r}_{t-1}|\mathbf{r}_{t})&= \mathcal{N}(\mathbf{r}_{t-1};\boldsymbol{\mu}_{\theta},\sigma_{t}^{2}\mathbf{ I}),\end{split} \tag{4}\]
where
\[\begin{split}\boldsymbol{\mu}_{\theta}&=\frac{1}{ \sqrt{\bar{\alpha}_{t}}}\Big{(}\mathbf{r}_{t}-\frac{1-\alpha_{t}}{\sqrt{1- \bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}\Big{)},\\ \sigma_{t}^{2}&=\frac{(1-\bar{\alpha}_{t-1})(1- \alpha_{t})}{1-\bar{\alpha}_{t}}.\end{split} \tag{5}\]
The reverse conditional distribution can be trained by minimizing the Kullback-Leibler divergence between \(p_{\theta}(\mathbf{r}_{t-1}|\mathbf{r}_{t})\) and \(q(\mathbf{r}_{t-1}|\mathbf{r}_{t},\mathbf{r}_{0})\), the posterior of the corresponding forward process [20]. We use GemNetT for the diffusion network to train the parametrized noise \(\boldsymbol{\epsilon}_{\theta}\)[41]. Then, the coordinate in the earlier time can be sampled from \(\mathbf{r}_{t-1}\sim p_{\theta}(\mathbf{r}_{t-1}|\mathbf{r}_{t})\), whose corresponding reverse Langevin's dynamics reads
\[\mathbf{r}_{t-1}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}\Big{(}\mathbf{r}_{t}-\frac {1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}\Big{)} +\sigma_{t}\boldsymbol{\epsilon}^{\prime}, \tag{6}\]
where \(\boldsymbol{\epsilon}^{\prime}\sim\mathcal{N}(0,\mathbf{I})\). Crucially, we empirically found that the final reconstruction performance is considerably improved when we impose the periodic boundary condition on the fractional coordinate at every time step such that \(\mathbf{r}_{t-1}\sim p_{\theta}(\mathbf{r}_{t-1}|\mathbf{r}_{f_{t}})\) and \(\alpha_{t}\) in the first term of Eq. (6) is replaced by \(\bar{\alpha}_{t}\). Namely, in our modified reverse process, the coordinate is sampled from
\[\begin{split}\mathbf{r}_{t-1}&=\frac{1}{\sqrt{\bar{ \alpha}_{t}}}\Big{(}\mathbf{r}_{f_{t}}-\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{ \epsilon}_{\theta}\Big{)}+\sigma_{t}\boldsymbol{\epsilon}^{\prime},\\ \mathbf{r}_{f_{t}}&=\boldsymbol{\pi}(\mathbf{r}_{t} ).\end{split} \tag{7}\]
An illustration of denoising atomic coordinates with Eq. (7) is demonstrated in Fig. 2. The model performance using Eq. (6) is shown in Table S1, whereas the performance using Eq. (7) is shown in Table S1.
### Graph neural networks
Graph neural networks architecture facilitate machine learning of crystal graphs \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), graph representations of crystal structures. \(\mathcal{V}\) and \(\mathcal{E}\) are sets of nodes and edges, respectively, defined as
\[\begin{split}\mathcal{V}&=\{(\mathbf{f}_{n},\mathbf{ r}_{c_{n}})\mid\mathbf{f}_{n}\in\mathbb{R}^{M},\;\mathbf{r}_{c_{n}}=\mathbf{L} \mathbf{r}_{f_{n}}\in\mathbb{R}^{3}\},\\ \mathcal{E}&=\{\Delta\mathbf{r}_{c_{m}}^{(\mathbf{T} )}\mid\Delta\mathbf{r}_{c_{mn}}^{(\mathbf{T})}=\mathbf{r}_{c_{m}}-\mathbf{r}_{c _{n}}+\mathbf{T};\;\mathbf{r}_{c_{m}},\mathbf{r}_{c_{n}}\in\mathbb{R}^{3}\}, \end{split}\]
where \(n\) and \(m\) are indices of atoms in a crystal structure, \(\mathbf{f}_{n}\) is a vector of \(M\) features of an atom in the unit cell, \(\mathbf{T}\) is a translation vector, and \(\mathbf{L}\) is a lattice matrix that converts a fractional coordinate \(\mathbf{r}_{f_{n}}\) into its atomic Cartesian coordinate \(\mathbf{r}_{c_{n}}\). The atomic features, fractional coordinates, and atomic Cartesian coordinates of the crystal structure are vectorized (concatenated) as \(\mathbf{f}=(\mathbf{f}_{1},\ldots,\mathbf{f}_{N_{a}})\in\mathbb{R}^{N_{a}\times M}\), \(\mathbf{r}_{f}=(\mathbf{r}_{f_{1}},\ldots,\mathbf{r}_{f_{N_{a}}})\in\mathbb{R}^{ N_{a}\times 3}\), and \(\mathbf{r}_{c}=(\mathbf{r}_{c_{1}},\ldots,\mathbf{r}_{c_{N_{a}}})\in\mathbb{R}^{ N_{a}\times 3}\). Three graph neural networks implemented in this work are DimeNet\({}^{++}\)[27], GINE [28], and GemNetT [41]. DimeNet\({}^{++}\) and GINE are employed for encoders, and GemNetT is used for a diffusion network. DimeNet\({}^{++}\) and GemNetT, whose based architecture concerns geometry of the graphs, are rotationally equivariant. GemNetT has been devised by incorporating the polar angles between four atoms into DimeNet\({}^{++}\). This development grants GemNetT a higher degree of expressive power compared to DimeNet\({}^{++}\)[42]. Furthermore, GINE has been developed to distinguish a graph isomorphism, but not graph geometry nor the distance between nodes, which is important for our study. Thus we supplement the edge attributes into GINE with the distances between atoms, i.e. \(\mathcal{E}=\{||\Delta\mathbf{r}_{c_{mn}}^{(\mathbf{T})}||\}\).
### DP-CDVAE's architecture
The forward process of DP-CDVAE model is illustrated in Fig. 1. The model is a combination of two generative models, which are VAE and diffusion probabilistic model. The pristine crystal structures consist of the fractional coordinate (\(\mathbf{r}_{f}\)), the lattice matrix (\(\mathbf{L}\)), ground-truth atomic type (\(Z\)), and the number of atoms in a unit cell (\(N_{a}\)). For crystal graphs of the encoders, their node features are \(\mathbf{f}=Z\). The number of atoms in a unit cell \(N_{a}\) is encoded through multilayer perceptron before concatenated with the latent features from other graph encoders. They are encoded to train \(\boldsymbol{\mu}_{\phi}\) and \(logvar_{\phi}\) where \(\phi\) is a learnable parameter of the encoders. The latent variables (\(\mathbf{z}\)) can be obtained by
\[\mathbf{z}=\boldsymbol{\mu}_{\phi}+e^{logvar_{\phi}}\boldsymbol{\epsilon}^{\prime}, \tag{8}\]
where \(\boldsymbol{\epsilon}^{\prime}\sim\mathcal{N}(0,\mathbf{I})\). Then, \(\mathbf{z}\) will be decoded to compute the lattice lengths and angles, which then yield the lattice matrix (\(\mathbf{L_{z}}\)), \(N_{a}\), and \(\mathbf{A_{z}}\). In the original CDVAE, \(\mathbf{A_{z}}\) is the probability vector indicating the fraction of each
atomic type in the compound and is used to perturb \(Z\) by
\[Z_{t}\sim\mathcal{M}(\text{softmax}(\mathbf{A}+\sigma_{t}^{\prime}\mathbf{A_{z}})) \tag{9}\]
where \(\mathcal{M}\) is a multinomial distribution, \(\mathbf{A}\) is a one-hot vector of ground-truth atomic type \(Z\), and \(\sigma_{t}^{\prime}\) is the variance for perturbing atomic types at time \(t\), which is distinct from \(\sigma_{t}\) used for perturbing the atomic coordinates. Similar to the original CDVAE, \(\sigma_{t}^{\prime}\) is selected from the range of [0.01, 5].
For the diffusion network, the input structures are constructed from \(\mathbf{r}_{f_{t}}\), \(Z_{t}\), and \(\mathbf{L_{z}}\) where the (Cartesian) atomic coordinates at time \(t\) are computed by \(\mathbf{r}_{c_{t}}=\mathbf{L_{z}}\mathbf{r}_{f_{t}}\). These are then utilized by the crystal graphs for the diffusion network, whose node features are \(\mathbf{f}_{t}=(Z_{t},\mathbf{F}_{t},\mathbf{z},t)\) where \(\mathbf{F}_{t}\) is a Fourier embedding feature of \(\mathbf{r}_{t}\) (see section S3). As proposed by Ho et al. [20], we use the simple loss to train the model such that
\[\mathcal{L}_{simple}=\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\mathbf{r}_{c_{t }},\mathbf{f}_{t})\|^{2}. \tag{10}\]
Since the diffusion model is trained to predict both \(\mathbf{\epsilon}\) and \(\mathbf{A}\), so the loss of the diffusion network is
\[\mathcal{L}_{diff}=\mathcal{L}_{simple}+\lambda\mathcal{L}_{CE}(\mathbf{A}, \mathbf{A}_{\theta}(\mathbf{r}_{c_{t}},\mathbf{f}_{t})), \tag{11}\]
where \(\mathcal{L}_{CE}\) is the cross entropy loss, \(\lambda\) is a loss scaling factor, and \(t\in\{1,...,T\}\) where \(T=1000\). In this work, \(t\) is randomly chosen for each crystal graph and randomly reinitialized for each epoch in the training process. The total loss in the trainig process is shown in Eq. S1.
In the reverse diffusion process, we measure the model performance of two tasks: reconstruction and generation tasks. For the former task, \(\mathbf{z}\) is obtained from Eq. (8) by using the ground-truth structure as an input of the encoders. For the latter task, \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\), which is then used to predict \(N_{a}\), \(\mathbf{L_{z}}\), \(\mathbf{A_{z}}\), and concatenate with the node feature of the crystal graph in the diffusion network. At the initial step, \(t=T\), \(Z_{T}\) is sampled from the highest probability of \(\mathbf{A_{z}}\), and the final-time coordinate is obtained from sampling a Gaussian distribution, i.e. \(\mathbf{r}_{T}\sim\mathcal{N}(0,\mathbf{I})\). The coordinates can be denoised using Eq. (7), and the predicted atomic types are updated in each reversed time step by \(\text{argmax}(\mathbf{A}_{\theta})\).
### DFT calculations
The Vienna _ab initio_ Simulation Package (VASP) was employed for structural relaxations and energy calculations based on DFT [43, 44]. The calculations were conducted under the generalized gradient approximation (GGA) and the project augmented wave (PAW) method [45, 46]. The thresholds for energy and force convergence were set to \(10^{-5}\) eV and \(10^{-5}\) eV/A, respectively. The plane-wave energy cutoff was set to 800 eV, and the Brillouin zone integration was carried out on a k-point mesh of 5 \(\times\) 5 \(\times\) 5 created by the Monkhorst-Pack method [47, 48].
###### Acknowledgements.
This research project is supported by the Second Century Fund (C2F), Chulalongkorn University. This Research is funded by Thailand Science research and Innovation Fund Chulalongkorn University (IND66230002) and National Research Council of Thailand (NRCT): (NRCT5-RSA63001-04). T.C. acknowledges funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation [grant number B05F650024]. The authors acknowledge high performance computing resources including NVIDIA A100 GPUs from Chula Intelligent and Complex Systems Lab, Faculty of Science, and from the Center for AI in Medicine (CU-AIM), Faculty of Medicine, Chulalongkorn
Figure 2: The schematic depicting the reverse diffusion process of the DP-CDVAE model. Initially, atomic coordinates are sampled from a normal distribution and subsequently mapped into the unit cell (dashed-line box) using the periodic boundary-imposing function \(\mathbf{\pi}(\cdot)\). White circles outside the unit cell depict the atomic coordinates prior to the periodic boundary condition is imposed, while colored circles represent atoms that are inside the unit cell of interest. The action of \(\mathbf{\pi}(\cdot)\) on the atoms outside the unit cell is represented by an arrow that translates the white circles into colored circles in the unit cell. Left to right show the reverse direction of the arrow of time, depicting the reverse diffusion process.
University, Thailand. We acknowledge the supporting computing infrastructure provided by NSTDA, CU, CUAASC, NSRF via PMUB [B05F650021, B37G660013] (Thailand). URL:www.e-science.in.th. The Computational Materials Physics (CMP) Project, SLRI, Thailand, is acknowledged for providing computational resource.
|
2310.01632 | Imitation Learning from Observation through Optimal Transport | Imitation Learning from Observation (ILfO) is a setting in which a learner
tries to imitate the behavior of an expert, using only observational data and
without the direct guidance of demonstrated actions. In this paper, we
re-examine optimal transport for IL, in which a reward is generated based on
the Wasserstein distance between the state trajectories of the learner and
expert. We show that existing methods can be simplified to generate a reward
function without requiring learned models or adversarial learning. Unlike many
other state-of-the-art methods, our approach can be integrated with any RL
algorithm and is amenable to ILfO. We demonstrate the effectiveness of this
simple approach on a variety of continuous control tasks and find that it
surpasses the state of the art in the IlfO setting, achieving expert-level
performance across a range of evaluation domains even when observing only a
single expert trajectory without actions. | Wei-Di Chang, Scott Fujimoto, David Meger, Gregory Dudek | 2023-10-02T20:53:20Z | http://arxiv.org/abs/2310.01632v2 | # Imitation Learning from Observation through Optimal Transport
###### Abstract
Imitation Learning from Observation (ILFO) is a setting in which a learner tries to imitate the behavior of an expert, using only observational data and without the direct guidance of demonstrated actions. In this paper, we re-examine the use of optimal transport for IL, in which a reward is generated based on the Wasserstein distance between the state trajectories of the learner and expert. We show that existing methods can be simplified to generate a reward function without requiring learned models or adversarial learning. Unlike many other state-of-the-art methods, our approach can be integrated with any RL algorithm, and is amenable to ILFO. We demonstrate the effectiveness of this simple approach on a variety of continuous control tasks and find that it surpasses the state of the art in the IIFO setting, achieving expert-level performance across a range of evaluation domains even when observing only a single expert trajectory _without_ actions.
## I Introduction
Imitation Learning (IL) is a widely used and effective tool for teaching robots complex behaviors. Although Reinforcement Learning (RL) has demonstrated success in learning motor skills from scratch in real-world systems [1, 2], Imitation Learning (IL) remains a proven and practical way to learn behaviors from demonstrations, without the need for a hand-tuned and engineered reward signal required for RL. However, acquiring access to expert actions can be highly impractical. For example, robotic systems which are too challenging to smoothly teleoperate, or in applications where the action spaces of the demonstrator and the imitator do not match, such as in Sim-to-Real problems [3].
Imitation Learning from Observation (ILFO) eliminates the need for demonstrated actions by learning behaviors from sequences of expert states, instead of requiring both expert states and actions. While IL algorithms often rely on teleoperation to demonstrate behavior, ILFO algorithms learn from observational data alone, similar to how humans learn new skills by watching others. ILFO algorithms can reduce the cost of data collection of robot behavior, making them instrumental for deploying IL in complex real-world robotics systems, and open the door to more complex downstream tasks, such as learning from visual demonstrations.
Moving to the observation-only space however, introduces new challenges. While IL algorithms can learn by matching demonstrated actions, ILFO algorithms require more exploration to succeed [4], as they can only imitate the state trajectories of an expert indirectly, through observed outcomes of an unknown state transition function, and without the direct guidance of demonstrated actions. This emphasis on exploration creates a further challenge in that the states visited by the learner are more likely to be distant or non-overlapping with those of the expert. This is problematic for imitation via distribution matching [5, 6, 7], as the widely used KL divergence is ill-defined for non-overlapping distributions and may provide a poor signal when the behavior of the learner is distinct from the expert. While IL methods can circumvent this problem by accelerating early learning with behavior cloning, ILFO methods must deal with randomly initialized policies, which are unlikely to behave similarly to an expert demonstrator.
The field of optimal transport has garnered much attention in recent years, with theoretical and computational developments allowing its use for evaluating distances between distributions defined on high-dimensional metric spaces [8, 9]. The Wasserstein distance, in particular, can compare non-overlapping distributions, as well as quantify the spatial shift between the supports of the distributions. These properties make it a natural alternative to KL divergence-based objectives used by existing methods. Moreover, the Wasserstein distance can be computed without requiring separate models or learned components. This makes the Wasserstein distance more computationally efficient and conceptually simpler than other methods that rely on incremental adversarial signals learned via online interaction [5, 10, 11].
Prior work based on the Wasserstein distance for IL or ILFO rely on numerous techniques, such as adversarial or learned components, or designed for sample-inefficient on-policy RL algorithms. However, we find that we can significantly simplify an existing approach, Sinkhorn Imitation Learning (SIL) [11], removing adversarial components which rely on on-policy learning. The resulting approach, Observational Off-Policy Sinkhorn (OOPS) generates a reward function for _any_ RL algorithm, which minimizes the Wasserstein distance between expert and learner state trajectories. We benchmark our approach against existing methods proposed to optimize the Wasserstein distance [11, 12], as well as current state-of-the-art ILFO algorithms [6, 13] on a variety of continuous control tasks. Our approach outperforms state-of-the-art methods for ILFO, achieving near-expert performance in every evaluated task with only a single trajectory, without observing any actions. To facilitate reproducibility, all of our code is open-sourced1.
Footnote 1: [https://github.com/weidi-chang/OOPS](https://github.com/weidi-chang/OOPS)
## II Background
**Setting.** Our task is formulated by an episodic finite-horizon MDP \((\mathcal{S},\mathcal{A},\mathcal{P},r,p_{0},T)\), with state space \(\mathcal{S}\), action space \(\mathcal{A}\), transition dynamics \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to[0,1]\)
reward function \(r:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), initial state distribution \(p_{0}:\mathcal{S}\rightarrow[0,1]\), and \(T\) the horizon. While the overarching objective is to maximize reward, in the Imitation Learning from Observation (ILFO) setting, the agent never observes the true reward. Instead, ILFO algorithms must use sequences of states (trajectories \(\tau\)), generated by an unknown expert, to infer a reward signal, or objective. We therefore only assume access to a dataset \(D_{E}\) of \(N\) state-only trajectories, \(D_{E}=\{\tau_{0},\tau_{1},...,\tau_{N-1}\}\).
**Optimal Transport.** Optimal Transport (OT) seeks to compute a matching between the two measures (source and target) while minimizing the transport cost [14]. In our work, we aim to minimize the distance between the distribution of trajectories defined by the learner and the expert.
Writing out trajectories in terms of their transitions \(\tau=\{(s_{0},s_{1}),(s_{1},s_{2}),...,(s_{T-1},s_{T})\}\), and viewing each transition as a datapoint, forms a discrete measure \(\alpha\) over the state transition space \(\mathcal{S}\times\mathcal{S}\), with weights \(\mathbf{a}\) and locations \((s_{i},s_{i+1})_{E}\in\mathcal{S}\times\mathcal{S}\) for the expert: \(\alpha=\sum_{i=0}^{T}a_{i}\sigma_{(s_{i},s_{i+1})_{E}}\) where \(\sigma_{(s_{i},s_{i+1})}\) is the Dirac delta function at position \((s_{i},s_{i+1})\). Similarly for the learner, with weights \(\mathbf{b}\) and locations \((s_{i},s_{i+1})_{\pi}\) for the learner, the trajectory rollout forms the measure \(\beta=\sum_{i=0}^{T}b_{i}\sigma_{(s_{i},s_{i+1})_{\pi}}\)[15]. In each trajectory we consider each timestep as being equally important, and as such restrict the weight vectors \(\mathbf{a}\) and \(\mathbf{b}\) to the uniform weight vectors: \(\sum_{i=0}^{T}a_{i}=1,a_{i}=\frac{1}{T}\,\forall\,0<i<T\), and \(\sum_{i=0}^{T}b_{i}=1,b_{i}=\frac{1}{T}\,\forall\,0<i<T\).
While the Monge formulation of OT enforces a one-to-one matching between measures, the Kantorovich formulation relaxes the OT problem by allowing each source point to split mass: the mass at any source point may be distributed across several locations [14, 15]. This provides the Wasserstein distance (or Kantorovich metric) over a distance metric \(d\):
\[W_{p}(\alpha,\beta):=\left(\min_{P}\left(\sum_{i}^{T}\sum_{j}^{T}d(\alpha_{i},\beta_{j})^{p}P_{i,j}\right)\right)^{\frac{1}{p}}, \tag{1}\]
which uses a coupling matrix \(P\in\mathbb{R}_{+}^{n\times m}\), where \(P_{i,j}\) is the mass flowing from bin \(i\) to bin \(j\):
\[P\in\mathbb{R}^{n\times m}\text{ such that }\sum_{j}P_{i,j}=\mathbf{a}\text{ and }\sum_{i}P_{i,j}=\mathbf{b}. \tag{2}\]
The optimal coupling \(P\) between \(\alpha\) and \(\beta\) gives us the minimal cost transport plan between the measures defined by the trajectories \(\tau_{\pi}\) and \(\tau_{E}\).
**Sinkhorn distance.** The Sinkhorn distance \(W_{\text{Sk}}\) is an entropy regularized version of the Wasserstein distance [8], for \(W_{1}\), with \(p=1\) this equals:
\[W_{\text{Sk}}(\tau_{\pi},\tau_{E}):=\min_{\tilde{P}}\sum_{i=0}^{T}\sum_{j=0}^ {T}d(\alpha_{i},\beta_{j})\tilde{P}_{i,j}-\lambda\mathcal{H}(\tilde{P}),\]
where the entropy term \(\mathcal{H}(\tilde{P}):=\sum_{i=0}^{T}\sum_{j=0}^{T}\tilde{P}_{ij}\log\tilde{P }_{ij}\). For any given value of \(\lambda>0\), the optimal coupling matrix \(\tilde{P}\) for \(W_{\text{Sk}}\) can be computed efficiently using the iterative Sinkhorn algorithm [16]. At the cost of convergence speed, as \(\lambda\) approaches 0, the Wasserstein distance is recovered, while increasing its value blurs out the transport matrix and spreads the mass between the two measures. This approximation is useful as it provides a computationally efficient method for estimating the optimal coupling matrix for the Wasserstein distance \(\tilde{P}\approx P\) for small \(\lambda\), where \(W_{\text{Sk}}\) upper bounds \(W_{1}\).
## III Related Work
**Imitation Learning.** Learning from Demonstrations (LfD) approaches can be generally classified into two types of approaches: IL methods which learn directly from expert data and Inverse Reinforcement Learning (IRL) methods [17] which first infer a reward function to subsequently optimize with RL. GAIL [5] and related methods [10, 18] leverage adversarial training. Such methods have been shown to optimize a distribution matching objective between the state-action distribution of the learner and the expert, in terms of various probability divergence metrics [5, 6, 19, 7]. Each divergence objective leads to distinct imitative behavior (zero-forcing or mean-seeking or both) which can be exploited in different scenarios [20]. In contrast, our approach minimizes a Wasserstein distance-based objective, better suited for the ILFO context we operate in.
**Imitation Learning from Observations.** Due to the challenging nature of ILFO, many methods rely on learning a model, via an inverse dynamics model used to infer the missing actions of the expert [21], use objectives based on the transition dynamics of the expert [22, 23], or simply model the entire MDP [4]. Adversarial methods have also been adapted from the IL context [24, 25]. Another common theme is \(f\)-divergence minimization, [7] derive an approach based on the analytical gradients of \(f\)-divergences and show that different variants (FKL, RKL, JS) can be achieved through their framework. OPOLO [13], leverages off-policy learning on top of an inverse dynamics model and adversarial training. As opposed to existing methods, our approach leverages the Wasserstein distance to compute a non-adversarial and model-free reward for ILFO.
**Optimal Transport for Imitation Learning.** Minimization of the Wasserstein distance for IL has been previously considered in [26, 27] through Wasserstein Generative Adversarial Network (WGAN)-inspired approaches [28]. In an adversarial policy learning set up similarly to GAIL [5] and by restricting the discriminator to be a 1-Lipschitz function, these approaches can minimize the \(W_{1}\) distance between the policy and the reference trajectory data distribution. However these methods suffer from the drawbacks of adversarial frameworks, which are hard to optimize and tune [29], and have been shown to be poor estimators of \(W_{1}\)[30].
More recent works [11, 12, 31] make use of Wasserstein distance solvers, or related approximations, for IL. Our approach is closely based on Sinkhorn Imitation Learning (SIL) [11], which uses the Sinkhorn distance [8] to compute an entropy regularized Wasserstein distance between the state-action occupancy of the learner and expert. However, rather than use an upper bound defined by off-policy samples, they use on-policy RL [32] to optimize the cosine distance over the representation space of an adversarial discriminator
trained alongside the imitation agent. In our work, we found that we are able to vastly improve sample-efficiency by using an off-policy agent instead, and are able to consider a simpler objective without adversarial or learned representations, an aspect previously thought required for good performance. Another related approach, PWIL [12], uses a greedy formulation of the Wasserstein distance, and matches the current state-action pair \((s,a)\) to its closest counterpart in the expert demonstration dataset at every step of the rollout. In our experimental analysis (Figure 3), we show that our approximation via the Sinkhorn distance creates a tighter upper bound of the true Wasserstein distance, and that it is a crucial aspect for consistent performance. Furthermore, we focus on ILfO, giving new insights of the capabilities of OT in this context, and show that our approach matches or outperforms existing state of the art methods.
## IV Wasserstein Imitation Learning from Observational Demonstrations
In this section, we introduce our approach for minimizing the Wasserstein distance between expert trajectories and learner rollouts. To do so, we derive a reward function based on the distance between state transitions in pairs of trajectories.
**Deriving a reward from the Wasserstein distance.** With the absence of a true reward signal, the ILfO setting can be framed as a divergence-minimization problem, where the objective is to match the trajectory distributions of the learner and the expert. In our case, we choose the Wasserstein distance as a metric for this task. Unlike the widely used KL divergence, the Wasserstein distance is defined for distributions with non-overlapping support, making it amenable to scenarios where the behavior of the learner and the expert may be particularly distinct. We can define our ILfO task as minimizing the Wasserstein distance \(W_{1}\) between trajectories \(\tau_{\pi}\) sampled from the learner policy \(\pi\) and example trajectories \(\tau_{E}\) provided by an expert \(E\):
\[\min_{\pi}\mathbb{E}_{\tau_{\pi},\tau_{E}}\left[W_{1}(\tau_{\pi},\tau_{E})\right]=\min_{\pi}\mathbb{E}_{\tau_{\pi},\tau_{E}} \tag{3}\] \[\left[\min_{P}\left(\sum_{i=0}^{T}\sum_{j=0}^{T}d((s_{i},s_{i+1}) _{\pi},(s_{j},s_{j+1})_{E})P_{i,j}\right)\right].\]
As the Wasserstein distance between a pair of trajectories can be defined as a sum over each of the transitions in each trajectory, for a given coupling matrix \(P\), we can define a reward function
\[\tilde{r}_{t}(s_{t}, s_{t+1}|\tau_{\pi},\tau_{E},P):= \tag{4}\] \[-\sum_{j=0}^{T}d((s_{t},s_{t+1})_{\pi},(s_{j},s_{j+1})_{E})P_{t,j},\]
such that summing the reward \(\tilde{r}_{t}\) over a given learner trajectory \(\tau_{\pi}\) is exactly equal to the Wasserstein distance
\[W_{1}(\tau_{\pi},\tau_{E})=\min_{P}\left(-\sum_{i=0}^{T}\tilde{r}_{t}(s_{t},s_ {t+1}|\tau_{\pi},\tau_{E},P)\right). \tag{5}\]
This naturally suggests an objective that involves the summation of rewards \(\tilde{r}_{t}\) over learner trajectories
\[J(\pi|E,P):=\mathbb{E}_{\pi,E}\left[\sum_{t=0}^{T}\tilde{r}_{t}(s_{t},s_{t+1}| \tau_{\pi},\tau_{E},P)\right], \tag{6}\]
where our original objective (Equation (3)) can be recovered:
\[\max_{\pi}\min_{P}J(\pi|E,P)=\min_{\pi}\mathbb{E}_{\tau_{\pi},\tau_{E}}\left[ W_{1}(\tau_{\pi},\tau_{E})\right]. \tag{7}\]
As the optimal coupling matrix \(P\) can be approximated by the iterative Sinkhorn algorithm [16], the maximization of the objective \(J\) with any RL algorithm, can be used as a replacement to minimizing the Wasserstein distance.
**Off-policy minimization of the Wasserstein distance.** As the reward \(\tilde{r}_{t}(s_{t},s_{t+1}|\tau_{\pi_{\pi},\tau_{E}},P)\) is defined as a function of a trajectory \(\tau_{\pi_{n}}\) gathered by the learner \(\pi_{n}\), any stale reward determined by trajectories from a previous policy \(\pi_{n-m}\), \(m\geq 1\), will not correspond with the Wasserstein distance of the current learner (as noted in Equation (5)). However, working with the assumption that a policy \(\pi_{n}\) is better than any previous policy with respect to \(J\), (i.e. \(J(\pi_{n})\geq J(\pi_{n-m})\) where \(m\geq 1\)), we remark that stale rewards provide an upper bound on the Wasserstein distance:
\[W_{1}(\tau_{\pi_{n}}, \tau_{E})=\min_{P}\left(-\sum_{i=0}^{T}\tilde{r}_{t}(s_{t},s_{t+1 }|\tau_{\pi_{n}},\tau_{E},P)\right) \tag{8}\] \[\leq\min_{P}\left(-\sum_{i=0}^{T}\tilde{r}_{t}(s_{t},s_{t+1}| \tau_{\pi_{n-m}},\tau_{E},P)\right). \tag{9}\]
This means that previously collected off-policy trajectories can be used for learning in a principled manner, at the cost of the tightness of the upper bound of the Wasserstein distance. In our experimental results, we show that reusing prior data dramatically improves the sample efficiency of our algorithm over approaches which rely exclusively on online data [11].
Our final approach, Observational Off-Policy Sinkhorn (OOPS) discovers a reward function in a similar manner to existing approaches [11, 33], but in state transition space rather than state-action space. Unlike these prior approaches, OOPS avoids complexities such as adversarial learning or heuristic-based design of the reward function with multiple hyperparameters. OOPS is summarized in Algorithm 1.
```
1:Input: Dataset of expert demonstrations \(D_{E}\).
2:for episodes \(n=1,...,N\)do
3: Collect a trajectory from the environment.
4: Compute the coupling matrix \(P\) using the Sinkhorn algorithm [16].
5: Compute the reward \(\tilde{r}\) with \(D_{E}\) and \(P\) (Equation (4)).
6: Train learner with a RL algorithm, and the collected trajectories and reward \(\tilde{r}\).
```
**Algorithm 1** OOPS
### _Results_
We evaluate our algorithm on five MuJoCo locomotion benchmark environments from the OpenAI Gym suite [34, 35], and three robotics tasks [36, 37] in the ILfO setting.
For each environment, the dataset of expert trajectories \(D_{E}\) is generated via a pre-trained Soft Actor-Critic agent [38].
We use OOPS to generate a reward function for two RL algorithms, TD3 [39] and DDPG [40]. Our baselines include state of the art ILFO methods: f-IRL [7] (its best-performing FKL variant in particular) and OPOLO [13], as well as IL methods which also consider the Wasserstein distance: Primal Wasserstein Imitation Learning (PWIL) [12] and Sinkhorn Imitation Learning (SIL) [11]. In order to compare algorithms in the ILFO setting, we use the state-only version of PWIL, PWIL\(-(s)\)[12], and modify SIL [11] by replacing the action \(a\) in all pairs \((s,a)\) with the corresponding next state \(s^{\prime}\) in the transition. All algorithms are given a budget of 1M environment interactions (and 1M updates), are evaluated on 5 random seeds, and use the original implementations provided by the authors.
**Locomotion.** We report the evaluation results of our approach compared against the four baseline algorithms in Table I, varying the number of expert demonstrations used for imitation. The learning curves for the single demonstration setting are shown in Figure 1.
OOPS+TD3 consistently matches, or outperforms, all other baseline methods in every task and for every amount of expert demonstrations. We also find that OOPS+DDPG can roughly match the performance of the expert in every environment, other than Humanoid. The poor results on Humanoid are unsurprising, as previous results have demonstrated that DDPG tends to fail at the Humanoid task in the standard RL setting [38]. Regardless, since DDPG is known to underperform TD3 and SAC, matching the performance of the SAC expert suggests that the OOPS reward function can produce a stronger learning signal than the original task reward. This shows that OOPS is not dependent on the choice of RL algorithm, assuming the RL algorithm is capable of solving the desired task.
**Simulated robotics environments.** For the top three performing algorithms (OPOLO, PWIL\(-(s)\), and OOPS+TD3), we also benchmark on three robotics tasks: _BipedalWalker_, a 2D simulated terrain traversal environment, which tests the ability to deal with range sensor data. _Minitaur_ is a quadruped locomotion task based on a faithful modeling of Ghost Robotics' Minitaur platform. This PyBullet [36] environment was initially created for Sim-to-Real [37], to transfer learnt running gaits onto a real-world Minitaur_. _Minitaur_Duck_ is a variation of the Minitaur environment that
Fig. 1: Learning curves for 1 expert demonstrations across 5 random seeds. The shaded area represents a standard deviation. OOPS+TD3 consistently matches or outperforms the baseline approaches.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \# Expert Traj. & \multicolumn{2}{c}{Hopper} & Walker2d & HalfCheetah & Ant & Humanoid \\ & & 3420 \(\pm\) 36 & 4370 \(\pm\) 124 & 11340 \(\pm\) 95 & 5018 \(\pm\) 140 & 5973 \(\pm\) 17 \\ \hline \multirow{6}{*}{1} & \(f\)-IRL (FKL) & 0.91 \(\pm\) 0.03 & 0.42 \(\pm\) 0.10 & 0.63 \(\pm\) 0.13 & 0.47 \(\pm\) 0.10 & 0.47 \(\pm\) 0.32 \\ & OPOLO & 0.73 \(\pm\) 0.09 & 0.80 \(\pm\) 0.14 & 0.88 \(\pm\) 0.02 & 0.89 \(\pm\) 0.04 & 0.04 \(\pm\) 0.01 \\ & SIL \(-(s,s^{\prime})\) & 0.17 \(\pm\) 0.06 & 0.07 \(\pm\) 0.02 & -0.17 \(\pm\) 0.09 & -0.41 \(\pm\) 0.07 & 0.07 \(\pm\) 0.00 \\ & PWIL \(-(s)\) & 0.91 \(\pm\) 0.14 & 0.71 \(\pm\) 0.30 & 0.01 \(\pm\) 0.01 & 0.76 \(\pm\) 0.05 & 0.14 \(\pm\) 0.14 \\ & OOPS+DDPG (Ours) & 0.90 \(\pm\) 0.10 & **0.99 \(\pm\) 0.03** & **1.05 \(\pm\) 0.01** & **1.00 \(\pm\) 0.02** & 0.16 \(\pm\) 0.20 \\ & OOPS+TD3 (Ours) & **0.98 \(\pm\) 0.02** & **0.95 \(\pm\) 0.09** & **1.05 \(\pm\) 0.01** & **1.00 \(\pm\) 0.03** & **0.74 \(\pm\) 0.04** \\ \hline \multirow{6}{*}{4} & \(f\)-IRL (FKL) & 0.92 \(\pm\) 0.04 & 0.38 \(\pm\) 0.12 & 0.69 \(\pm\) 0.12 & 0.38 \(\pm\) 0.07 & 0.51 \(\pm\) 0.28 \\ & OPOLO & 0.72 \(\pm\) 0.15 & 0.91 \(\pm\) 0.03 & 0.90 \(\pm\) 0.02 & **1.02 \(\pm\) 0.04** & 0.20 \(\pm\) 0.12 \\ & SIL \(-(s,s^{\prime})\) & 0.25 \(\pm\) 0.07 & 0.09 \(\pm\) 0.03 & -0.22 \(\pm\) 0.14 & -0.61 \(\pm\) 0.22 & 0.07 \(\pm\) 0.01 \\ & PWL \(-(s)\) & **0.98 \(\pm\) 0.02** & 0.88 \(\pm\) 0.03 & 0.00 \(\pm\) 0.02 & 0.78 \(\pm\) 0.03 & 0.23 \(\pm\) 0.28 \\ & OOPS+DDPG (Ours) & 0.75 \(\pm\) 0.34 & **0.96 \(\pm\) 0.03** & **1.05 \(\pm\) 0.01** & **0.99 \(\pm\) 0.01** & 0.07 \(\pm\) 0.01 \\ & OOPS+TD3 (Ours) & **0.94 \(\pm\) 0.07** & **0.97 \(\pm\) 0.01** & **1.05 \(\pm\) 0.01** & **0.99 \(\pm\) 0.03** & **0.65 \(\pm\) 0.15** \\ \hline \multirow{6}{*}{10} & \(f\)-IRL (FKL) & 0.91 \(\pm\) 0.05 & 0.39 \(\pm\) 0.09 & 0.65 \(\pm\) 0.10 & 0.39 \(\pm\) 0.17 & 0.40 \(\pm\) 0.22 \\ & OPOLO & 0.66 \(\pm\) 0.08 & **0.96 \(\pm\) 0.04** & 0.95 \(\pm\) 0.01 & **1.00 \(\pm\) 0.03** & 0.16 \(\pm\) 0.06 \\ & SIL \(-(s,s^{\prime})\) & 0.17 \(\pm\) 0.09 & 0.08 \(\pm\) 0.03 & -0.20 \(\pm\) 0.09 & -0.24 \(\pm\) 0.11 & 0.07 \(\pm\) 0.00 \\ & PWL \(-(s)\) & **0.98 \(\pm\) 0.01** & 0.87 \(\pm\) 0.08 & 0.01 \(\pm\) 0.02 & 0.78 \(\pm\) 0.04 & 0.23 \(\pm\) 0.28 \\ & OOPS+DDPG (Ours) & **0.93 \(\pm\) 0.03** & 0.78 \(\pm\) 0.39 & **1.03 \(\pm\) 0.04** & 0.79 \(\pm\) 0.38 & 0.21 \(\pm\) 0.25 \\ & OOPS+TD3 (Ours) & **0.97 \(\pm\) 0.01** & **0.95 \(\pm\) 0.03** & **1.05 \(\pm\) 0.01** & **1.00 \(\pm\) 0.02** & **0.64 \(\pm\) 0.22** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Final performance of different ILFO algorithms at 1M timesteps, using 1, 4, 10 expert demonstrations. Values for each task are normalized by the average return of the expert. \(\pm\) captures the standard deviation. The highest value and any within \(0.05\) are **bolded**. The average un-normalized return of the expert is listed below each task. All results are averaged across 5 seeds and 10 evaluations.
places a duck on top of the Minitaur's body. The goal is to learn a stable gait without the duck falling off the Minitaur.
We show the results on these robotics environments with varying amounts of expert demonstrations in Table II. While OOPS+TD3 and OPOLO achieve a similar high performance when using all 10 expert demonstrations, OOPS+TD3 surpasses OPOLO when using fewer demonstrations.
### _Analysis and Ablations_
To better understand the performance of our approach, in this section, we perform additional analysis to test the quality and importance of various components.
**Accuracy of proxy reward.** OOPS generates a proxy reward function which minimizes the Wasserstein distance between the learner's trajectories and the demonstrated expert trajectories. Consequently, if the proxy reward is accurate, it should correlate strongly with the true environment reward. To evaluate this relationship, we collect a dataset of varied trajectory quality using the expert policy from the main results, with added Gaussian noise \(\mathcal{N}(0,\ell^{2})\) with \(\ell\in[0,1.5]\). Figure 2 shows the calibration plots between the proxy reward and the original task reward, showing a strong correlation in every environment.
Next we compare the quality of trajectories, in terms of the Wasserstein distance, rather than the true environment reward. In Table III, we compare the Wasserstein distance between the expert trajectories and the final policy rollouts obtained at the end of training from each of the top-3 performing methods (OOPS, OPOLO, PWIL\(-(s)\)). The Wasserstein distance is measured in three spaces: state-only \((s)\), state-transition \((s,s^{\prime})\), and state-action \((s,a)\).
We find that OOPS obtains the lowest state-action Wasserstein distance to the expert trajectories in four of the five studied environments, with Walker2d being the only disagreement with the previous experiment, as even though OOPS+TD3 obtains a better task reward in Table I, PWIL\(-(s)\) obtains a lower state-action Wasserstein distance to the expert. To further evaluate the quality of the Wasserstein distance used by PWIL, we take OOPS and replace the Sinkhorn algorithm with the greedy formulation \(W_{greedy}\) proposed by PWIL to compute the Wasserstein distance in \((s,s^{\prime})\) space. The results are reported in Table IV (under \(W_{greedy}\)), and show a loss in performance.
**Quality of estimated Wasserstein distance.** We now evaluate the quality of our estimated Wasserstein distance.
In Figure 3, we compare the quality of different approximations of the state transition Wasserstein distance: the Sinkhorn distance \(W_{\text{Sk}}\) with varying \(\lambda\), the network simplex solver \(W_{\text{simplex}}\) introduced in [41], and \(W_{\text{greedy}}\) proposed for PWIL [12]. To compare each approach, we compute the Wasserstein distance between trajectories generated by the final policy of OOPS+TD3 and the expert trajectories, using each of the various approximations. As each method results in different estimates of the coupling matrix \(P\), they all provide an upper bound on the true Wasserstein distance, where lower estimates of Wasserstein distance is a tighter bound. We find that for very low values of \(\lambda\), \(W_{\text{Sk}}\) computes lower cost couplings than \(W_{\text{simplex}}\), and up to \(\lambda\approx 0.4\) obtains better approximations than \(W_{\text{greedy}}\).
Next, we compare these three approaches for computing the Wasserstein distance in terms of performance. The results are shown in Table IV (Wasserstein Distance Solver). Unsurprisingly, large values of \(\lambda\), which approximate the Wasserstein distance \(W_{1}\) poorly, results in lower performance. For sufficiently small values of \(\lambda\), we find that OOPS+TD3
\begin{table}
\begin{tabular}{c l c c c} \hline \hline \multirow{2}{*}{\# Expert Traj.} & \multicolumn{2}{c}{BipedalWalker} & \multicolumn{2}{c}{Minitaur} & \multicolumn{2}{c}{MinitaurDuck} \\ & 318.90 \(\pm\) 9.20 & 12.36 \(\pm\) 0.75 & 10.68 \(\pm\) 1.20 \\ \hline \multirow{3}{*}{1} & OPOLO & 0.96 \(\pm\) 0.01 & 0.76 \(\pm\) 0.08 & **1.00 \(\pm\) 0.04** \\ & PWIL \(-\)\((s)\) & 0.89 \(\pm\) 0.01 & 0.53 \(\pm\) 0.19 & 0.30 \(\pm\) 0.14 \\ & OOPS+TD3 & **0.93 \(\pm\) 0.01** & **1.01 \(\pm\) 0.04** & **0.94 \(\pm\) 0.18** \\ \hline \multirow{3}{*}{4} & OPOLO & **0.96 \(\pm\) 0.01** & 0.84 \(\pm\) 0.09 & **1.01 \(\pm\) 0.03** \\ & PWLL \(-\)\((s)\) & 0.90 \(\pm\) 0.01 & 0.52 \(\pm\) 0.15 & 0.21 \(\pm\) 0.09 \\ & OOPS+TD3 & **0.92 \(\pm\) 0.01** & **0.91 \(\pm\) 0.09** & **1.02 \(\pm\) 0.05** \\ \hline \multirow{3}{*}{10} & OPOLO & **0.98 \(\pm\) 0.00** & **0.98 \(\pm\) 0.04** & **1.00 \(\pm\) 0.02** \\ & PWIL \(-\)\((s)\) & 0.88 \(\pm\) 0.01 & 0.58 \(\pm\) 0.09 & 0.15 \(\pm\) 0.16 \\ \cline{1-1} & OOPS+TD3 & **0.93 \(\pm\) 0.01** & **1.03 \(\pm\) 0.03** & **0.99 \(\pm\) 0.09** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Final performance of ILO algorithms on the robotics environments when using 1, 4, and 10 expert demonstrations. Values for each task are normalized by the average return of the expert. \(\pm\) captures the standard deviation. The highest value and any within \(0.05\) are **bolded**. The average un-normalized return of the expert is listed below each task. All results are averaged across 5 seeds and 10 evaluations.
Fig. 2: Calibration plot comparing the proxy reward with the original reward function of the benchmark domains. Each point represents the average of the sum of each reward function, over 5 trajectories. Trajectories are generated by adding noise \(\mathcal{N}(0,\ell^{2})\) to the expert policy. The calibration plots show a strong correlation between the proxy reward and the true task reward.
maintains a consistent performance. This suggests that \(\lambda\) can generally be ignored and left to a default value.
Finally, we attempt different settings for the Wasserstein distance. In Table IV we display the change in performance from OOPS when using \(W_{1}\) or \(W_{2}\) when the distance metric \(d\) is the Euclidean distance \(||\cdot||_{2}\), and \(W_{1}\) when \(d\) is the cosine distance. OOPS uses \(W_{1}\) with the square root of the Euclidean distance, which de-emphasizes large differences in magnitude in a similar fashion to the cosine distance. We find that this choice of \(d\) provides significant benefits in high dimensional domains (Humanoid) where magnitudes matter but can vary significantly. We also compare with the learnt adversarial distance metric used by SIL [11] (denoted OOPS\({}_{\text{adv}}\)), and find that while this version outperforms vanilla SIL, the adversarial component is harmful.
**Transition vs. state occupancy.** For OOPS, we define trajectories by their state-next-state transitions \((s,s^{\prime})\), rather than individual states \(s\). Matching based on states can potentially admits multiple minimums since trajectories with the same states out of order can still minimize the state occupation distributional distance. Furthermore, if the reward function is based on state _and_ action, then it is clear that only matching state occupancy is insufficient. This setting is far more common in robotics as the reward is often defined by change, such as an increase in velocity, and typically considers costs associated with the action space. Since expert actions are unavailable in the ILfO setting, we must rely on \((s,s^{\prime})\), which offers a strong approximation as the outcome \(s^{\prime}\) is defined as a function of both the state \(s\) and action \(a\). We posit that enforcing a local ordering of states provides a higher fidelity signal for ILfO. We validate this empirically in our ablations (Table IV). While using state-only occupancy matches the performance of OOPS+TD3 in most environments, there is a large drop in performance in Walker2d. This aligns with our intuition: matching by state occupancy will typically provide a good matching, but can be problematic in certain environments depending on the state representation and transition dynamics.
## VI Conclusion
In this paper, we introduce OOPS, an ILfO algorithm that produces a reward function which minimizes the Wasserstein distance between the state transition trajectory of the expert and the imitation agent. We validate our approach through an extensive set of experiments and demonstrate that OOPS surpasses the current state-of-the-art methods in the ILfO setting across benchmark and robotics domains. Combined with off-policy RL, OOPS exhibits exceptional sample efficiency and low variance in performance, key qualities for the practical deployment of IL algorithms on real systems.
Fig. 3: Wasserstein distances between the 10 final rollout trajectories of OOPS+TD3 and the expert, using different solvers for the coupling matrix \(P\) (\(W_{\text{greedy}}\) and \(W_{\text{implex}}\)) compared against the Sinkhorn distance \(W_{\text{sk}}\) when varying the parameter \(\lambda\). Results are averaged over 10 expert trajectories. The Sinkhorn distance, for low enough values of \(\lambda\) computes a tighter upper bound to the Wasserstein distance estimates than \(W_{\text{greedy}}\)[12].
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Environment Space}} & \multicolumn{3}{c}{Hopper} & \multicolumn{3}{c}{Walker2d} & \multicolumn{3}{c}{HalfCheetah} & \multicolumn{3}{c}{Ant} & \multicolumn{3}{c}{Humanoid} \\ & (s) & (s, s’) & (s, a) & (s) & (s, s’) & (s, a) & (s) & (s, s’) & (s, s’) & (s, a) & (s) & (s, s’) & (s, a) \\ \hline OPOLO & 5.91 & 8.40 & 6.33 & 3.02 & 4.32 & 3.47 & **1.60** & **2.39** & **1.91** & 4.64 & 7.24 & **5.05** & 80.75 & 114.53 & 81.90 \\ PWIL – \((s)\) & 1.74 & 2.56 & 2.38 & **2.04** & **2.96** & **2.78** & 6.48 & 9.27 & 6.93 & **3.83** & 6.00 & 5.90 & 53.52 & 76.06 & 54.94 \\ OOPS+TD3 & **1.66** & **2.38** & **2.06** & 2.28 & 3.27 & 3.02 & **1.63** & **2.41** & **2.01** & **3.83** & **5.90** & **5.17** & **25.64** & **37.03** & **27.63** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Final Wasserstein distance in state occupancy, state transition, and state-action space of the 10 final trained agent rollouts to the expert trajectories for different ILfO algorithms, lower is better. We highlight in blue the best performing agent in state-action space, considered ground truth in this experiment, and **bold** the best performing agent according to each metric. Agents were trained using 10 expert demonstration trajectories, for 1M timesteps. Distances are averaged over 10 reference expert trajectories.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Hopper & Walker2d & HalfCheetah & Ant & Humanoid \\ \hline \multicolumn{1}{c}{\begin{tabular}{c} Occupancy (Default: \((s,s^{\prime})\)) \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } \\ \hline State only & 0.10 & -33.93 & -0.57 & 0.30 & -0.94 \\ \hline \multicolumn{1}{c}{\begin{tabular}{c} Wasserstein Distance Solver (Default: \(\lambda=0.05\)) \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } \\ \hline \(W_{\text{greedy}}\) & -14.99 & -7.75 & -45.46 & -0.79 & -19.21 \\ \(W_{\text{simplog}}\) & -10.91 & -6.09 & -1.03 & -2.40 & -33.99 \\ \(\lambda=0.005\) & -3.12 & -2.65 & -0.35 & -1.48 & -2.34 \\ \(\lambda=0.1\) & -1.72 & -3.87 & -1.39 & -3.88 & -9.18 \\ \(\lambda=0.5\) & -58.64 & -25.20 & -15.09 & -10.95 & -24.44 \\ \hline \multicolumn{1}{c}{\begin{tabular}{c} Wasserstein Distance Variations (Default: \(W_{1},d=\sqrt{||\cdot||_{2}}\)) \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } \\ \hline \(W_{2},d=||\cdot||_{2}\) & -36.52 & -21.83 & -11.88 & -16.10 & -47.92 \\ \(W_{1},d=||\cdot||_{2}\) & -4.61 & -1.42 & -1.73 & -1.95 & -22.80 \\ \(W_{1},d=\cos\) & 0.13 & -10.04 & -4.09 & -2.84 & -34.63 \\ \hline \multicolumn{1}{c}{\begin{tabular}{c} Adversarial Distance (Default: Unused) \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} \\ \end{tabular} } & \multicolumn{1}{c}{
\begin{tabular}{c} \\ \end{tabular} } \\ \hline SIL – \((s,s^{\prime})\) & -82.50 & -91.61 & -119.10 & -124.88 & -90.83 \\ OOPS\({}_{\text{adv}}\) & -21.09 & -76.58 & -101.84 & -17.68 & -97.87 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Results different variations of OOPS in terms of percent difference. All results use 10 expert trajectories, and are averaged across 5 seeds and 10 evaluations. State only uses \(W_{1}\) over \((s)\) rather than \((s,s^{\prime})\). Wasserstein distance solver modifies the solver used by OOPS to determine the coupling matrix \(P\). Adversarial distance refers to the use of the adversarial distance function from SIL [11] and also includes the full SIL method for comparison. |
2306.12816 | XAI-TRIS: Non-linear image benchmarks to quantify false positive
post-hoc attribution of feature importance | The field of 'explainable' artificial intelligence (XAI) has produced highly
cited methods that seek to make the decisions of complex machine learning (ML)
methods 'understandable' to humans, for example by attributing 'importance'
scores to input features. Yet, a lack of formal underpinning leaves it unclear
as to what conclusions can safely be drawn from the results of a given XAI
method and has also so far hindered the theoretical verification and empirical
validation of XAI methods. This means that challenging non-linear problems,
typically solved by deep neural networks, presently lack appropriate remedies.
Here, we craft benchmark datasets for three different non-linear classification
scenarios, in which the important class-conditional features are known by
design, serving as ground truth explanations. Using novel quantitative metrics,
we benchmark the explanation performance of a wide set of XAI methods across
three deep learning model architectures. We show that popular XAI methods are
often unable to significantly outperform random performance baselines and edge
detection methods. Moreover, we demonstrate that explanations derived from
different model architectures can be vastly different; thus, prone to
misinterpretation even under controlled conditions. | Benedict Clark, Rick Wilming, Stefan Haufe | 2023-06-22T11:31:11Z | http://arxiv.org/abs/2306.12816v2 | # XAI-TRIS: Non-linear benchmarks
###### Abstract
The field of 'explainable' artificial intelligence (XAI) has produced highly cited methods that seek to make the decisions of complex machine learning (ML) methods 'understandable' to humans, for example by attributing 'importance' scores to input features. Yet, a lack of formal underpinning leaves it unclear as to what conclusions can safely be drawn from the results of a given XAI method and has also so far hindered the theoretical verification and empirical validation of XAI methods. This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies. Here, we craft benchmark datasets for three different non-linear classification scenarios, in which the important class-conditional features are known by design, serving as ground truth explanations. Using novel quantitative metrics, we benchmark the explanation performance of a wide set of XAI methods across three deep learning model architectures. We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge detection methods. Moreover, we demonstrate that explanations derived from different model architectures can be vastly different; thus, prone to misinterpretation even under controlled conditions.
## 1 Introduction
Only recently, a trend towards the objective empirical validation of XAI methods using ground truth data has been observed Tjoa & Guan (2020); Li et al. (2021); Zhou et al. (2022); Arras et al. (2022); Gevaert et al. (2022); Agarwal et al. (2022). These studies are, however, limited in the extent to which they permit a quantitative assessment of explanation performance, in the breadth of XAI methods evaluated, and in the difficulty of the posed 'explanation' problems. In particular, most published benchmark datasets are constructed in a way such that realistic correlations between class-dependent (e.g., the foreground or object of an image) and class-agnostic (e.g., the image background) features are excluded. In practice, such dependencies can give rise to features acting as suppressor variables. Briefly, suppressor variables have no statistical association to the prediction target on their own, yet including them may allow an ML model to remove unwanted signals (noise), which can lead to improved predictions. In the context of image or photography data, suppressor variables could be parts of the background that capture the general lighting conditions. A model can use such information to normalize the illumination of the object and, thereby, improve object detection. More details on the principles of suppressor variables can be found in Conger (1974); Friedman & Wall (2005); Haufe et al. (2014); Wilming et al. (2022). Here we adopt the formal requirement that an input feature should only be considered important if it has a statistical association with the prediction target, or is associated to it by construction. In that sense, it is undesirable to attribute importance to pure suppressor features.
Yet, Wilming et al. (2022) have shown that some of the most popular model-agnostic XAI methods are susceptible to the influence of suppressor variables, even in a linear setting. Using synthetic linearly separable data defining an explicit ground truth for XAI methods and linear models, Wilming et al. showed that a significant amount of feature importance is incorrectly attributed to suppressor variables. They proposed quantitative performance metrics for an objective validation of XAI methods, but limited their study to linearly separable problems and linear models. They demonstrate that methods based on so-called activation patterns (that is, univariate mappings from predictions to input features), based on the work of Haufe et al. (2014), provide the best explanations. However, it is unclear as to what extent these results would transfer to various non-linear settings.
Thus, well-designed non-linear ground truth data comprising of realistic correlations between important and unimportant features are needed to study the influence of suppressor variables on XAI explanations in non-trivial settings, which is the purpose of this paper. We go beyond existing work in the following ways:
**First**, we design one linear and three non-linear binary image classification problems, in which different types and combinations of tetrominoes Golomb (1996), overlaid on a noisy background, need to be distinguished. In all cases, ground truth explanations are explicitly known through the location of the tetrominoes. Apart from the linear case, these classification problems require (different types of) non-linear predictive models to be solved effectively.
**Second**, based on signal detection theory and optimal transport, we define two suitable quantitative metrics of 'explanation performance' designed to handle the case of few important features.
**Third**, using three different types of background noise (white, correlated, imagenet), we invoke the presence of suppressor variables in a controlled manner and study their effect on explanation performance.
**Fourth**, we evaluate the explanation performance of no less than sixteen of the most popular model-agnostic and model-specific XAI methods, across three different machine learning architectures.
Finally, we propose four model-agnostic baselines that can serve as null models for explanation performance.
## 2 Methods
### Data generation
For each scenario, we construct an individual dataset of \(64\times 64\)-sized images as \(\mathcal{D}=\left(\mathbf{x}^{(n)},y^{(n)}\right)_{n=1}^{N}\), consisting of _i.i.d_ observations \((\mathbf{x}^{(n)}\in\mathbb{R}^{D},y^{(n)}\in\{0,1\})_{n=1}^{N}\), where feature space \(D=64^{2}=4096\) and \(N=40,000\). Here, \(\mathbf{x}^{(n)}\) and \(y^{(n)}\) are realizations of the random variables \(\mathbf{X}\) and \(Y\), with joint probability density function \(p_{\mathbf{X},Y}(\mathbf{x},y)\).
In each scenario, we generate a sample \(\mathbf{x}^{(n)}\) as a combination of a signal pattern \(\boldsymbol{a}^{(n)}\in\mathbb{R}^{D}\), carrying the set of truly important features used to form the ground truth for an ideal explanation, with some background noise \(\boldsymbol{\eta}^{(n)}\in\mathbb{R}^{D}\). We follow two different generative models depending on whether the two components are combined additively or multiplicatively.
Additive generation processFor additive scenarios, we define the data generation process
\[\mathbf{x}^{(n)}=\alpha(R^{(n)}\circ(H\circ\boldsymbol{a}^{(n)}))+(1-\alpha)( G\circ\boldsymbol{\eta}^{(n)}), \tag{1}\]
for the \(n\)-th sample. Signal pattern \(\mathbf{a}^{(n)}=\mathbf{a}(y^{n})\) carries differently shaped tetromino patterns depending on the binary class label \(y^{(n)}\sim\text{Bernoulli}(\nicefrac{{1}}{{2}})\). We apply a 2D Gaussian spatial smoothing filter \(H:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) to the signal component to smooth the integration of the pattern's edges into the background, with smoothing parameter (spatial standard deviation of the Gaussian) \(\sigma_{\text{smooth}}=1.5\). The Gaussian filter \(H\) can technically provide infinite support to \(\boldsymbol{a}^{(n)}\), so in practice we threshold the support at \(5\%\) of the maximum level. White Gaussian noise \(\boldsymbol{\eta}^{(n)}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{D})\), representing a non-informative background, is sampled from a multivariate normal distribution with zero mean and identity covariance \(\mathbf{I}_{D}\). For each classification problem, we define a second background scenario, denoted as CORR, in which we apply a separate 2D Gaussian spatial smoothing filter \(G:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) to the noise component \(\boldsymbol{\eta}^{(n)}\). Here, we set the smoothing parameter to \(\sigma_{\text{smooth}}=10\). The
third background type is that of samples from the ImageNet database Deng et al. (2009), denoted IMAGENET. We scale and crop images to be \(64\times 64\)-px in size, preserving the original aspect ratio. Each 3-channel RGB image is converted to a single-channel gray-scale image using the built-in Python Imaging Library (PIL) functions and is zero-centered by subtraction of the sample's mean value.
As alluded to below, we also analyze a scenario where the signal pattern \(\mathbf{a}^{(n)}\) underlies a random spatial rigid body (translation and rotation) transformation \(R^{(n)}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\). All other scenarios make use of the identity transformation \(R^{(n)}\circ\mathbf{a}^{(n)}=\mathbf{a}^{(n)}\). Transformed signal and noise components \((R^{(n)}\circ\mathbf{a}^{(n)})\) and \((G\circ\mathbf{\eta}^{(n)})\) are horizontally concatenated into matrices \(\mathbf{A}=\big{[}(R^{(1)}\circ\mathbf{a}^{(1)}),\dots,(R^{(N)}\circ\mathbf{a}^{(N)}) \big{]}\) and \(\mathbf{E}=\big{[}(G\circ\mathbf{\eta}^{(1)}),\dots,(G\circ\mathbf{\eta}^{(N)})\big{]}\). Signal and background components are then normalized by the Frobenius norms of \(\mathbf{A}\) and \(\mathbf{E}\): \((R^{(n)}\circ\mathbf{a}^{(n)})\leftarrow(R^{(n)}\circ\mathbf{a}^{(n)})/||\mathbf{A}|| _{\mathbf{F}}\) and \((G\circ\mathbf{\eta}^{(n)})\leftarrow(G\circ\mathbf{\eta}^{(n)})/||\mathbf{E}||_{ \mathbf{F}}\), where the Frobenius norm of a matrix \(\mathbf{A}\) is defined as \(||\mathbf{A}||_{\mathbf{F}}\coloneqq(\sum_{n=1}^{N}\sum_{d=1}^{D}(\mathbf{a}_{d}^{ (n)})^{2})^{1/2}\). Finally, a weighted sum of the signal and background components is calculated, where the scalar parameter \(\alpha\in[0,1]\) determines the signal-to-noise ratio (SNR).
Multiplicative generation processFor multiplicative scenarios, we define the generation process
\[\mathbf{x}^{(n)}=\Big{(}\mathbf{1}-\alpha\left(R^{(n)}\circ(H^{(n)}\circ\mathbf{a }^{(n)}))\right)\Big{)}\left(G\circ\mathbf{\eta}^{(n)}\right)\, \tag{2}\]
where \(\mathbf{a}^{(n)}\), \(\mathbf{\eta}^{(n)}\), \(R^{(n)}\), \(H\) and \(G\) are defined as above, \(\mathbf{A}\) and \(\mathbf{E}\) are Frobenius-normalized, and \(\mathbf{1}\in\mathbb{R}^{D}\).
For data generated via either process, we scale each sample \(\mathbf{x}^{(n)}\in\mathbb{R}^{D}\) to the range \([-1,1]^{D}\), such that \(\mathbf{x}^{(n)}\leftarrow\mathbf{x}^{(n)}/\max|\mathbf{x}|\), where \(\max|\mathbf{x}|\) is the maximum absolute value of any feature in the dataset.
Emergence of suppressorsNote that the correlated background noise scenario induces the presence of suppressor variables, both in the additive and the multiplicative data generation processes. A suppressor here would be a pixel that is not part of the foreground \(R^{(n)}\circ\mathbf{a}^{(n)}\), but whose activity is correlated with a pixel of the foreground by virtue of the smoothing operator \(G\). Based on previously reported characteristics of suppressor variables Conger (1974); Friedman & Wall (2005); Haufe et al. (2014); Wilming et al. (2022), we expect that XAI methods may be prone to attributing importance to suppressor features in the considered linear and non-linear settings, leading to drops in explanation performance as compared to the white noise background setting.
ScenariosWe make use of tetrominoes Golomb (1996), geometric shapes consisting of four blocks (here each being \(8\times 8\)-pixels), to define each signal pattern \(\mathbf{a}^{(n)}\in\mathbb{R}^{64\times 64}\). We choose these as the basis for signal patterns as they allow a fixed and controllable amount of features (pixels) per sample, and specifically the 'T'-shaped and 'L' shaped tetrominoes due to their four unique appearances under each 90-degree rotation. These induce statistical associations between features and target in four different binary classification problems:
Linear (LIN) and multiplicative (MULT)For the linear case, we use the additive generation model Eq. (1), and for the multiplicative case, we instead use the multiplicative generation model. In both, signal patterns are defined as a 'T'-shaped tetromino pattern \(\mathbf{a}^{\mathbf{T}}\) near the top left corner if \(y=0\) and an 'L'-shaped tetromino pattern \(\mathbf{a}^{\text{L}}\) near the bottom-right corner if \(y=1\), leading to the binary classification problem. Each pattern is encoded such that \(a_{i,j}^{\text{TL}}=1\) for each pixel in the tetromino pattern, positioned at the \(i\)-th row and \(j\)-th column of \(\mathbf{a}^{\text{T/L}}\), and zero otherwise.
Translations and rotations (RIGID)In this scenario, \(a^{\text{T/L}}\) defining each class are no longer in fixed positions but are randomly translated and rotated by multiples of 90 degrees according to a rigid body transform \(R^{(n)}\), constrained such that the entire tetromino is contained within the image. In contrast to the other scenarios, we use a 4-pixel thick tetromino here to enable a larger set of transformations, and thus increase the complexity of the problem. This is an additive manipulation in accordance with (1).
XorThe final scenario is that of an additive XOR problem, where we use both tetromino variants \(a^{\text{TLA}}\) in every sample. Transformation \(R^{(n)}\) is, once again, the identity transform here. Class membership is defined such that members of the first class, where \(y=0\), combine both tetrominoes with the background of the image either positively or negatively, such that \(\mathbf{a}^{\text{XOR++}}=\mathbf{a}^{\text{T}}+\mathbf{a}^{\text{L}}\) and \(\mathbf{a}^{\text{XOR--}}=-\mathbf{a}^{\text{T}}-\mathbf{a}^{\text{L}}\). Members of the opposing class, where \(y=1\), imprint one shape positively, and the other negatively, such that \(\mathbf{a}^{\text{XOR++}}=\mathbf{a}^{\text{T}}-\mathbf{a}^{\text{L}}\) and \(\mathbf{a}^{\text{XOR++}}=-\mathbf{a}^{\text{T}}+\mathbf{a}^{\text{L}}\). Each of the four XOR cases are equally frequently represented across the dataset.
Figure 1 shows two examples from each class of each classification problem and for the three background types - Gaussian white noise (WHITE), smoothed Gaussian white noise (CORR), and ImageNet samples (IMAGENET). Figure 4 in the supplementary material shows examples of each of the 12 scenarios across four signal-to-noise ratios (SNRs).
With each classification scenario defined, we can form the ground truth feature set of important pixels for a given input based on the positions of tetromino pixels as
\[\mathcal{F}^{+}(\mathbf{x}^{(n)})\coloneqq\left\{d\mid\left(R^{(n)}\circ(H \circ\boldsymbol{a}^{(n)})\right)_{d}\neq 0,\,d\in\{1,\ldots,4096\}\right\}. \tag{3}\]
For the LIN and MULT scenarios, each sample either contains a 'T' or an 'L' tetromino at a fixed position, corresponding to the fixed patterns \(\mathbf{a}^{\text{T}}\) and \(\mathbf{a}^{\text{L}}\). Since the absence of a tetromino at one location is just as informative as the presence of the other at another location, we augment the set of important pixels for these two settings as
\[\mathcal{F}^{+}(\mathbf{x}^{(n)})\coloneqq\left\{d\mid H\circ\boldsymbol{a}^{ \text{T}}_{d}\neq 0\lor H\circ\boldsymbol{a}^{\text{L}}_{d}\neq 0,\,d\in\{1, \ldots,4096\}\right\}. \tag{4}\]
Note that this definition is equivalent to Eq. (3) for the XOR scenario. Moreover, it is equivalent to an operationalization of feature importance put forward by Wilming et al. (2022) for the three static scenarios LIN, MULT, and XOR. Wilming et al. define any feature as important if it has a statistical dependency to the prediction target across the studied sample. In all cases, an ideal explanation method should attribute importance only to members of the set \(\mathcal{F}^{+}(\mathbf{x}^{(n)})\).
For training each model and the subsequent analyses, we divide each dataset three-fold by a \(90/5/5\) split into a training set \(\mathcal{D}_{\text{train}}\), a validation set \(\mathcal{D}_{\text{val}}\), and a test set \(\mathcal{D}_{\text{test}}\).
### Classifiers
We use three architectures to model each classification problem. Firstly, a Linear Logistic Regression (LLR) model, which is a single-layer neural network with two output neurons and a softmax activation function. Secondly, a Multi-Layer Perceptron (MLP) with four fully-connected layers, where each of the hidden layers uses Rectified Linear Unit (ReLU) activations. The two-neuron output layer is once again softmax-activated. Finally, we define a Convolutional Neural Network (CNN) with four blocks of ReLU-activated convolutional layers followed by a max-pooling operation, with a softmax-activated two-neuron output layer. The convolutional layers are specified with a progressively
Figure 1: Examples of data for each scenario, showing differences between samples of each class.
increasing amount of filters per layer \([4,8,16,32]\), a kernel size of four, a stride of one, and zero-padding. The max-pooling layers are defined with a kernel size of two and a stride of one.
We train a given classifier \(f^{\mathbf{\theta}}:\mathbb{R}^{D}\rightarrow\mathcal{Y}\) over parameterization \(\mathbf{\theta}\) and \(\mathcal{D}_{\text{train}}\). Each network is trained over 500 epochs using the Adam optimizer without regularization, with a learning rate of \(0.0005\). The validation dataset \(\mathcal{D}_{\text{val}}\) is used at each step to get a sense of how well the model is generalizing the data. Validation loss is calculated at each epoch and used to judge when the classifier has reached optimal performance, by storing the model state with minimum validation loss. This also prevents using an overfit model. Finally, the test dataset \(\mathcal{D}_{\text{test}}\) is used to calculate the resulting model performance, and is used in the evaluation of XAI methods. We consider a classifier to have generalized the given classification problem when the resulting test accuracy is at or above a threshold of \(80\%\).
Each network is implemented in PyTorch, and also in Keras with a TensorFlow backend, so to experiment over a wider variety of XAI methods implemented using either the Captum Kokhlikyan et al. (2020) or iNNvestigate Alber et al. (2018) frameworks. The main text focuses on the former.
### XAI methods and performance baselines
We compare sixteen popular XAI methods in our analysis. The main text focuses on the results of four: Local Interpretable Model Explanations (LIME) Ribeiro et al. (2016), Layer-wise Relevance Propagation (LRP) Bach et al. (2015), SHapley Additive exPlanations (SHAP) Lundberg and Lee (2017) and Integrated Gradients Sundararajan et al. (2017).
The full list is detailed in Appendix A.5. This briefly summarizes each method, and provides the details of which library was used for implementation, Captum Kokhlikyan et al. (2020) or iNNvestigate Alber et al. (2018), as well as the specific parameterization for each method. Generally, we follow the default parameterization for each method. Where necessary, we specify the baseline \(\mathbf{b}\) as the zero input \(\mathbf{b}=\mathbf{0}\), a common choice in the field Mamalakis et al. (2022).
The input to an XAI method is a model \(f^{\mathbf{\theta}}:\mathbb{R}^{D}\rightarrow\mathbb{R}\), trained according to parameterization \(\mathbf{\theta}\) over \(\mathcal{D}_{\text{train}}\), the \(n\)-th test sample to be explained \(\mathbf{x}_{\text{test}}^{(n)}\), as well as the baseline reference point \(\mathbf{b}=\mathbf{0}\) for relevant methods. The method produces an 'explanation' \(\mathbf{s}(f^{\mathbf{\theta}},\mathbf{x}_{\text{test}}^{(n)},\mathbf{b})\in \mathbb{R}^{D}\).
We include four model-ignorant methods to generate 'baseline' importance maps for comparison with the aforementioned XAI methods. Firstly, we consider the Sobel filter, which uses both a horizontal and a vertical filter kernel to approximate first-order derivatives of data. Secondly, we use the Laplace filter, which uses a single symmetrical kernel to approximate second-order derivatives of data. Both are edge detection operators, and are given for each test sample as an input. Thirdly, we use a sample from a random uniform distribution \(U((-1,1)^{D})\). Finally, we use the rectified test data sample \(\mathbf{x}_{\text{test}}^{(n)}\) itself as an importance map.
### Explanation performance metrics
Based on the well-defined ground truth set of class-dependent features for a given sample \(\mathcal{F}^{+}(\mathbf{x}^{(n)})\), we can readily form quantitative metrics to evaluate the quality of an explanation.
#### Precision
Omitting the sample-dependence in the notation, we define precision as the fraction of the \(k=|\mathcal{F}^{+}|\) features of \(\mathbf{s}\) with the highest absolute-valued importance scores contained within the set \(\mathcal{F}^{+}\) itself, over the total number of important features \(|\mathcal{F}^{+}|\) in the sample.
#### Earth mover's distance (EMD)
The Earth mover's distance (EMD), also known as the Wasserstein metric, measures the optimal cost required to transform one distribution to another. We can apply this to the cost required to transform a continuous-valued importance map \(\mathbf{s}\) into \(\mathcal{F}^{+}\), where both are normalized to have the same mass. The Euclidean distance between pixels is used as the ground metric for calculating the EMD, with \(\operatorname{EMD}(\mathbf{s},\mathcal{F}^{+})\) denoting the cost of the optimal transport from \(\mathbf{s}\) to \(\mathcal{F}^{+}\). This follows the algorithm proposed by Bonneel et al. and the implementation of the Python Optimal Transport library Flamary
et al. (2021). We define a normalized EMD performance score as
\[\mathrm{EMD\_perf}(\mathbf{s},\mathcal{F}^{+})=1-\frac{\mathrm{EMD}(\mathbf{s}, \mathcal{F}^{+})}{\delta_{max}}, \tag{5}\]
where \(\delta_{max}\) is the maximum Euclidean distance between any two pixels.
Remark.Note that the ground truth \(\mathcal{F}^{+}(\mathbf{x})\) defines the set of important pixels based on the data generation process. It is conceivable, though, that a model uses only a subset of these for its prediction, which must be considered equally correct. Our explanation performance metrics do not fully achieve invariance in that respect. However, both are designed to de-emphasize the impact of false-negative omissions of features in the ground truth on performance, while emphasizing the impact of false-positive attributions of importance to pixels not contained in the ground truth.
## 3 Experiments
Our experiments aim to answer four main questions:
**1.** Which XAI methods are best at identifying truly important features as defined by the sets \(\mathcal{F}^{+}(\mathbf{x})\)?
**2.** Does explanation performance for each method remain consistent when moving from explaining a linear classification problem to problems with different degrees of non-linearity?
**3.** Does adding correlations to the background noise, through smoothing with the Gaussian convolution filter, negatively impact explanation performance?
**4.** How does the choice of model architecture impact explanation performance?
We generate a dataset for each scenario across a range of 20 choices of \(\alpha\), finding the'sweet spot' where average test accuracy over 10 trained models is at or above 80%. Table 1 shows the resulting \(\alpha\) values as well as the average test accuracy for each scenario, over five model trainings for datasets of size \(N=40,000\) of each scenario. For training each model and the subsequent analyses, we divide each dataset three-fold by an \(90/5/5\) split into a training set \(\mathcal{D}_{\text{train}}\), a validation set \(\mathcal{D}_{\text{val}}\), and a test set \(\mathcal{D}_{\text{test}}\). From this, we compute absolute-valued importance maps \(|\mathbf{s}|\) for the intersection of test data \(\mathcal{D}^{\text{test}}\) correctly predicted by every appropriate classifier. The full table of training results for finding appropriate SNRs can be seen in Appendix A.5.1. Experiments were run on an internal CPU and GPU cluster, with total runtime in the order of a matter of hours.
## 4 Results
Figure 2 depicts examples of absolute-valued importance maps produced for a random correctly-predicted sample for each scenario and model. Shown are results for four XAI methods (Gradient SHAP, LIME, LRP, and PatternNet respectively) for each of the three models (LLR, MLP, CNN
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{3}{c}{WHITE} & \multicolumn{3}{c}{CORR} & \multicolumn{2}{c}{IMAGENET} \\ & & \(\alpha\) & ACC & \(\alpha\) & ACC & \(\alpha\) & ACC \\ \hline \multirow{3}{*}{LIN} & LLR & \(0.03\) & \(89.7\) & \(0.02\) & \(100.0\) & \(0.1\) & \(87.5\) \\ & MLP & \(0.03\) & \(87.9\) & \(0.02\) & \(100.0\) & \(0.1\) & \(86.2\) \\ & CNN & \(0.03\) & \(90.1\) & \(0.02\) & \(99.9\) & \(0.1\) & \(93.9\) \\ \multirow{3}{*}{MULT} & MLP & \(0.64\) & \(85.8\) & \(0.04\) & \(89.2\) & \(0.3\) & \(91.2\) \\ & CNN & \(0.64\) & \(100.0\) & \(0.04\) & \(98.5\) & \(0.3\) & \(91.3\) \\ \multirow{3}{*}{RIGID} & MLP & \(0.575\) & \(88.9\) & \(0.375\) & \(99.5\) & \(0.6\) & \(92.0\) \\ & CNN & \(0.575\) & \(100.0\) & \(0.375\) & \(100.0\) & \(0.6\) & \(99.9\) \\ \multirow{3}{*}{XOR} & MLP & \(0.1\) & \(99.9\) & \(0.1\) & \(100.0\) & \(0.2\) & \(99.9\) \\ & CNN & \(0.1\) & \(100.0\) & \(0.1\) & \(100.0\) & \(0.2\) & \(100.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of the model training process for each classification setting, model architecture, and background type. These results are depicted as chosen Signal-to-noise ratios (SNRs), parameterized by \(\alpha\), as well as the average test accuracy (ACC, %).
respectively) followed by the model-ignorant Laplace filter. Appendix A.6.1 expands on the qualitative results of the main text, and Figure 6 shows the absolute-valued _global_ importance heatmaps for the LIN, MULT, and XOR scenarios, given as the mean of all explanations for every correctly-predicted sample of the given scenario and XAI method. As the RIGID scenario has no static ground truth pattern, calculating a global importance map is not possible. Figure 3 shows explanation performance of individual sample-based importance maps produced by the selected XAI and baseline methods, across five models trained for each scenario-architecture parameterization in terms of the \(\mathrm{EMD}\_\mathrm{perf}\) metric. This is shown here for 500 test samples for each scenario. Appendix A.6.2 expands on the quantitative results of the main text, detailing results for all 16 methods studied and for our Precision metric, over the full test set. In a few cases, performance tends to decrease as model complexity increases (from the simple LLR to the complex CNN architecture). One notable exception is for the RIGID scenario, where the CNN outperforms other models as expected. However, in this setting nearly all XAI methods are outperformed by a simple Laplace edge detection filter for correlated backgrounds results. The CNN also performs well in the case of the more-complicated IMAGENET backgrounds.
Within most scenario-architecture parameterizations, the performances of the studied XAI methods are relatively homogeneous, with a few exceptions. In most cases, correlated backgrounds (CORR) lead to worse explanation performance than their white noise (WHITE) counterparts, suggesting that suppressors in the smoothed background are difficult to distinguish from the class-dependent variables for most XAI methods.
Baseline methods tend to perform similarly to one another. Interestingly, their performance is on par or even superior to various XAI methods in certain scenarios. Most notably, a simple Laplace
Figure 2: Absolute-valued importance maps obtained for a random correctly-predicted data sample, for selected XAI methods and baselines. Recovery of the ground truth pattern across all scenarios is best shown by XAI methods applied to a Linear Logistic Regression (LLR) model. The Multi-Layer Perceptron (MLP) tends to focus on noise in the case of ImageNet backgrounds, and LIME often fails to produce sensical explanations across all model architectures.
edge detection filter outperforms nearly all other methods in the RIGID as well as the XOR scenarios, when used in combination with correlated backgrounds (CORR).
## 5 Discussion
Experimental results confirm our main hypothesis that explanation performance is lower in cases where the class-specific signal is combined with a highly auto-correlated class-agnostic background (CORR) compared to a white noise background (WHITE). The difficulty of XAI methods to correctly highlight the truly important features in this setting can be attributed to the emergence of suppressor variables. Importantly, the misleading attribution of importance by an XAI method can lead to misinterpretations regarding the functioning of the predictive model, which could have severe consequences in practice. Such consequences could be unjustified mistrust in the model's decisions, unjustified conclusions regarding the features related to a certain outcome (e.g., in the context of medical diagnosis), and a reinforcement of such false beliefs in human-computer interaction loops.
We have also seen that when multiple ML architectures can be used interchangeably to appropriately solve a classification problem - here with classification accuracy required to be above 80% - they may still produce disparate explanations. Architectures not only differed with respect to the selection of pixels within the correct set of important features, but also showed different patterns of false positive attributions of importance to unimportant background features. If one cannot produce consistent and sensical results for multiple seemingly appropriate ML architectures, the risk of model mistrust may be especially pronounced.
A recent survey showed that one in three XAI papers evaluate methods exclusively with anecdotal evidence, and one in five with user studies Nauta et al. (2023). Other work in the field tends to focus on secondary criteria (such as stability and robustness Rosenfeld et al. (2021-03-27); Hedstrom et al. (2022)) or subjective or potentially circular criteria (such as fidelity and faithfulness Gevert et al. (2022); Nauta et al. (2023)). We doubt that such validation approaches can fully replace metrics assessing objective notions of 'correctness' of explanations, considering that XAI methods are widely intended to be used as means of quality assurance for machine learning systems in critical applications. Thus, the development of specific formal problems to be addressed by XAI methods, and the theoretical and empirical validation of respective methods to address specific problems, is necessary. In practice, a stakeholder may often (explicitly or implicitly) expect that a given XAI method identifies features that are truly related to the prediction target. In contrast to other notions
Figure 3: Quantitative explanation performance of individual sample-based feature importance maps produced by various XAI approaches and baseline methods on correctly-predicted test samples, as per the \(\mathrm{EMD\_perf}\) metric. Depicted are boxplots of median explanation performance, with upper and lower quartiles as well as outliers shown. The white area (left) shows results for white background noise (WHITE), whereas the light gray shaded area (middle) shows results for the correlated background noise (CORR) scenarios and the darker gray (right) for ImageNet (IMAGENET) backgrounds.
of faithfulness, this is an objectively quantifiable property of an XAI method, and we here propose various non-linear types of ground-truth data along with appropriate metrics to directly measure explanation performance according to this definition. While our work is not the first to provide quantitative XAI benchmarks (see, Tjoa & Guan, 2020; Li et al., 2021; Zhou et al., 2022; Arras et al., 2022; Gevaert et al., 2022; Agarwal et al., 2022), our work differs from most published papers in that it allows users to quantitatively assess potential misinterpretations caused by the presence of suppressor variables in data.
### Limitations
One potential limitation of our work is the strictness of limiting the ground truth feature set \(\mathcal{F}^{+}\) to the specific pixels of tetrominoes \(\mathbf{a}^{\text{T/L}}\) compared to, say, the set of features outlining \(\mathbf{a}^{\text{T/L}}\). Alternative definitions of \(\mathcal{F}^{+}\) could be conceived, as well as new metrics, to more flexibly adapt to different potential 'explanation strategies'. While we compare a total of 16 XAI methods, the space of possible neural network architectures is too vast to be represented; therefore we only compared one MLP and one CNN architecture here. However, our experiments hopefully serve as a showcase for our benchmarking framework, which can be easily extended to other architectures. Finally, our framework serves much needed validation purposes for methods that are conceived to themselves play a role in the quality assurance of AI. As such, we expect that the benefits of our work far outweigh potential negative implications on society, if any. A possible risk, even if far-fetched, would be that one may reject a fit-for-purpose XAI method based on empirical benchmarks such as ours, which do not necessarily reflect the real-world setting and may hence be too strict.
## 6 Conclusion
We have used a data-driven generative definition of feature importance to create synthetic data with well-defined ground truth explanations, and have used these to provide an objective assessment of XAI methods when applied to various classification problems. Furthermore, we have defined new quantitative metrics of explanation performance and demonstrated that many popular XAI methods do not behave in an ideal way when moving from linear to non-linear scenarios. Our results show that XAI methods can even be outperformed by simple model-ignorant edge detection filters in the RIGID use case, in which the object of interest is not located in a static position. Further, we show that XAI methods may provide inconsistent explanations when using different model architectures under equivalent conditions. Future work will be to develop dedicated performance benchmarks in more complex and application-specific problem settings such as medical imaging.
|
2308.04500 | Predicting Pathogenicity Of nsSNPs Associated With Rb1 -- An In Silico
Approach | Single nucleotide polymorphisms (SNPs) are variations at specific locations
in DNA. Sequence responsible for marking genes associated with diseases or
tracking inherited diseases within The family. These variations in the Rb1 gene
can cause Retinoblastoma and cancer in the retina Of one eye or both,
Osteosarcoma, Melanoma, Leukemias, Lungs, and Breast cancer. First of all,The
SNP database hosted by NCBI was used to extract some principal data. The
association of Rb1 to Other genes were analyzed by GeneMANIA. Ten different
computational tools, i.eSIFT,Polyphen-2, I-Mutant 3.0,PROVEAN, SNAP2, PHD-SNP,
PMut, SNPs&GO were used for the screening of damaging SNP for the estimation of
conserved regions of amino acids Consurf Server was used for the evaluation of
the structural stability of both native and mutant proteins, Project Hope was
used to examine the structural effects of mutant protein.GeneMANIA predicted
that RB1 Gene was expected to have a strong association with 20 other genes
i.e. CCND1 and RBP2 etc. As per data retrieved from dbSNP hosted by NCBI,the
Rb1 gene probed in this study carried a total of 36,358 SNPs. 345 were found in
3'UTR, 65 in 5'UTR, and 34,543 were found in the intron region. 844 were coding
SNPs, and out of 844, 199 were synonymous And 450 were non-synonymous,
including 425 missense, five nonsense, and 20 frameshift mutations. And
remaining all are other types of SNPs. We took 425 missense SNPs for our
investigation. A total of 17 mutations i.e. D332G, R445Q, E492V, P515T, W516G,
V531G, E533K, E539K, M558R,W563G, L657Q, A658T, R661Q, D697H, D697E, P796L and
R798W were predicted to have Damaging effects on structure and function of Rb1
protein.. | Anum Munir | 2023-07-16T18:55:30Z | http://arxiv.org/abs/2308.04500v1 | # "Predicting Pathogenicity Of nsSNPs Associated With Rb1- An In Silico Approach"
###### Abstract
Single nucleotide polymorphisms (SNPs) are variations at specific locations in DNA. Sequence responsible for marking genes associated with diseases or tracking inherited diseases within The family. These variations in the **Rb1** gene can cause Retinoblastoma and cancer in the retina Of one eye or both, Osteosarcoma, Melanoma, Leukemia, Lungs, and Breast cancer. First of all, The SNP database hosted by NCBI was used to extract some principal data. The association of Rb1 to Other genes were analyzed by GeneMANIA. Ten different computational tools, i.e., SIFT,Polyphen-2, I-Mutant 3.0, PROVEAN, SNAP2, PHD-SNP, PMut, SNPs&GO were used for the screening of damaging SNP for the estimation of conserved regions of amino acids Consurf Server was used for the evaluation of the structural stability of both native and mutant proteins, Project Hope was used to examine the structural effects of mutant protein. GeneMANIA predicted that RB1 Gene was expected to have a strong association with 20 other genes i.e. CCND1 and RBP2 etc. As per data retrieved from dbSNP hosted by NCBI,the Rb1 gene probed in this study carried a total of 36,358 SNPs. 345 were found in 3UTR, 65 in 5UTR, and 34,543 were found in the intron region. 844 were coding SNPs, and out of 844, 199 were synonymous And 450 were non-synonymous, including 425 missense, five nonsense, and 20 frameshift mutations. And remaining all are other types of SNPs. We took 425 missense SNPs for our investigation. A total of 17 mutations i.e. D332G, R445Q, E492V, P515T, W516G, V531G, E533K, E539K, M558R,W563G, L657Q, A658T, R661Q, D697H, D697E, P796L and R798W were predicted to have Damaging effects on structure and function of Rb1 protein..
Rb1Gene,nsSNPs,GeneMANIA,insilicoanalysis,coding,mutation, Retinoblastoma
## 1 Introduction
The work of this research has been presented in two parts as this is based on bioinformatics.The Foremost goal of the first part is to fetch data from the database and the purpose of the second part is to Extract accurate results using different bioinformatics tools by utilizing the same data taken from the database. **(first part of this study)** Rb1 stands for Retinoblastoma, which is cancer in the eye's retina, and it often develops in early Childhood. It may produce sporadically or genetically but is much more hostile if untreated. Here we Discuss the in-a-silico way out of treating this genetic disorder through a number of Computational tools. RB1 is a tumor suppressor gene found on chromosome no 13 that is responsible for the regulation of cell growth and prevents the cell from excessive growth in an uncontrolled way by Hindering cell cycle progression until cell division and when the cell is about to divide, Rb1 is phosphorylated to pRb, which leads to the inactivation of the Retinoblastoma protein activity [1]. This phase permits cells to enter into the cell cycle state, which may lead to the Mutation of this gene [2]. However, if Rb1 is activated chronically, then it redirects to the decline of required DNA replication factors, and after 72-96 hours of this chronicle activation then, all targeted Proteins portray declined factors of DNA replication, and sometimes, it may lead to a grave condition in which hindrance of DNA replication in cells takes place [3]. Rb1 is a component of "pocket protein family". Retinoblastoma protein **(RB)**, Retinoblastoma-like protein 1**(p107),** and Retinoblastoma-like protein 2**(p130)** is included in the pocket protein family. All three members Of this family are able the binding its functions to at least 100 further proteins, or shortly we can say that Rb1 is a multitasking protein with many Phosphorylation and binding sites, especially with E2F family. [4]
As this is a matter of fact that the total number of nucleotides in the Human Genome is 32,00,000,000, and they are split up into 24 linear molecules. As our study is on the RB1 gene Located on chromosome# 13, and the study reveals that chromosome # 13 has almost 114 Million base pairs comprise 3.5%-4.0% of the total genome in cells. Besides it, there are also,some variations in these nucleotides are called **"single nucleotide |
2301.08631 | Superconductivity in type II layered Weyl semi-metals | Novel quasi two dimensional typically layered semimetals offer a unique
opportunity to control the density and even the topology of the electronic
matter. In intercalated MoTe2 type II Weyl semimetal the tilt of the dispersion
relation cones is so large that topologically of the Fermi surface is distinct
from a more conventional type I. Superconductivity observed recently in this
compound [Zhang et al, 2D Materials 9, 045027 (2022)] demonstrated two puzzling
phenomena: the gate voltage has no impact on critical temperature, Tc, in wide
range of density, while it is very sensitive to the interlayer distance. The
phonon theory of pairing in a layered Weyl material including the effects of
Coulomb repulsion is constructed and explains the above two features in
MoTe2.The first feature turns out to be a general one for any type II
topological material, while the second reflects properties of the intercalated
materials affecting the Coulomb screening. | Baruh Rosenstein, B. Ya. Shapiro | 2023-01-20T15:28:05Z | http://arxiv.org/abs/2301.08631v1 | # Superconductivity in type II layered Weyl semi-metals
###### Abstract
Novel "quasi two dimensional" typically layered (semi) metals offer a unique opportunity to control the density and even the topology of the electronic matter. In intercalated \(MoTe_{2}\) type II Weyl semi - metal the tilt of the dispersion relation cones is so large that topologically of the Fermi surface is distinct from a more conventional type I. Superconductivity observed recently in this compound [Zhang et al, 2D Materials **9,** 045027 (2022)] demonstrated two puzzling phenomena: the gate voltage has no impact on critical temperature, \(T_{c}\), in wide range of density, while it is very sensitive to the inter - layer distance. The phonon theory of pairing in a layered Weyl material including the effects of Coulomb repulsion is constructed and explains the above two features in \(MoTe_{2}\). The first feature turns out to be a general one for any type II topological material, while the second reflects properties of the intercalated materials affecting the Coulomb screening.
Introduction.
The 3D and 2D topological quantum materials, such as topological insulators and Weyl semi - metals (WSM), attracted much interests due to their rich physics and promising prospects for application in electronic and spinotronic devices. The band structure in the so called type I WSM like graphene[1], is characterized by appearance linear dispersion relation (cones around several Dirac points) due to the band inversion. This is qualitatively distinct from conventional metals, semi - metals or semiconductors, in which bands are typically parabolic. In type-II WSM [2], the cones have such a strong tilt, \(\kappa\), so that they exhibit a nearly flat band and the Fermi surface "encircles" the Brillouin zone, Fig.1b, Fig.1c. It is topologically distinct from conventional "pockets", see Fig.1a. This in turn leads to exotic electronic properties different from both the those in both the conventional and in the type I WSM. Examples include the collapse of the Landau level spectrum in magnetoresistance [3], and novel quantum oscillations [4].
The type II topology of the Fermi surface was achieved in particular in transition metal dichalcogenides [5]. Very recently\(MoTe_{2}\) layers intercalated by ionic liquid cations were studied[6]. The tilt value was estimated to as high as \(\kappa=1.3\) that places it firmly within the type II WSM class. The measurements included the Hall effect and the resistivity at low temperatures demonstrating appearance of superconductivity. They discovered two intriguing facts that are currently under discussion. First changing the gate voltage (chemical potential) surprisingly has no impact on critical temperature, \(T_{c}\), in wide range of density of the electron gas. Second \(T_{c}\) turned out to be very sensitive to the inter - layer distance \(d\): it increases from \(10.5A\) to \(11.7A\), while the critical temperature jumps from \(4.2K\) to \(7K\). In the present paper we propose a theoretical explanation of these observations based on appropriate generalization of the conventional superconductivity theory applied to these materials.
Although early on unconventional mechanisms of superconductivity in WSM have been considered, accumulated experimental evidence points towards the conventional phonon mediated one [7; 8; 9]. In the previous paper[11] and a related work[10] a continuum theory of conventional superconductivity in WSM was developed. Magnetic response in the superconducting state was calculated[10][12]. The model was too "mesoscopic" to describe the type II phase since the _global_ topology of the Brillouin zone was beyond the scope of the continuum approach. Therefore we go beyond the continuum model in the present paper by modeling a type II layered WSM using a tight binding approach. The in-plane electron liquid model is similar to that of graphene oxide[13] and other 2D WSM. It possesses a chiral symmetry between two Brave sublattices for all values of the tilt parameter \(\kappa\), but lacks hexagonal symmetry. The second necessary additional feature is inclusion of Coulomb repulsion.
It turns out that the screened Coulomb repulsion significantly opposes the phonon mediated pairing. Consequently a detailed RPA theory of screening in a layered material[14] is applied. We calculate the superconducting critical temperature taking into consideration the modification of the Coulomb interaction due to the dielectric constant of intercalator material and the inter-layered spacing \(d\). The Gorkov equations for the two sublattices system are solved without resorting to the mesoscopic approach. Moreover since screening of Coulomb repulsion plays a much more profound role in quasi 2D materials the pseudo-potential simplification developed by McMillan[15] is not valid.
Rest of the paper is organized as follows. In Section II the microscopic model of the layered WSM is described. The RPA calculation of both the intra- and inter - layer screening is presented. In Section III the Gorkov equations for the optical phonon mediated intra- layer pairing for a multiband system including the Coulomb repulsion is derived and solved numerically. In Section IV the phonon theory of pairing including the Coulomb repulsion for a layered material is applied to recent extensive experiments on \(MoTe_{2}\). The effect of intercalation and density on superconductivity is studied. This explains the both remarkable features of \(T_{c}\) observed[6] in \(MoTe_{2}\). The last Section contains conclusions and discussion.
## II A "generic" lattice model of layered Weyl semi-metals
### Intra- layer hopping
A great variety of tight binding models were used to describe Weyl (Dirac) semimetals in 2D. Historically the first was graphene (type I, \(\kappa=0\)), in which electrons hope between the neighboring cites of the honeycomb lattice. We restrict the discussion to systems with the minimal two cones of opposite chirality and negligible spin orbit coupling. The two Dirac cones appear in graphene at \(K\) and \(K^{\prime}\) crystallographic points in BZ. Upon modification (more complicated molecules like graphene oxide, stress, intercalation) the hexagonal symmetry is lost, however a discrete chiral symmetry between two sublattices, denoted by \(I=A,B\), ensures the WSM. The tilted type I and even type II (for which typically \(\kappa>1\)) crystals can be described by the same Hamiltonian with the tilt term added. This
2D model is extended to a layered system with inter - layer distance \(d\). Physically the 2D WSM layers are separated by a dielectric material with inter - layer hopping neglected, so that they are coupled electromagnetically only[14].
The lateral atomic coordinates are still considered on the honeycomb lattice are \(\mathbf{r_{n}}=n_{1}\mathbf{a}_{1}+n_{2}\mathbf{a}_{2}\), where lattice vectors are:
\[\mathbf{a}_{1}=\frac{a}{2}\left(1,\sqrt{3}\right);\ \mathbf{a}_{2}=\frac{a}{2} \left(1,-\sqrt{3}\right), \tag{1}\]
despite the fact that hopping energies are different for jumps between nearest neighbors. Each site has three neighbors separated by \(\delta_{1}=\frac{1}{3}\left(\mathbf{a}_{1}-\mathbf{a}_{2}\right),\delta_{2}=- \frac{1}{3}\left(2\mathbf{a}_{1}+\mathbf{a}_{2}\right)\) and \(\delta_{3}=\frac{1}{3}\left(\mathbf{a}_{1}+2\mathbf{a}_{2}\right)\), in different directions. The length of the lattice vectors \(a\) will be taken as the length unit and we set \(\hbar=1\). The hopping Hamiltonian including the tilt term is[13; 16]:
\[K=\frac{\sqrt{3}}{4}\sum\nolimits_{\mathbf{n}l}\left\{\gamma\left(\psi_{ \mathbf{n}l}^{sA\dagger}\psi_{\mathbf{r_{n}}+\delta_{1},l}^{sB}+\psi_{ \mathbf{n}l}^{sA\dagger}\psi_{\mathbf{r_{n}}+\delta_{2},l}^{sB}+t\psi_{ \mathbf{n}l}^{sA\dagger}\psi_{\mathbf{r_{n}}+\delta_{3},l}^{sB}\right)+ \mathrm{h.c.}-\kappa\psi_{\mathbf{n}l}^{sI\dagger}\psi_{\mathbf{r_{n}}+ \mathbf{a}_{1},l}^{sI}-\mu n_{\mathbf{n},l}\right\}. \tag{2}\]
Here an integer \(l\) labels the layers. Operator \(\psi_{\mathbf{n}l}^{sA\dagger}\) is the creation operators with spin \(s=\uparrow,\downarrow\), while the density operator is defined as \(n_{\mathbf{n}l}=\psi_{\mathbf{n}l}^{sI\dagger}\psi_{\mathbf{n}l}^{sI}\). The chemical potential is \(\mu\), while \(\gamma\) is the hopping energy for two neighbors at \(\delta_{1},\delta_{2}\). Since the the system does not possesses hexagonal symmetry (only the chiral one), the third jump has the different hopping[13]\(t\gamma\). Dimensionless parameter \(\kappa\) determines the tilt of the Dirac cones along the \(\mathbf{a}_{1}\)direction[16]. In the 2D Fourier space, \(\psi_{\mathbf{n}l}^{sA}=N_{s}^{-2}\sum_{\mathbf{k}}\psi_{\mathbf{k}l}^{sA}e^ {-i\mathbf{k}\cdot\mathbf{r}_{n}}\), one obtains for Hamiltonian (for finite discrete reciprocal lattice \(N_{s}\times N_{s}\)):
\[K=N_{s}^{-2}\sum\nolimits_{\mathbf{k}l}\psi_{\mathbf{k}l}^{s\dagger}M_{ \mathbf{k}}\psi_{\mathbf{k}l}^{s}. \tag{3}\]
Here \(\mathbf{k}=\frac{k_{1}}{N_{s}}\mathbf{b}_{1}+\frac{k_{2}}{N_{s}}\mathbf{b}_{2}\) (reciprocal lattice vectors are given in Appendix A) and matrix \(M_{\mathbf{k}}=d_{x}\sigma_{x}+d_{y}\sigma_{y}+d_{0}I\) in terms of Pauli matrices has components:
\[d_{x} = \frac{2t}{\sqrt{3}}\cos\left[\frac{2\pi}{3N_{s}}\left(k_{1}-k_{2 }\right)\right]+\frac{4}{\sqrt{3}}\cos\left[\frac{\pi}{N_{s}}\left(k_{1}+k_{2 }\right)\right]\cos\left[-\frac{\pi}{3N_{s}}\left(k_{1}-k_{2}\right)\right]; \tag{4}\] \[d_{y} = -\frac{2t}{\sqrt{3}}\sin\left[\frac{2\pi}{3N_{s}}\left(k_{1}-k_{2 }\right)\right]+\frac{4}{\sqrt{3}}\cos\left[\frac{\pi}{N_{s}}\left(k_{1}+k_{2 }\right)\right]\sin\left[\frac{\pi}{3N_{s}}\left(k_{1}-k_{2}\right)\right];\] \[d_{0} = \frac{2}{\sqrt{3}}\left\{-\kappa\cos\left[\frac{2\pi}{N_{s}}k_{1 }\right]-\mu\right\}.\]
Figure 1: Two distict topologies of the Fermi surface in 2D. Topology of the 2D Brillouin zone is that of the surface of 3D torroid. On the left the “conventional” type I pocket is shown. In the ceter and on the right the type II topology is shown schematically. The filled states are in blue and envelop the torus. Despite the large difference in density of the two the Fermi surface properties like density of states are the same.
Using \(\gamma\) as our energy unit from now on, the free electrons part of the Matsubara action for Grassmanian fields \(\psi_{\mathbf{k}ln}^{*sI}\) is:
\[S^{e}=\frac{1}{T}\sum\nolimits_{\mathbf{k}ln}\psi_{\mathbf{k}ln}^{*sI}\left\{ \left(-i\omega_{n}+d_{\mathbf{k}}^{0}\right)\delta^{IJ}+\sigma_{i}^{IJ}d_{ \mathbf{k}}^{i}\right\}\psi_{\mathbf{k}ln}^{*J}. \tag{5}\]
where \(\omega_{n}=\pi T\left(2n+1\right)\) is the Matsubara frequency. The Green Function of free electrons has the matrix form
\[g_{\mathbf{k}n}=\left\{\left(-i\omega_{n}+d_{\mathbf{k}}^{0}\right)I+\sigma_{ i}d_{\mathbf{k}}^{i}\right\}^{-1}=\frac{\left(-i\omega_{n}+d_{\mathbf{k}}^{0} \right)I-\sigma_{i}d_{\mathbf{k}}^{i}}{\left(i\omega_{n}-d_{\mathbf{k}}^{0} \right)^{2}-d_{\mathbf{k}}^{*2}-d_{\mathbf{k}}^{*2}}. \tag{6}\]
Now we turn to the spectrum of this model.
### The range of the topological type II phase at large \(\kappa\)
The spectrum of Hamiltonian of Eqs.(4) consists of two branches. The upper branch for \(\mu=0.9eV\) is given in Fig. 2. The lower branch for a reasonable choice of parameters appropriate to \(MoTe_{2}\) is significantly below the Fermi surface and is not plotted. Blue regions represent the filled electron states. One observes a "river" from one boundary to the other of the Brillouin zone (in coordinates \(k_{1}\) and \(k_{2}\), in terms of the original \(k_{x},k_{y}\) it is a rhomb) characteristic to type II Fermi surface. Topologically this is akin to Fig.1b.
In Fig. 3 the Fermi surfaces in a wide range of densities \(n=7.\times 10^{13}-4.5\times 10^{14}cm^{-2}\) are given. Topologically they separate into three phases. At chemical potentials below \(\mu_{c}^{1}=0.796\)\(eV\), corresponding to densities \(n<n_{c}^{1}=8.\times 10^{13}\)\(cm^{-2}\), the Fermi surface consists of one compact electron pocket similar to Fig.1a, so that the electronic matter is of the ("customary") topological type I. The density is determined from the (nearly linear) relation between the
Figure 2: The topological phase diagram of the Weyl semimetal at large tilt parameter (\(\kappa=1.3\)). Chemical potential (in units of \(\gamma=500\) meV) is marked on each contour. The electron type I topology at low values of \(\mu\) undergoes transition to the type II at \(\mu=\mu_{c}^{1}=0.8\) meV. At yet larger \(\mu>\mu_{c}^{2}=1.35\). the Fermi surface becomes again type I. This time the excitations are hole rather than electrons.
chemical potential and density given in Fig. 4 (blue line, scale on the right). In the range \(\mu_{c}^{1}<\mu<\mu_{c}^{2}=1.35eV\) the Fermi surface consists of two banks of a "river" (blue color represents filled electron states) in Fig.2 and can be viewed topologically as in Fig.1b and Fig1c. The second critical density is \(n_{c}^{2}=3.6\times 10^{14}\ cm^{-2}\). In this range the shape of both pieces of the Fermi surface largely does not depend on the density that is proportional to the area of the blue part of the surface.
To make this purely topological observation quantitative, we present in Fig. 4 (green line, scale on the left) the density of states (DOS) as a function of chemical potential. One observes that it nearly constant away from the two topological I to II transitions where it peaks.
### Coulomb repulsion
The electron-electron repulsion in the layered WSM can be presented in the form,
\[V=\frac{e^{2}}{2}\sum\nolimits_{\mathbf{n}\mathbf{l}\mathbf{n}^{\prime}l^{ \prime}}n_{\mathbf{n}\mathbf{l}}v^{C}_{\mathbf{n}-\mathbf{n}^{\prime},l-l^{ \prime}}n_{\mathbf{n}^{\prime}l^{\prime}}=\frac{e^{2}}{2N_{z}^{2}}\sum\nolimits _{\mathbf{q}ll^{\prime}}n_{\mathbf{q}l}n_{-\mathbf{q}l^{\prime}}v^{C}_{\mathbf{ q},l-l^{\prime}}, \tag{7}\]
where \(v^{C}_{\mathbf{n}-\mathbf{n}^{\prime},l-l^{\prime}}\) is the "bare" Coulomb interaction between electrons with Fourier transform \(v^{C}_{\mathbf{q},l-l^{\prime}}=v^{2D}_{\mathbf{q}}e^{-dq|l-l^{\prime}|}\), \(v^{2D}_{\mathbf{q}}=2\pi e^{2}/q\epsilon\). Here \(\epsilon\) is the dielectric constant of the intercalator material
The long range Coulomb interaction is effectively taken into account using the RPA approximation.
## IV Screening in layered WSM.
The screening in the layered system can be conveniently partitioned into the screening within each layer described by the polarization function \(\Pi_{\mathbf{q}n}\) and electrostatic coupling to carriers in other layers. We start with the former.
Figure 3: Dispersion relation of WSM with \(\kappa=1.3\). The blue plane corresponds to chemical potentia l\(\mu\) = 0.8 eV so that the Fermi surface has the type II topology.
### Polarization function of the electron gas in Layered WSM
In a simple Fermi theory of the electron gas in normal state with Coulomb interaction between the electrons in RPA approximation the Matsubara polarization is calculated as a simple _minus_ "fish" diagram [14] in the form:
\[\Pi_{\mathbf{q}n}=-\left(-2T\mathrm{Tr}\sum\nolimits_{\mathbf{p}m}g_{\mathbf{p} m}g_{\mathbf{p}+\mathbf{q},m+n}\right). \tag{8}\]
Using the GF of Eq.(6), one obtain:
\[\Pi_{\mathbf{q}n}=4T\sum\nolimits_{\mathbf{p}m}\frac{\left(i\omega_{m}+A \right)\left(i\omega_{m}+B\right)+C}{\left[\left(i\omega_{m}+A\right)^{2}- \alpha^{2}\right]\left[\left(i\omega_{m}+B\right)^{2}-\beta^{2}\right]}, \tag{9}\]
where
\[A =-d_{\mathbf{p}}^{0};B=i\omega_{n}-d_{\mathbf{p}+\mathbf{q}}^{0}; \quad C=d_{\mathbf{p}}^{x}d_{\mathbf{p}+\mathbf{q}}^{x}+d_{\mathbf{p}}^{y}d_{ \mathbf{p}+\mathbf{q}}^{y}; \tag{10}\] \[\alpha^{2} =d_{\mathbf{p}}^{x2}+d_{\mathbf{p}}^{y2};\quad\beta^{2}=d_{ \mathbf{p}+\mathbf{q}}^{x2}+d_{\mathbf{p}+\mathbf{q}}^{y2}.\]
Performing summation over \(m\), one obtains:
\[\Pi_{\mathbf{q}n}=-\sum\nolimits_{\mathbf{p}}\left\{\begin{array}{l}\frac{ \alpha^{2}-\alpha(A-B)+C}{\alpha[(A-B)-\beta^{2}-\beta^{2}]}\tanh\frac{\alpha -A}{2T}+\frac{\alpha^{2}+\alpha(A-B)+C}{\alpha[(A-B+\alpha)^{2}-\beta^{2}]} \tanh\frac{\alpha+A}{2T}\\ +\frac{\beta^{2}+\beta(A-B)+C}{\beta[(A-B+\beta)^{2}-\alpha^{2}]}\tanh\frac{ \beta-B}{2T}+\frac{\beta^{2}-\beta(A-B)+C}{\beta[(A-B-\beta)^{2}-\alpha^{2}]} \tanh\frac{\beta+B}{2T}\end{array}\right\}. \tag{11}\]
Now we turn to screening due to other layers.
### Screening in a layered system
Coulomb repulsion between electrons in different layers \(l\) and \(l^{\prime}\) within the RPA approximation is determined by the following integral equation:
Figure 4: Electron density and DOS as function of the chemical potential \(\mu\).of WSM with \(\kappa=1.3\). DOS has cusps at both I to II transitions. Between the transitions it is nearly constant in the range of densities from \(1.1\times 10^{14}/cm^{2}\) to \(4.\times 10^{14}/cm^{2}\).
\[V^{RPA}_{{\bf q},l-l^{\prime},n}=v^{C}_{{\bf q},l-l^{\prime}}+\Pi_{{\bf q}n}\sum _{l^{\prime\prime}}v^{C}_{{\bf q},l-l^{\prime\prime}}V^{RPA}_{{\bf q},l^{\prime \prime}-l^{\prime},n}. \tag{12}\]
The polarization function \(\Pi_{{\bf q}n}\) in 2D was calculated in the previous subsection. This set of equations is decoupled by the Fourier transform in the \(z\) direction,
\[V^{RPA}_{{\bf q},q_{z},n}=\frac{v^{C}_{{\bf q},q_{z}}}{1-\Pi_{{\bf q}n}v^{C}_{{ \bf q},q_{z}}}\, \tag{13}\]
where
\[v^{C}_{{\bf q},q_{z}}=\sum_{l}v^{2D}_{{\bf q}}e^{iq_{z}l-qd|l|}=v^{2D}_{{\bf q }}\frac{\sinh{[qd]}}{\cosh{[qd]}-\cos{[dq_{z}]}}. \tag{14}\]
The screened interaction in a single layer therefore is is given by the inverse Fourier transform [14]:
\[V^{RPA}_{{\bf q},l-l^{\prime},n}=\frac{d}{2\pi}\int^{\pi/d}_{q_{z}=-\pi/d}e^{ iq_{z}d(l-l^{\prime})}\frac{v^{C}_{{\bf q},q_{z}}}{1-\Pi_{{\bf q}n}v^{C}_{{\bf q},q_{z}}}. \tag{15}\]
Considering screened Coulomb potential at the same layer \(l=l^{\prime}\), the integration gives,
\[V^{RPA}_{{\bf q}n}=\frac{v^{2D}_{{\bf q}}\sinh{[qd]}}{\sqrt{b^{2}_{{\bf q}n}-1}}, \tag{16}\]
where \(b_{{\bf q}n}=\cosh{[dq]}-v^{2D}_{{\bf q}}\Pi_{{\bf q}n}\sinh{[dq]}\). This formula is reliable only away from plasmon region \(b_{{\bf q}n}>1\). It turns out that to properly describe superconductivity, one can simplify the calculation at low temperature by considering the static limit \(\Pi_{{\bf q}n}\simeq\Pi_{{\bf q}0}\). Consequently the potential becomes static: \(V^{RPA}_{{\bf q}}\equiv V^{RPA}_{{\bf q},n=0}\).
## IV Superconductivity
Superconductivity in WSM is caused by a conventional phonon pairing. The leading mode is an optical phonon mode assumed to be dispersionless. with energy \(\Omega\). The effective electron-electron attraction due to the electron - phonon attraction opposed by Coulomb repulsion (pseudo - potential) mechanism creates pairing below \(T_{c}\). Further we assume the singlet \(s\)-channel electron-phonon interaction and neglect the inter-layers electrons pairing. In order to describe superconductivity, one should "integrate out" the phonon and the spin fluctuations degrees of freedom to calculate the effective electron - electron interaction. We start with the phonons. The Matsubara action for effective electron-electron interaction via in-plane phonons and direct Coulomb repulsion calculated in the previous Section. It important to note that unlike in metal superconductors where a simplified pseudo - potential approach due to McMillan and other [15], in 2D and layered WSM, one have to resort to a more microscopic approach.
### Effective attraction due to phonon exchange opposed by the effective Coulomb repulsion
The free and the interaction parts of the effective electron action ("integrating phonons"+RPA Coulomb interaction) in the quasi - momentum - Matzubara frequency representation, \(S=S^{e}+S^{int}\),
\[S^{int}=\frac{1}{2T}\sum\nolimits_{{\bf q}ll^{\prime}mm^{\prime}}n_{{\bf q}l n}\left(\delta_{ll^{\prime}}V^{ph}_{{\bf q},m-m^{\prime}}+V^{RPA}_{{\bf q},l-l^{ \prime}}\right)n_{-{\bf q},-l^{\prime},-n^{\prime}}. \tag{17}\]
Here \(n_{{\bf q}ln}=\sum_{{\bf p}}\psi^{s\star I}_{{\bf p}ln}\psi^{sI}_{{\bf q}-{\bf p },l,n}\) the Fourier transform of the electron density and \(S^{e}\) was defined in Eq.(5). The effective electron - electron coupling due to phonons is:
\[V^{ph}_{{\bf q}m}=-\left(\frac{\sqrt{3}}{2}\right)^{2}\frac{g^{2}\Omega}{\omega ^{b2}_{m}+\Omega^{2}}, \tag{18}\]
where the bosonic frequencies are \(\omega^{b}_{m}=2\pi mT\).
### Gorkov Green's functions and the s-wave gap equations
Normal and anomalous (Matsubara) intra - layer Gorkov Green's functions are defined by expectation value of the fields, \(\left\langle\psi^{Is}_{{\bf k}nl}\psi^{ss^{\prime}J}_{{\bf k}nl}\right\rangle= \delta^{ss^{\prime}}G^{IJ}_{{\bf k}n}\) and \(\left\langle\psi^{Is}_{{\bf k}nl}\psi^{J^{\prime}}_{-{\bf k},-n,l}\right\rangle= \varepsilon^{ss^{\prime}}F^{IJ}_{{\bf k}n}\), while the gap function is
\[\Delta^{IJ}_{{\bf q}n}=\sum\nolimits_{{\bf p}m}V_{{\bf q}-{\bf p},n-m}F^{IJ}_{ {\bf p}m}, \tag{19}\]
where \(V_{{\bf q}n}=V^{ph}_{{\bf q}n}+V^{RPA}_{{\bf q}n}\) is a sublattice scalar. The gap equations in the sublattice matrix form are derived from Gorkov equations in Appendix B:
\[\Delta_{{\bf q}n}=-\sum\nolimits_{{\bf p}m}V_{{\bf q}-{\bf p},n-m}g_{{\bf p}m }\left\{I+\Delta_{{\bf p}m}g^{t}_{-{\bf p},-m}\Delta^{*}_{-{\bf p},-m}g_{{\bf p }m}\right\}^{-1}\Delta_{{\bf p}m}g^{t}_{-{\bf p},-m}. \tag{20}\]
In numerical simulation the gap equation was solved iteratively. Relatively large space cutoff \(N_{s}=256\) is required. The frequency cutoff \(N_{t}=128\) was required due to low temperatures approached. Typically \(15-25\) iterations were required. The parameters used were \(\Omega=16meV\). The electron - phonon coupling \(g=20meV\). Now we turn to results concentrating on two puzzling experimental results of ref.[6].
### Independence of \(T_{c}\) on density in topological type II phase
In Fig.5 the critical temperature for various values of density are plotted. The blue points are for dielectric constant[6], \(\varepsilon=16\), describing the intercalated imidazole cations \([C_{2}MIm]\)[17]. The inter - layer distance was kept at \(d=10.5A\).
The significance and generatiozation of the observation are discussed below.
### Increase of \(T_{c}\) with dielectric constant of intercalator materials
The main idea of the paper is that the difference in \(T_{c}\) between different intercalators is attributed not to small variations in the inter - layer spacing \(d\), but rather to large differences in the dielectric constant of the intercalating materials due to its effect on the screening. In experiment of ref.[6] the imidazole cations \([C_{2}MIm]^{+}\) (1- ethyl - 3 - methyl - imidazolium) are short molecules[17] have \(\epsilon=16\), while \([C_{6}MIm]^{+}\) (1- hexy 1 - 3 - methyl - imidazolium) are long molecules[18] with a larger value \(\epsilon\simeq 50\). The inter - layer distance \(d\) is slightly dependent intercalators
Figure 5: Critical temperature of transition to superconducting state in type II layered WSM is shown as function of chemical potential (can be translated into carrier density via Fig.4). Three values of dielectric constant of the intercalant for fixed interlayer distance are shown. Parameters of the electron gas are the same as in previous figures.
changing from \(10.5\AA\) to \(11.7\AA\). The blue points in Fig 5 describe a material with dielectric constant \(\varepsilon=16\) should. This is contrasted[18] with the \(\varepsilon=50\) material, see the red point. Neglecting the Coulomb repulsion, see the green points, critical temperature (a much simpler calculation of \(T_{c}\) in this case similar to that in ref.[11] is needed in this case) becomes yet higher. This demonstrate the importance of the Coulomb repulsion in a quasi 2D system. Superconductivity is weaker for monolayer on substrate since both air and substrate have smaller dielectric constants and hence weaker screen the Coulomb repulsion.
## Discussion and conclusion
To summarize we have developed a theory of superconductivity in layered type II Weyl semi-metals that properly takes into account the Coulomb repulsion. The generalization goes beyond the simplistic pseudo - potential approach due to McMillan[15] and others and depends essentially on the intercalating material. The theory allows to explain the two puzzling phenomena observed recently in layered intercalated \(MoTe_{2}\) WSW compound [6]
The first experimental observation is that the gate voltage (changes in the chemical potential or equivalently in density) has no impact on critical temperature \(T_{c}\). For the 3D density range \(8.\times 10^{20}cm^{-3}-3.6\times 10^{21}cm^{-3}\) the temperature changes within 5%. For the intercalating material \([C_{2}MIm]^{+}\) with inter - layer distance \(d=10.5A\) the 2D density range translates into \(8.4\times 10^{13}cm^{-2}-3.8\times 10^{14}cm^{-2}\) wt slightly larger spacings \(d=11.7A\) shown in Figs 2-5. This feature is explained purely topologically, see schematic Fig.1. In the type II density range the shape of both pieces of the Fermi surface (the blue - yellow boundaries in Fig1b and Fig1c) largely does not depend on the density (that is proportional to the area of the blue part of the surface) leading, see Fig. 4 to approximate independence of the density of states (DOS) \(N\left(0\right)\) of chemical potential \(\mu\). This feature is akin to the DOS independence on \(\mu\) for a parabolic (topologically type I like in Fig.1a) band in purely 2D materials, but has completely difficult origin.
Using the somewhat naive BCS formula
\[T_{c}\simeq\Omega\ e^{-N(0)g_{eff}^{2}}. \tag{21}\]
Here \(\Omega\) is the phonon frequency and \(g_{eff}\) the effective electron - phonon coupling. Assuming that both \(\Omega\) and \(g\) do not depend on the density one arrives at a conclusion that in the type II topological phase the critical temperature is density independent.
The second experimental observation[6] was that \(T_{c}\) is in fact very sensitive to the intercalating material. For imidazole cations \([C_{2}MIm]^{+}\) the critical temperature is \(T_{c}=4.2K\), while for \([C_{6}MIm]^{+}\) the temperature jumps to \(T_{c}=6.6K\) or \(6.9K\) depending on the intercalation method. The inter - layer distance \(d\) is slightly dependent intercalators increasing from \(10.5\AA\) to \(11.7\AA\). Our calculation demonstrates that the difference in \(T_{c}\) between different intercalators cannot be attributed to small variations in the inter - layer spacing \(d\). On the contrary there are large differences in the dielectric constant of the intercalating materials. While \([C_{2}MIm]^{+}\) have[17] a relatively small dielectric constant \(\epsilon=16\), \([C_{6}MIm]^{+}\) is estimated[18] in the range \(\epsilon=40-60\). Our theory accounts the difference in \(T_{c}\) due to changes in the screening of the Coulomb potential due to the inter - layer insulator.
## Acknowledgements.
This work was supported by NSC of R.O.C. Grants No. 101-2112-M-009-014-MY3.
## Appendix A Details of the model
The system considered in the paper is fitted for the following values of the hopping and the tilt parameter. The dimensionless tilt parameter was taken from ref.[6]\(\kappa=1.3\). The hopping \(\gamma=500\ meV\) and \(t=2\). The calculations were performed on the discrete reciprocal lattice \(k_{1},k_{2}=1,...N_{s}\) with \(N_{s}=256\). Reciprocal lattice basis vectors are,
\[{\bf b}_{1}=2\pi\left(1,\frac{1}{\sqrt{3}}\right);\ \ \ \ {\bf b}_{2}=2\pi \left(1,-\frac{1}{\sqrt{3}}\right), \tag{22}\]
so that a convenient representation is \({\bf k}=\frac{k_{1}}{N_{s}}{\bf b}_{1}+\frac{k_{2}}{N_{s}}{\bf b}_{2}\) with
\[k_{x}=\frac{2\pi}{N_{s}}\left(k_{1}+k_{2}\right),\quad k_{y}=\frac{2\pi}{\sqrt{ 3}N_{s}}\left(k_{1}-k_{2}\right). \tag{23}\]
## Appendix B. Derivation of the two sublattice gap equation
### Green's functions and the s-wave Gorkov equations
We derive the Gorkov's equations (GE) within the functional integral approach[19] starting from the effective electron action for grassmanian fields \(\psi^{*X},\psi^{Y}\).
\[S=\frac{1}{T}\left\{\psi^{*X}\left(G_{0}^{-1}\right)^{XY}\psi^{Y}+\frac{1}{2} \psi^{*Y}\psi^{Y}V^{YX}\psi^{*X}\psi^{X}\right\}, \tag{24}\]
where \(X,Y\) denote space coordinate, sublattices (pseudospin) and spin of the electron. Finite temperature properties of the condensate are described at temperature \(T\) by the normal and the anomalous Matsubara Greens functions for spin singlet state.
The GE in functional form are:
\[\left\langle\psi^{A}\psi^{*B}\right\rangle\frac{\delta}{\delta\psi^{*C}} \left\langle\frac{\delta S}{\delta\psi^{*B}}\right\rangle+\left\langle\psi^{A }\psi^{B}\right\rangle\frac{\delta}{\delta\psi^{*C}}\left\langle\frac{\delta S }{\delta\psi^{B}}\right\rangle=0; \tag{25}\]
\[\left\langle\psi^{A}\psi^{*B}\right\rangle\frac{\delta}{\delta\psi^{C}} \left\langle\frac{\delta S}{\delta\psi^{*B}}\right\rangle+\left\langle\psi^{A }\psi^{B}\right\rangle\frac{\delta}{\delta\psi^{C}}\left\langle\frac{\delta S }{\delta\psi^{B}}\right\rangle=\delta^{AC}. \tag{26}\]
Performing the calculations and using the normal and anomalous Green functions in the form \(F^{AB}=\left\langle\psi^{A}\psi^{B}\right\rangle;G^{AB}=\left\langle\psi^{A} \psi^{*B}\right\rangle,\) one obtains:
\[F^{AX}\left\{\left(G_{0}^{-1}\right)^{CX}-v^{XC}G^{CX}+v^{CX}G^{XX}\right\}+G ^{AX}v^{XC}F^{XC}=0. \tag{27}\]
Skipping second and third terms in bracket in this expression and defining, superconducting gap \(\Delta^{AB}=v^{AB}F^{AB}\), one rewrites as a matrix products:
\[\left(G_{0}^{-1}\right)^{CX}F^{XA}=G^{AX}\Delta^{XC}. \tag{28}\]
The first GE (multiplied from left by \(G_{0}\)) is,
\[F^{AB}=-G^{AX}G_{0}^{BY}\Delta^{XY}, \tag{29}\]
while the second GE similarly is:
\[G^{AB}-G^{AX}\Delta^{XY}G_{0}^{ZY}\Delta^{*ZU}G_{0}^{UB}=G_{0}^{AB}. \tag{30}\]
### Frequency-quasi-momentum and the spin-sublattice decomposition
The generalized index \(A\) contains the space variables (space + Matsubara time, \(a\)), spin \(s\) and the sublattice \(I\). After performing the Fourier series with combined quasi - momentum - frequency \(\alpha\):
\[F_{ab}^{s_{1}s_{2}IJ} = \epsilon^{s_{1}s_{2}}\sum_{\alpha}e^{i\alpha(a-b)}F_{\alpha}^{IJ} ;\ \ \Delta_{ab}^{s_{1}s_{2}IJ}=\epsilon^{s_{1}s_{2}}\sum_{\alpha}e^{i\alpha(a-b)} \Delta_{\alpha}^{IJ}; \tag{31}\] \[G_{ab}^{s_{1}s_{2}IJ} = \delta^{s_{1}s_{2}}\sum_{\alpha}e^{i\alpha(a-b)}g_{\alpha}^{IJ}; \ \ V_{ab}^{s_{1}s_{2}IJ}=\sum_{\alpha}e^{i\alpha(a-b)}v_{\alpha}.\]
Substituting spins into Eq.(29,30), one obtains in the sublattice matrix form
\[F_{\alpha}=-G_{\alpha}\Delta_{\alpha}g_{-\alpha}^{t}; \tag{32}\] \[G_{\alpha}=G_{0\alpha}\left\{I+\Delta_{\alpha}g_{-\alpha}^{t} \Delta_{-\alpha}^{*}G_{0\alpha}\right\}^{-1},\]
Convoluting the first GE by \(v_{\nu}\) one obtains:
\[\Delta_{\omega}=-\sum\nolimits_{\nu}v_{\omega-\nu}G_{\nu}\Delta_{\nu}g_{-\nu} ^{t}. \tag{33}\]
The solution of the second GE for \(G\) is:
\[G_{\alpha}=g_{\alpha}\left\{I+\Delta_{\alpha}g_{-\alpha}^{t}\Delta_{-\alpha}^{ *}g_{\alpha}\right\}^{-1}. \tag{34}\]
Substituting into the first GE one obtain Eq.(20) in the text.
|
2303.06065 | Symmetry-Preserving Coupling Method for Topological Acoustic
Metamaterials | In this paper we investigate different types of couplings used in acoustic
metamaterials requiring preservation of symmetries. For testing we use the SSH
model to test whether topologically edge and interface modes are supported with
the different types of connection. We observed that a modular platform where
the resonators are coupled through the bottom is the simplest method that is
accurate and flexible. | Ssu-Ying Chen, Camelia Prodan | 2023-02-21T17:50:58Z | http://arxiv.org/abs/2303.06065v1 | # Symmetry-Preserving Coupling Method for Topological Acoustic Metamaterials
###### Abstract
In this paper we investigate different types of couplings used in acoustic metamaterials requiring preservation of symmetries. For testing we use the SSH model to test whether topologically edge and interface modes are supported with the different types of connection. We observed that a modular platform where the resonators are coupled through the bottom is the simplest method that is accurate and flexible.
## I I. Introduction
Topology opened a gate to realize new mechanical and acoustic systems. In recent years, phononic crystals have grasped the attention for offering inspiring opportunities to manipulate sound in new and unanticipated ways based on topological concepts.
Periodic acoustic systems, such as the topological boundary states based on the analogue of quantum Hall effect[4; 5], the analogue of quantum spin Hall effect[6], the Floquet topological insulator[7], and the valley Hall effect[8] have been successively proposed and experimentally verified. Besides periodic systems, topological systems that lack periodicity can also achieve topological edge states, such as topological quasi-crystals [9; 10; 11] where the topological structure is caused by disorder. The idea of topological acoustic insulators provides new schemes for designing devices with advanced functionalities. For example, the potential improvement of leaky-wave acoustic antennal[13], directional topological acoustic antenna controlling sound for versatile applications[14], specific signal filtering achieved by adding random disorder to clean structures. [15], various sound proof strategies[16; 17; 18], and for a growing discussion of different fractal geometries.[19; 20; 21] In essence, researchers have been and continue to intensely explored various topological acoustic systems, and among these researches, topological acoustic meta-material consisting of numerous phononic crystals is one of the focuses.
In view of the fact that topological insulators are also called symmetry-protected topological phases of matter, these gapped phases have topological properties relying on the presence of symmetries. This result is from a global property of the topological insulator's band structure: local perturbations cannot alter or damage this surface state. Since the properties of topological insulators and their surface states highly depend on the dimension of the material and its symmetries, they can be categorized using the periodic table of topological insulators[12]. From the experimental perspective, it is crucial to satisfy certain structures and to display the symmetry, moreover, preserving the symmetry is equivalent to preserving the topological properties. To preserve symmetry, coupling methods for topological acoustic experiments must be carefully chosen. The following question then arises: in discrete acoustic resonant models, how can the resonators be efficiently coupled to form a desired structure while preserving the symmetry so that it complies with the idea of topological metamaterial?
Some methods preserve the symmetry using double side connection, meaning they added a coupling bridge that is also on the side and made the structure symmetric[22; 23; 24; 25]. The resonators are 3D printed using photosensitive resins or other types of materials as coupling bridges must be fabricated with the resonators which makes as manufacture more complicated and time consuming. On top of that, if the dimension of the coupling bridge has to be changed, everything has to be made from scratch all over again.
In this article, we test an acoustic coupling method to connect the resonators through the bottom to preserve the symmetry and create a simple, flexible and Lego-like platform. We will compare two types of coupling methods throughout the paper: side coupling and bottom coupling(Fig. 1**b**). We will start by showing that bottom coupling preserves symmetry in simple periodic acoustic models while single side coupling fails to do so. Via the simulations of resonant modes of dimer, trimer, pentamer, and a classic Su-Schrieffer-Heeger model (SSH model) consisting of 14 resonators, as well as another SSH model with 28 resonators and a domain boundary. Furthermore, we address the corresponding experimental results of the dimer, the SSH model, and the SSH model with a domain boundary. All experimental results clearly show great agreement to the simulations, proving the effectiveness and simplicity of bottom coupling method.
## II II. Results and Discussion
The dimensions of the resonators that were used throughout the simulations and experiments are shown in (Fig. 1**a,b**).
### Side Connection vs. Bottom Connection
To demonstrate and to compare the effect of side connection and bottom connection for topological acoustic resonators, numerical simulations were done using COMSOL Multiphysics software.
The simulated results is reported in Fig. 1 where the structures of dimer, trimer and pentamer of resonators with both side bridge coupling and bottom bridge coupling (Fig. 1**b**). The band spectrum (Fig. 1**c,d,e**) were generated by sweeping the width of coupling bridge from 1 mm to 20 mm with 1 mm steps and plotting corresponding eigenfrequencies versus widths. The wider coupling bridge means stronger coupling strength. In the band spectrum, the second mode split and the gap became larger when the coupling strength went up. With red dash lines as symmetry axes, it is clear that with side connection, the splitting modes are not symmetrical. Once we moved the coupling bridges to the bottom, the symmetry was restored.
### Dimer experiment
The experiment of dimer coupling was done to confirm the simulation results. The assembly process is shown in Fig. 2**a**, and the experimental set up is shown in Fig. 2**c**. The speaker and the microphone were placed on top of the same resonator to give off and collect sound, respectively. The frequency of the input signal sent to the speaker was swept from 4 kHz to 5 kHz in intervals of 10 Hz. Experimental results of dimer with bottom coupling bridges show a great agreement to the simulations (Fig. 2**b**). The middle of 2 peaks is about 4.3 kHz, same as where the 2 modes start to split in the band spectrum (around 18.5 \(kHz^{2}\)). However, with the side connection, the peaks shifted from the spectrum. Additionally, the height difference in peaks for bottom coupling is noticeably smaller than that of the side coupling (Fig. 2**b**). Comparing with the results from bottom coupling method, the symmetry was distinctly absent for side coupling.
### SSH acoustic model
To further explore how the position of connection between acoustic crystals would influence topological gaps in band spectrum, we started by simulating 3 types of connection for SSH model of 14 resonators. The first type is to couple them with bridges connected through side, the second one is double side coupling bridges, and the third one is bottom connection. Fig. 3**a** shows the top view of SSH coupling bridges. \(r_{1}\) and \(r_{2}\) are the widths of coupling bridges, therefore they are related to the alternate coupling strength. Light blue dash lines indicate where the resonators were placed. The geometries of 3 coupling types are shown in Fig. 3**b, c, d**. The resonant spectrum were generated by sweeping S - half of the difference in widths between strong and weak coupling
Figure 1: **Simulations showing that the bottom connection preserves the symmetry.****a** Dimensions of the resonators and coupling bridges. H = 40 mm, r = 5 mm. The coupling bridge has the width = 5 mm, the length d = 26 mm, the thickness t = 3 mm. **b** The height position of the coupling bridge h1 = 5 mm for side connection, h1 = 0 mm for bottom connection. **c** Dimer structure and the band spectrum of both side and bottom coupling methods. **d** Trimer structure and the band spectrum of both side and bottom coupling methods. **e** Pentamer structure and the band spectrum of both side and bottom coupling methods. It is clear in the band spectrum that in all three cases, the symmetry is preserved when the resonators are connected through bottom. Red dash lines indicates the symmetrical axis.
bridge - from -9 mm to 9 mm. When S equals to zero, all coupling bridges have an identical width of 10 mm.
As one can see in the spectrum (Fig. 3**b**, **c**, **d**, there is no gap for a uniform coupling connection (S = 0 mm, \(r_{1}=r_{2}\) = 10 mm). A bulk spectral gap opens once the connecting channels are set in an alternating strong/weak coupling strength (\(S\neq 0\ mm\)). Furthermore, the bulk spectrum remains symmetric with respect to the middle of the bulk spectral gap (the small deviations are less than 5% when compared with the overall width of the bulk spectrum).
In (Fig. 3**b**), SSH model with bottom connection, one can observe the expected edge resonant modes, whose energies are pinned in the middle of the bulk spectral gap around 18.5 \(kHz^{2}\), where the second mode of resonant frequency of a single resonator is. We can say that SSH model with bottom connection, COMSOL-simulated spectra displays an exact chiral symmetry. In Fig. 3**c**, SSH model with double side coupling bridges gives the same spectra, meaning both types of coupling approaches achieve the goal of protecting the chiral symmetry.
On the other hand, the spectra of SSH model connected through sides does not display the same symmetry. In Fig. 3**d**, not only do the edge modes merge into lower bulk bands, but the middle point of the spectrum shifted to around 2 \(kHz^{2}\), demonstrating the same impracticality as in dimer experiments.
We also checked to see if the height position (h1 in Fig. 1**b**) of side coupling plays a role in preserving the symmetry. The height of side coupling bridge was swept from bottom to top to generate the spectra in Fig. 3**e**, while the coupling width were fixed at \(r_{1}=15\) and \(r_{2}=5\) (S = -5 mm). When h1 = -21.5 and 21.5 mm, the resonators are coupled from the bottom or from the top which resulted in bulk modes symmetric with respect to 18.5 \(kHz^{2}\), and the edge modes appear at middle. Red parts represent when the coupling bridges protrude from the bottom or the top, meaning part of them are outside of the resonators, there are still edge modes existing in the gap but the symmetry shifts further and further away from the dash line. One can see that once the coupling bridges enter the body of resonators, the edge modes merge into bulk and the symmetry disappears.
Next, we did experiments with SSH model connected through bottom to confirm if the symmetry is also seen in the experimental results. Fig. 4**a** shows how the setup was made. The COMSOL-simulated acoustic pressure fields of the edge resonant modes are shown in Fig. 4**b**. The simulated band structure in Fig. 4**c** is added as a reference (same as Fig. 3**b**).
In these measurements, same as dimer experiments, the speaker and the microphone were inserted in the same resonator via two holes open at the top and the speaker's frequency was swept from 4 kHz to 5 kHz and the microphone picked up corresponding signals. The measurements were repeated for all resonators and the collected data was assembled in the local density of states plot shown in Fig. 4**d**. It is worth mentioning here that all resonators are removable and interchangeable, so the one with holes can be placed at any probe position desired for measurements. Panel **e** provides an alternative depiction of the same data. The spectral gap as well as the expected edge and interface modes can be clearly identified and they are well aligned with the simulation in panel **c**.
The density of states reported in Fig. 4**d** was obtained by integrating the local density of states acquired from resonators whose index are the same as the position of
Figure 2: **Dimer experimental setups as well as the comparison of simulation and experiment results.****a** Assembly process of dimer connected through side. The dimer was 3D-printed together with a side coupling bridge, the height position of the top of the side coupling bridge h1 is equal to 7 mm. **b** Assembly process of dimer connected through bottom. Two resonators were 3D-printed separately, and the bottom coupling bridge is grooved in the middle acrylic sheet as depicted. Red boxes in both band spectrum indicate the coupling width (5 mm) used in the experiment. **c** The experimental setup.
the probe. The same instrumentation was used. The measurements were repeated while moving the position of the probe. For each measurement, the frequency was scanned from 4 kHz to 5 kHz in 20 Hz steps.
Finally, we test and compare two coupling methods in a SSH model with a domain boundary(interface), both simulationally and experimentally. A SSH model with a domain boundary separates two topologically distinct SSH insulating phases, with one non-trivial edge(left) and one trivial(right). It is shown in Fig. 5**a** where \(r_{1}=15\) mm and \(r_{2}=5\) mm. Fig. 5**b** and Fig. 5**c** show the band spectrum of side connection and bottom connection, respectively. Red vertical boxes include resonant modes when S = -5 mm. Red and blue stars label where interface mode and edge modes appear, and the acoustic pressure field maps are shown below. Similar to the previous results, with bottom coupling, both simulation and experiment verified that the system contains topological resonant modes at the non-trivial edge as well as the domain boundary as expected, and they are located in the middle of the bulk band gap. In opposition, with side coupling, the edge mode disappears and the interface mode is close to the bulk, besides, it is noticeable that the energy is not as concentrated at interface from the acoustic pressure field distribution.
We also did experiments for bottom-connected SSH model with a domain boundary, the same protocol was applied and frequency was swept from 4 kHz to 5 kHz. The measurements were repeated for all 28 resonators and the collected data was then used to plot the local density of states which is shown in Fig. 5**d**. Panel **e** shows the collapse of the data in panel **d** on the frequency axis. The spectral gap as well as the expected edge modes can be clearly identified and are well aligned with the simulation in panel **c**. The COMSOL-simulated acoustic pressure field maps of the edge resonant modes are shown in Fig. 5**b**, **c**.
## III III. Conclusion
In this paper we investigate acoustic coupling that preserve symmetries through Comsol simulations and experiments. We tested couplings through the bottom, and with bridges at different height through the side. The first test was done on dimmers, then on single SSH set up, as well as one with interface. We observed that the simplest way for such coupling is to connect the resonators through the bottom. Coupling through the side requires a second connection. The advantages of coupling through the bottom is the modular structure of the set up. The resonators and coupling are printed individually, allowing for an easy variation in the coupling strength when the experiment requires. This platform has an exceptional flexibility since the resonators can be stored for further use.
Figure 3: **Band spectrum of different types of connection in SSH model of 14 resonators, sweeping S. S is half of the difference in widths between strong and weak coupling bridge.****a** Top view of SSH coupling bridges. \(r_{1}\) and \(r_{2}\) are related to the alternate coupling strength. Light blue dash lines are where the resonators were placed. **b** SSH model with bottom connection. **c** SSH model with double connection from the sides. **b** and **c** has very similar spectrum. Blue boxes includes edge resonant modes. **d** SSH model with side connection, the edge band merges into bulk. **e** Band spectrum of SSH model sweeping the height of the coupling bridges. Red parts represent when the coupling bridges protrude from the bottom or top. Dark blue dash line marks 18.5 \(k\)Hz[2].
## IV IV. Material and Methods
### Simulation
The simulations reported in all figures were performed with the COMSOL Multiphysics pressure acoustic module. The wave propagation domain shown in Fig. 1 was filled with air with a mass density 1.3 kg/m\({}^{3}\) and the sound's speed was set at 343 m/s, which is appropriate for room temperature. We shall consider the 3D printing UV resin material as hard boundary because of the huge acoustic impedance mismatch comparing with air.
### Experiment
The resonators were 3D-printed using Anycubic Photon 3D printer, which uses UV resin and has 47 um XY-resolution and 10 um Z-resolution. The thickness of the walls is 2 mm, which ensures a high Q factor and justifies the rigid boundaries in the simulations. The inner dimensions of the resonators are shown in Fig. 1a.
A dimer with a coupling bridge on the sides is 3D-printed as a whole. The width, length and the position of the coupling bridge are as labeled in Fig. 1**a**. One side was left open for ethanol rinsing and UV-curing. The dimer was then placed on a base of two layers of acrylic plates (top layer: 2 mm in thickness, bottom layer: 3 mm in thickness) to create a closed space for wave propagating(Fig. 2**a**). The reason that the top layer was 2 mm is to accommodate the side bridge. The bottom connection is achieved by assembling the supporting base, which consists of three layers of 3 mm thick acrylic plates(Fig. 2**b**). The middle layer of a groove is to account for acoustic coupling. The acrylic plates with patterns of the supporting bases were cut by the Boss Laser-1630 Laser Engraver. The nominal tolerance of the laser-cutter is 250 um.
For the SSH model connected from bottom, the same method was utilized. 14 resonators were placed and coupled through the channels with alternating widths grooved in the acrylic plates of the base. These resonators are detachable and interchangeable so that they can be moved around, thus acoustic crystals with different probe positions can be generated, and the resonators can be taken apart, stored and reassembled for new projects or designs.
The protocol for the acoustic measurements shown in Fig. 2, Fig. 4 and Fig. 5 was as follows: Sinusoidal signals of duration 1 s and amplitude of 0.5 V generated by a Rigol DG 1022 function generator were sent out to a speaker placed in a porthole opened on top of a resonator. A dbx RTA-M Reference Microphone with a Phantom Power was inserted in a porthole next to the previous one and was used to acquire the acoustic signals (Fig. 2**c**). The signals were then read by a custom LabVIEW code via National Instruments USB-6122 data acquisition box and the data was stored for graphic renderings.
Figure 4: **SSH experiments demonstrate the simplicity and efficiency of bottom coupling method.****a** The assembling process of SSH model of 14 resonators connected through bottom. **b** Acoustic pressure field distribution for the edge modes marked as red dot in panel c when S = -5 mm. **c** COMSOL simulated SSH model resonant spectrum. The red vertical box indicates S = -5 mm which makes \(r_{1}\) = 15 mm and \(r_{2}\) = 5 mm, the parameters were used in the experiments. **d** Experimentally measured local density of states, assembled from normalized microphone readings from the top of the block resonators. The bright dispersive modes indicates the bulk and edge modes. **e** Collapse on the frequency axis of the intensity plot in **d**. The spectral gap is clearly recognized and the edge modes that show up in the gap are marked with a red star.
## Acknowledgment
The authors acknowledges support from the National Science Foundation, grant CMMI-2131759.
|
2303.11519 | Efficient generation of axial magnetic field by multiple laser beams
with twisted pointing directions | Strong laser-driven magnetic fields are crucial for high-energy-density
physics and laboratory astrophysics research, but generation of axial multi-kT
fields remains a challenge. The difficulty comes from the inability of a
conventional linearly polarized laser beam to induce the required azimuthal
current or, equivalently, angular momentum (AM). We show that several laser
beams can overcome this difficulty. Our three-dimensional kinetic simulations
demonstrate that a twist in their pointing directions {enables them to carry
orbital AM and transfer it to the plasma, thus generating a hot electron
population carrying AM needed to sustain the magnetic field.} The resulting
multi-kT field occupies a volume that is tens of thousands of cubic microns and
it persists on a ps time scale. The mechanism can be realized for a wide range
of laser intensities and pulse durations. Our scheme is well-suited for
implementation using {multi-kJ PW-class lasers, because, by design, they have
multiple beamlets and because the scheme requires only linear-polarization. | Yin Shi, Alexey Arefiev, Jue Xuan Hao, Jian Zheng | 2023-03-21T00:42:58Z | http://arxiv.org/abs/2303.11519v1 | # Efficient generation of axial magnetic field
###### Abstract
Strong laser-driven magnetic fields are crucial for high-energy-density physics and laboratory astrophysics research, but generation of axial multi-kT fields remains a challenge. The difficulty comes from the inability of a conventional linearly polarized laser beam to induce the required azimuthal current or, equivalently, angular momentum (AM). We show that several laser beams can overcome this difficulty. Our three-dimensional kinetic simulations demonstrate that a twist in their pointing directions enables them to carry orbital AM and transfer it to the plasma, thus generating a hot electron population carrying AM needed to sustain the magnetic field. The resulting multi-kT field occupies a volume that is tens of thousands of cubic microns and it persists on a ps time scale. The mechanism can be realized for a wide range of laser intensities and pulse durations. Our scheme is well-suited for implementation using multi-kJ PW-class lasers, because, by design, they have multiple beamlets and because the scheme requires only linear-polarization.
Recently, magnetic field effects in high energy density physics (HEDP) have attracted significant interest [1, 2, 3]. These can range from guiding of relativistic electron beams [4] to affecting the shape of inertial fusion implosions [5]. Despite significant progress, generation of sufficiently strong and controllable macroscopic fields at the laser facilities used for HEDP research [6, 7, 8, 9, 10, 11] remains an outstanding challenge.
Various approaches to magnetic field generation using high-power lasers have been explored in search of an optimal mechanism and field configuration. Initial efforts were focused on leveraging a circularly polarized (CP) laser beam to generate an axial quasi-static plasma magnetic field [12, 13, 14]. The emergence of capabilities to create Laguerre-Gaussian (LG) high-intensity beams has stimulated research into generation of the axial field using such beams as well [15, 16, 17, 18]. The strength of the plasma field is limited by the laser's ability to drive a strong azimuthal current, so it is insightful to interpret the process as a transfer of the laser's angular momentum \(\mathbf{L}\) to the plasma. Here \(\mathbf{L}=\varepsilon_{0}\int\mathbf{r}\times\left[\mathbf{E}\times\mathbf{B}\right]d^{3}\mathbf{r}\), where \(\varepsilon_{0}\) is the dielectric permittivity, \(\mathbf{E}\) and \(\mathbf{B}\) are the electric and magnetic fields, respectively.
Setups involving conventional linearly polarized (LP) laser beams have also received attention, because additional optics is required to make CP or LG beams from the conventional beams. A large-scale uniform magnetic fields can be created by a ns laser irradiating a capacitor-coil [3, 19, 20, 21] or a snail target [22]. This field can then be amplified inside a plasma by a high intensity ps or sub-ps laser pulse [23, 24, 25, 26]. Relativistic electrons generated by high-intensity laser pulses can also generate surface or bulk azimuthal magnetic fields when streaming through a solid density target [27, 28, 29, 30], and these fields are beneficial for hot electron transport and electron beam collimation [13, 14, 31, 32, 33, 34, 35, 36]. Applications for longitudinal fields include guiding of relativistic electron beams [37, 3, 19], laser-driven ion acceleration [38, 39], magnetized atomic physics [40, 3], and laboratory astrophysics [41].
Generation of a large-volume strong magnetic field requires significant energy that must be delivered by the laser. Multi-kJ PW-class laser systems like LFEX [9], NIF ARC [10], and Petal [11] offer the highest energy that can be delivered on a ps time scale. These lasers are all composed of multiple LP beamlets. The multi-beamlet configuration is not just an essential feature of the laser system design,
but also the key to advanced laser-plasma interaction regimes [42]. The number of multi-beamlet facilities will increase, as SG-II UP [8] is due to be upgraded to have multiple kJ-class ps laser beams.
This Letter presents a new multi-beam approach for efficient laser-to-plasma angular momentum (AM) transfer resulting in magnetic field generation. The approach, illustrated in Fig. 1 for four linearly-polarized Gaussian beams, is motivated by the capability of multi-kJ PW-class laser systems to provide multiple beamlets [9, 10, 43, 11]. Our scheme eliminates the need for CP or LG beams while offering a method for generating a field above 10 kT in a \(10^{4}\)\(\upmu\)m\({}^{3}\) volume. Our scheme provides a plasma that can potentially be used for studies of astrophysical objects involving strong magnetic fields beyond the dynamic range of previous laboratory settings [44, 45] and to mimic a rotating plasma environment in astrophysics [46, 47].
The role of the twist in the pointing direction, Fig. 1(a), can be illustrated using geometrical optics. Each laser beam is represented by a ray directed along the wave vector \(\mathbf{k}_{i}\), where \(i\) is the index numbering the beam. The photon momentum in the \(i\)-th beam is \(\mathbf{p}_{i}=\hbar\mathbf{k}_{i}\). Consider a pair of tilted rays, \(\mathbf{k}_{1,2}=(k_{x},k_{\perp}^{(1,2)},0)\), that intersect the \((y,z)\)-plane at \(z_{1,2}=\pm D_{f}/2\) and \(y_{1,2}=0\), where \(D_{f}/2\) is the beam offset. The axial AM of a photon is \([\mathbf{r}\times\mathbf{p}]_{x}\), so the total AM of the two beams is \(L_{x}\approx-N\hbar(k_{\perp}^{(1)}-k_{\perp}^{2})D_{f}/2\), where \(N\) is the number of photons in each beam. The AM can be doubled by adding two rays offset in \(y\). The rays appear twisted, so it is appropriate to refer to the calculated AM as orbital angular momentum (OAM). They carry OAM, a distinct form of AM, even though each beam has no intrinsic AM [48]. There are parallels to \(\gamma\)-ray beams carrying OAM [49, 50, 51] composed of photons with a twisted distribution of \(\mathbf{p}\).
To investigate the transfer of the OAM carried by four laser beams, we have performed a series of three-dimensional (3D) particle-in-cell (PIC) simulations using a relativistic PIC code EPOCH [52]. Each beam is a linearly polarized Gaussian beam. The duration of each pulse is 450 fs and the peak intensity is \(2.1\times 10^{20}\) Wcm\({}^{-2}\). Our target is a flat foil with sub-wavelength diameter nanowires whose purpose is to increases the interaction volume between the laser beams and the plasma produced by the target and thus enhances the number of hot electrons [53]. The front and rear surfaces of the foil are located at \(x_{f1}=0\)\(\upmu\)m and \(x_{f2}=4\)\(\upmu\)m. The spacing between the wires is 2 \(\upmu\)m, the wire length is 5 \(\upmu\)m, and the wire width is 0.4 \(\upmu\)m. The entire target is initialized as a fully ionized cold carbon plasma with an electron density of \(50n_{c}\), where \(n_{c}=1.8\times 10^{21}\) cm\({}^{-3}\) is the critical density corresponding to a laser wavelength \(\lambda=0.8\)\(\upmu\)m. All simulation parameters are listed in the Supplemental Material.
The orientation of the four beams is set according to Fig. 1. Their axes intersect a given plane perpendicular to the \(x\)-axis with the intersection points forming vertices of a square. We use two planes: the emitter plane (\(x_{e}=-20\)\(\upmu\)m), which is the left boundary of the simulation box, and the focus plane (\(x_{f}=-16\)\(\upmu\)m), which is the plane where the beams have the smallest transverse size. The twist is set by angle \(\theta\). There is no twist for \(\theta=0\), so that the axis of each beam and the \(x\)-axis form a plane. We use \(\phi=\arctan(-D_{p}/S)\) to set the beam convergence, where \(S\) is the distance between the emitter plane and the focus plane and \(D_{p}\) is the transverse shift of the beam axes between the two planes, as shown in Fig. 1(b). We use \(\phi=-0.27\pi\) in all simulations.
Figure 1(c) shows the magnetic field (B-field) for \(\theta=-0.28\pi\) at \(t=20\) fs. We define \(t=0\) fs as the time when the laser pulses leave the simulation box. The laser-plasma interaction takes place at \(t\in(-510,-60)\) fs. The longitudinal B-field exceeds 10 kT. The volume is around \(10^{4}\)\(\upmu\)m\({}^{3}\). The three surfaces show \(B_{x}/B_{0}=-0.1\), \(-0.2\), and \(-0.8\), where \(B_{0}=13.4\) kT. Note that \(B_{0}\equiv 2\pi m_{e}c/|e|\lambda_{L}\), where \(\lambda_{L}=0.8\)\(\upmu\)m is laser wavelength in vacuum, \(c\) is the speed of light, and \(e\) and \(m_{e}\) are the electron charge
Figure 1: (a) Setup for axial magnetic field generation using four linearly-polarized Gaussian laser beams with twisted pointing directions, shown with solid lines, and a structured target. The size of each beam is shown with a color-coded ellipse in the emitter plane (left side of the simulation box) and in the focus plane. (b) Projections of the two planes on to the \((y,z)\)-plane. The parameters setting up the beam orientation are defined in the text. (c) Surface plots of the axial magnetic field \(B_{x}\) after the lasers have left the simulation box (\(t=20\) fs). The green, blue, and red surfaces represent \(B_{x}/B_{0}=-0.1\), \(-0.2\), and \(-0.8\), where \(B_{0}=13.4\) kT.
and mass. To make its profile more clear, \(B_{x}\) is averaged temporally over a 20 fs interval and spatially using a box with stencil size \(0.4\)\(\upmu\)m\(\times 0.4\)\(\upmu\)m\(\times 0.4\)\(\upmu\)m.
Owing to the approximately axisymmetric profile of \(B_{x}\), we can examine its 2D distributions in Fig. 2 without missing too much information. Figure 2(a) shows the global distribution in the \((x,y)\)-plane. The nanowires are between \(x_{wire}=-5\)\(\upmu\)m and \(x_{f1}=0\)\(\upmu\)m. The foil is between \(x_{f1}=0\)\(\upmu\)m and \(x_{f2}=4\)\(\upmu\)m. Figures 2(b)&(c) show \(B_{x}\), averaged over the azimuthal angle, as a function of \(x\) and \(r\) in front of and behind the target (note the different color-scale ranges). We find that \(|B_{x}|\) can be as high as \(1.5B_{0}\) in front of the wires because the lasers generate a higher concentration of hot electrons in front of the foil. Reaching this amplitude is noteworthy because new phenomena of laser beam transport through a plasma can arise at \(|B_{x}|\gtrsim B_{0}\)[54; 55]. Even though \(B_{x}\) is weaker behind the target, it is in the range of kT. This confirms that our scheme indeed produces electrons carrying AM, as the lasers are unable to reach behind the target to generate the B-field locally.
Figure 2(d) shows the time evolution of the average magnetic field strength \(\langle B_{x}\rangle\) in a box with \(-15\)\(\upmu\)m \(<x<-5\)\(\upmu\)m, \(|y|<5\)\(\upmu\)m, and \(|z|<5\)\(\upmu\)m. The ps time scale is comparable to that in Ref. [56], but the region containing the magnetic field moves axially outward (away from the target). In terms of the energy content within a region with \(|y|\), \(|z|<15\)\(\upmu\)m, we find that the energy in the magnetic field (\(\varepsilon_{B}=\int B_{x}^{2}/(2\mu_{0})dV\approx 3.0\) J) is much smaller than the kinetic energy of electrons (\(\varepsilon_{e}\approx 40.0\) J). The energy of the four beams is \(\varepsilon_{laser}\approx 580\) J. The energy conversion efficiency from laser to hot electrons and from hot electrons to the magnetic field are both around 10%. The overall conversion efficiency is two orders of magnitude higher than that for a laser-driven coil in Ref. [2].
To confirm that the twist angle \(\theta\) rather than the polarization is the key parameter, we performed a simulation without the twist (\(\theta=0\)) and a simulation with an opposite twist to the original direction (\(\theta=0.28\pi\)). We found that no axial magnetic field is generated without the twist and that \(B_{x}\) reverses its direction when we reverse the twist. The azimuthal B-field is generated in all three cases due to the ubiquitous axial current driven by the laser pulses. We also performed a simulation with the original setup but randomly selected direction of the E-field polarization in each laser beam. The angle-averaged \(B_{x}\) is similar to the \(B_{x}\) in Fig. 2, confirming that laser polarization has only a secondary effect on the magnetic field generation in our setup.
In the remainder of this letter we focus on the region in front of the target. We start with an analysis of the azimuthal current density \(j_{\theta}\) that is thought to be responsible for the axial magnetic field generation. Figure 3(a) shows \(j_{\theta}\), averaged over the azimuthal angle, in the \((x,r)\)-plane at \(t=20\) fs. The direction of \(j_{\theta}\) alternates in the nanowire region [\(x\in(-5,0)\)\(\upmu\)m]. The underlying cause is the presence of strong nonuniformities in the ion density associated with the original nanowires.
Transverse distributions of \(j_{\theta}\) in the \((y,z)\)-plane at different \(x\) positions (\(x_{b}\), \(x_{c}\), \(x_{d}\)) are shown in Fig. 3(b-d). These positions are marked by dashed
Figure 3: (a) Angle-averaged azimuthal current density \(j_{\theta}\) at \(t=20\) fs as a function of \(x\) and \(r\). (b), (c), and (d) \(j_{\theta}\) in the \((y,z)\)-plane at three different locations with \(x=x_{b}\), \(x_{c}\), and \(x_{d}\). Note the significant difference in color-scales between the three panels introduced to improve visibility. The current density is normalized to \(j_{0}=-|e|cn_{c}=-8.25\times 10^{16}\) A/m\({}^{2}\).
Figure 2: (a) Axial magnetic field in the \((x,y)\)-plane at \(t=20\) fs. (b)&(c) Angle-averaged axial magnetic field as a function of \(x\) and \(r\) at \(t=20\) fs. The nanowire region is at \(x_{wire}<x<x_{f1}\). The foil is at \(x_{f1}\leq x\leq x_{f2}\). (d) Time evolution of the volume-averaged magnetic field within a box with \(-15\)\(\upmu\)m \(<x<x_{wire}\), \(|y|<5\)\(\upmu\)m, and \(|z|<5\)\(\upmu\)m.
lines in Fig. 3(a). In agreement with Fig. 3(a), \(|j_{\theta}|\) is the biggest at \(x_{d}\) and the smallest at \(x_{b}\). To perform an order of magnitude estimation for the maximum value of \(|B_{x}|\), we assume that \(j_{\theta}\) is uniform inside a cylinder of radius \(R\) and length \(\Delta x\). Then the Biot-Savart law [57] yields
\[\max|B_{x}| \approx \frac{\mu_{0}}{2}\int_{0}^{R}\int_{-\Delta x}^{\Delta x}\frac{|j_ {\theta}|r^{2}\mathrm{d}x\mathrm{d}r}{(r^{2}+x^{2})^{3/2}} \tag{1}\] \[= \mu_{0}|j_{\theta}|\Delta x\,\mathrm{arsinh}(R/\Delta x),\]
where \(\mu_{0}=1.26\times 10^{-6}\) H/m is permeability in vacuum. According to Fig. 3(a), we can set \(R\approx\Delta x\approx 5\) um. In Fig. 3(d), the current density reaches \(|j_{\theta}|\approx 0.05|j_{0}|\), where \(j_{0}\equiv-|e|cn_{c}\). Using this value, we obtain \(\max|B_{x}|\sim 20\) kT, which is close to the peak magnetic field, \(B_{x}\sim 1.5B_{0}\), in Fig. 2(b).
To quantify the rotating effect of the plasma, we computed the density of the axial AM for electrons and ions. Due to the significant difference in mass, the ratio of the axial AM absorption between electrons and ions is \(\eta_{ei}\approx 0.01\). We can estimate the AM density of hot electrons using the azimuthal current density. We write the AM density of electrons as \(L_{xe}\approx r\gamma_{a}m_{e}n_{e}v_{\theta}\), where \(\gamma_{a}\) is the relativistic gamma-factor, \(n_{e}\) is the number density, and \(v_{\theta}\) is the effective azimuthal velocity. We set \(v_{\theta}\approx-j_{\theta}/|e|n_{e}\) to find that \(L_{xe}\approx r\gamma_{a}m_{e}j_{\theta}/|e|\). For \(r=w_{0}\) and \(\gamma_{a}\approx\sqrt{1+a_{0}^{2}}\approx 10\), we have \(L_{xe}\approx 1.6\) kg/m-s. Using the electron density from simulations, \(n_{e}\approx 10^{27}\) m\({}^{-3}\), we find that the rotating velocity is around \(v_{\theta}\approx 0.1c\). Our setup produces a rotating plasma environment with electron density and rotation velocity two orders of magnitude higher than an LG beam in [16].
The OAM transfer from the laser beams to the electrons can be determined using the conservation of AM [12, 15, 56]. The OAM of absorbed laser photons is transferred to electrons and ions, with the electron fraction equal to \(\eta_{ei}\). Then, based on the photon absorption, the axial AM density of the electrons is roughly
\[L_{ex}(x,r) \approx \frac{\eta\eta_{ei}}{x_{e}}\frac{0.75\tau_{g}I_{0}}{c}\sin(\phi) \sin(\theta)D_{xr},\] \[D_{xr} = re^{x/x_{e}-2(r-D_{f}/2)^{2}/w_{0}^{2}}, \tag{2}\]
where \(I_{0}\) is the peak intensity of the incident laser pulses and \(\tau_{g}\) is their duration. For simplicity, we assume that the absorption coefficient \(f_{abs}\) of the laser intensity over the axial distance is \(f_{abs}=f_{0}\exp(x/x_{e})\). We find from the simulation that \(\eta=\int_{x=x_{e}}^{x=0}f_{abs}dx\approx x_{e}f_{0}\approx 0.1\) (\(x_{e}\approx 3\) um). We use \(r=D_{f}/2=6\) um and \(x=0\) um to find the peak AM density of the electrons, \(L_{ex}\approx 2.4\) kg/m-s. This result is on the same scale as the peak AM density (\(\approx 6.9\) kg/m-s) in our simulations. It is also close to the result (\(\approx 1.6\) kg/m-s) calculated using \(j_{\theta}\) in Fig. 3. The peak AM density in the simulation exceeds our model's prediction, which may be due to the locally positive AM density in the nanowire region. According to Eq. (2), the axial B-field can be controlled by changing the sign of twist angle \(\theta\), which has been confirmed in the Supplemental Material. Our model ignores the dependence of \(f_{abs}\) on such parameters like \(I_{0}\), \(\phi\), and \(\theta\), but the actual absorption mechanism may be more complex [58, 59, 60]. Using \(L_{ex}\), we can obtain the azimuthal current density and the associated axial B-field,
\[B_{x}\propto\frac{j_{\theta}(x,r)}{j_{0}}\propto(\eta\eta_{ei})\frac{a_{0}^{2 }}{\gamma_{a}}\frac{c\tau_{g}}{x_{e}}\frac{D_{xr}}{r}\sin(\phi)\sin(\theta). \tag{3}\]
To investigate the robustness of this mechanism to the choice of laser parameters, we perform scans over laser peak intensity \(I_{0}\) and pulse duration \(\tau_{g}\). The dependence of the volume-averaged longitudinal field on \(I_{0}\), shown in Fig. 4 with asterisk markers, matches well the dependence given by Eq. (3) and shown with the blue dashed line. The blue dashed line is \(|\langle B_{x}\rangle|[\mathrm{kT}]=0.85a_{0}^{2}/\sqrt{1+a_{0}^{2}}\). Even at \(I_{0}\approx 3\times 10^{19}\)W/cm\({}^{2}\) (\(a_{0}=4\)), the axial magnetic field strength can be as high as 5 kT. The pulse duration scan, shown in Fig. 4 with square markers, is performed for a fixed peak intensity of \(I_{0}\approx 2.1\times 10^{20}\)W/cm\({}^{2}\) (\(a_{0}=10\)). The red dashed line, \(|\langle B_{x}\rangle|\) [kT] = 0.024\(\tau_{g}\)[fs], has the same dependence on \(\tau_{g}\) as that given by Eq. (3). The laser pulse duration is believed to affect the number of hot electrons and, as a result, the magnetic field generation. For \(\tau_{g}\) as small as 30 fs, we can still get a volume-averaged magnetic field of 1.3 kT. Additional simulations with a laser wavelength of 1.053 um produce similar results, confirming that our scheme is applicable to both Ti:Sa and neodymium-based lasers.
In summary, we have demonstrated via 3D kinetic simulations a novel mechanism for generating a multi-kT axial magnetic field using multiple reg
Figure 4: Volume-averaged magnetic field \(|\langle B_{x}\rangle|\) as a function of peak laser intensity \(I_{0}\) (blue asterisk markers) and laser pulse duration \(\tau_{g}\) (red square markers). The averaging is performed within a box with \(-15\) μm \(<x<x_{wire}\), \(|y|<5\) μm, and \(|z|<5\) μm. The dashed curves (blue and red) show the fits based on Eq. (3).
ular laser pulses. The twist in the pointing direction of the pulses is the key to driving an azimuthal plasma current that sustains the magnetic field. The twist angle is a convenient control knob for adjusting the direction and magnitude of the axial magnetic field. The field occupies a volume that is tens of thousands of cubic microns and it persists on a ps time scale. The mechanism can be realized for a wide range of laser intensities and pulse durations. Our scheme requires just regular linearly-polarized laser beams, which makes it suitable for implementation at existing and future multi-kJ PW-class laser facilities that, by design, have to have multiple beamlets [9, 10, 11, 42, 61], including the SG-II UP facility [8] that is expected to have multiple kJ-class ps pulses in the near future.
## Acknowledgements
Y S acknowledges the support by USTC Research Funds of the Double First-Class Initiative, Strategic Priority Research Program of CAS (Grant No. XDA25010200), CAS Project for Young Scientists in Basic Research (Grant No. YSBR060) and Newton International Fellows Alumni follow-on funding. Y S also acknowledge Rui Yan and Robert Kingham for enthusiastic discussions. A. Arefiev's research was supported under the National Science Foundation-Czech Science Foundation partnership by NSF Award No. PHY-2206777. Simulations were performed with EPOCH (developed under UK EPSRC Grants EP/G054950/1, EP/G056803/1, EP/G055165/1, and EP/M022463/1). The computational center of USTC and Hefei Advanced Computing Center are acknowledged for computational support.
|
2301.07502 | Multimodal Side-Tuning for Document Classification | In this paper, we propose to exploit the side-tuning framework for multimodal
document classification. Side-tuning is a methodology for network adaptation
recently introduced to solve some of the problems related to previous
approaches. Thanks to this technique it is actually possible to overcome model
rigidity and catastrophic forgetting of transfer learning by fine-tuning. The
proposed solution uses off-the-shelf deep learning architectures leveraging the
side-tuning framework to combine a base model with a tandem of two side
networks. We show that side-tuning can be successfully employed also when
different data sources are considered, e.g. text and images in document
classification. The experimental results show that this approach pushes further
the limit for document classification accuracy with respect to the state of the
art. | Stefano Pio Zingaro, Giuseppe Lisanti, Maurizio Gabbrielli | 2023-01-16T11:08:03Z | http://arxiv.org/abs/2301.07502v2 | # Multimodal
###### Abstract
In this paper, we propose to exploit the side-tuning framework for multimodal document classification. Side-tuning is a methodology for network adaptation recently introduced to solve some of the problems related to previous approaches. Thanks to this technique it is actually possible to overcome model rigidity and catastrophic forgetting of transfer learning by fine-tuning. The proposed solution uses off-the-shelf deep learning architectures leveraging the side-tuning framework to combine a base model with a tandem of two side networks. We show that side-tuning can be successfully employed also when different data sources are considered, e.g. text and images in document classification. The experimental results show that this approach pushes further the limit for document classification accuracy with respect to the state of the art.
## 1 Introduction
Notwithstanding the many technological advances in computer vision and artificial intelligence, which are contributing to the "digital transformation" of many companies and industrial processes, there still exist a surprising number of tasks which are almost completely carried out by humans. In particular, many tasks in different industries, from administrative procedures to archival of old manuscripts, involve the human elaboration of a huge number of paper documents, with consequent high costs for the companies and, ultimately, for their clients. There are two main reasons for this situation: one is deeply connected to the internal rules and processes of some companies, banks in particular, which have an important number of legacy procedures and have big inertia for innovation. The second reason, that we consider in this paper, is the lack of completely satisfactory (automatic) tools for document classification, especially when documents contain different source of information such as text, images, and handwritten parts. While some paper documents could be replaced by electronic means, one cannot eliminate paper documentation, hence efficient and trustworthy tools for document classification are essential.
As we discuss in the next section, document classification has been widely investigated and methods can be roughly divided into three categories: those that are based on the textual content of the document, often obtained from _Optical Character Recognition_ (OCR), those based on the visual structure of the image, and multimodal methods that use both text and image. The latter family of solutions [1, 2, 3, 4, 5, 6, 7, 8] have provided significant advances, yet dealing with both textual and visual content in full generality remains an open problem [8].
In this paper, we tackle the challenge by exploiting _side-tuning_[9] -- a recent methodology for network adaptation -- in multimodal document classification. In general, network adaptation is a common technique that allows updating the weights of a pre-trained model on a different task. This technique is opposed to training from scratch and allows, among other benefits, a faster convergence. However, existing adaptation solutions may suffer from catastrophic forgetting that is, the tendency of a network to abruptly lose previous knowledge when learning new information. Side-tuning [9] addresses the problem of adaptation by using a second network whose weights are never updated, so as to preserve the classification capability of the original task. The output of the base network and the side network are then merged into a specific layer. The fusion takes place using an appropriate sum operation of the single outputs1. Similarly to other additive learning approaches, side-tuning does not change the base model, rather it adapts it to a new target task by adding new parameters. However, differently from other approaches, side-tuning does not impose any constraints on the structure of the side network, whose complexity can be scaled to the difficulty of the problem at hand, thus allowing also tiny networks when the base requires minor updates. This provides an extreme flexibility of the model and it is one of the reasons for its good results.
Footnote 1: Several notions of summation can be used, details can be found in [9].
Our research idea is to exploit side-tuning also in the field of multimodal document classification, based on the intuition that this enhanced flexibility could allow one to precisely tune the model on different sources (i.e., textual and visual), while avoiding catastrophic forgetting and model rigidity. We implement our idea by proposing a new method for multimodal learning with a deep neural network model, more precisely we present a side-tuned architecture that uses off-the-shelf networks and consists of one base model with a tandem of two side networks. Our experimental results show that this architecture is effective in common document classification scenarios and pushes further the limit for document classification accuracy.
The remaining of the paper is organized as follows, Section 2 reviews related work and discuss the contributions of our solution. Section 3 explains the methodology and provides details concerning the model implementation. In Section 4, we provide the results of the experimental procedure used to assess the model validity and compare those results with previous works discussing the implementation choices. Finally, Section 5 summarizes the contributions and addresses some future directions for the presented work.
## 2 Related Work
Document classification has been widely investigated and several solutions have been proposed over the years. These solutions can be categorized considering whether they analyze the textual content of a document, its visual structure or both. A complete analysis on text classification methods before the rise of deep learning solutions can be found in [10]. Recently, Kim [11] proposed to use Convolutional Neural Networks (CNNs) on top of a pre-trained embedding to perform sentence classification providing an effective and portable solution that has been widely used in many subsequent work [11, 12, 13, 14, 15]. In [16] the authors give a thorough review on pre-trained models for natural language processing.
In the past, classification of a document based on its visual content has always been addressed with the design of hand-crafted features. These features were used to extract
meaningful information about the image content or the document structure and then used as input to classic machine learning techniques for classification. A thorough analysis of these solutions can be found in the survey by Chen and Blostein [17]. However, the recent advances in document image classification have been mostly led by solutions exploiting CNNs [18, 19, 20, 21, 22, 23, 24, 25]. Kang et al [18] proposed the first solution based on CNNs for document images classification. They designed a shallow architecture composed by two convolutional layers, max pooling and two fully connected layers, with ReLU activations and dropout regularization. The network was trained from scratch and the final results showed the superior performance of CNNs compared to classic solutions [17]. The solutions proposed in [19, 20] demonstrated that it is possible to further improve this performance by exploiting transfer learning. In both articles, the authors successfully fine-tuned a state-of-the-art architecture, such as AlexNet [26] (previously trained on ImageNet [27]), to recognize the document type. Successively, the authors of [22] performed a thorough analysis on how different image pre-processing steps and architecture hyper-parameters may affect the final classification performance. They performed several tests, training each networks from scratch, and obtained results comparable to the previous solutions. In [23] several state-of-the-art very deep architectures, such as VGG16, GoogLeNet and ResNet-50 have been trained and/or fine-tuned for recognizing document images, achieving a huge boost in performance. Differently from the previous approaches, the solution in [24] exploited pre-trained CNNs just to extract the features from document images and then used extreme learning machines (ELMs) for classification. The solution in [25] performed two steps of fine-tuning. In particular, given a pre-trained VGG16 architecture, a first fine-tuning is performed exploiting the whole visual content of document images. Then a second transfer learning is performed on specific image regions. Finally, the results is obtained as the combination of the predictions from all these neural network models.
Several papers proposed to combine both textual and visual features for documents classification [1, 2, 3, 4, 5, 6, 7, 8]. The method in [1] combined bag-of-words and bag-of-visual-words representations exploiting SVM and a late fusion scheme. Similarly, in [2] the authors used a bag-of-words representation with latent semantic analysis for the text and the visual descriptor from [28] for images. Different classifiers with both early and late fusion schemes have been used to combine the text and visual features in order to correctly classify a page stream. In [3] the document was first processed by an OCR. Successively, the extracted words were highlighted in the original document image through colored bounding boxes, following a ranking algorithm. These newly generated images were used to train a CNN for classification. The solution proposed in [6] tested two different fusion schemes, in particular, a spatial fusion and a features fusion scheme. In the spatial fusion, text and images are concatenated and given as input to a VGG16 network for training. Whereas, in the features fusion, the image feature obtained from a VGG16 network and text feature obtained through a text ensemble network are stacked and fed to a fully connected layer for classification. Similarly, the authors of [7, 8] proposed two solutions, which differ mainly in the embedding used for text and the CNN architectures used for images, InceptionV3 and MobileNetV2 respectively. As in [6], the features extracted from text and image networks are concatenated and fed to a fully connected layer for classification. Both architectures have been trained end-to-end and have achieved state-of-the-art performance.
### Contributions
Differently from the previous approaches, we combine incremental learning and multimodal features training to jointly learn from both representations, visual and textual. The resulting model presents great flexibility and keeps high performance when used on both small and large datasets. To the best of our knowledge, our approach is the first that successfully attempts to apply side-tuning by using different sources of input during training. We thoroughly evaluate our approach on two publicly available datasets [19, 29] and two different deep learning architectures in order to assess the validity of the proposed model. The final model performance is competitive with state-of-the-art solutions on both datasets.
## 3 Methodology
In this section, we provide the details of a multimodal document classification model that takes advantage of side-tuning to properly combine visual and textual features. In the side-tuning framework, architectural elements are combined to produce a new representation of the target [9]. A side-tuning architecture generally presents a base model with fixed weights and a side model whose weights are unlocked to allow updating. In principle, different architectures can be selected for the base and side models to allow modularity of the components. For example, the authors of [9] use the concept of knowledge distillation for neural networks [30] to properly initialize the weights of the side component architecture.
In the implementation discussed in this paper, the base model consists of a Convolutional Neural Network (CNN) for image classification, pre-trained on the ImageNet dataset. The side component presents two different networks: the first one is identical to the base model but with unlocked weights to allow update during training, while the second network is a CNN for text classification. In defining the final model, we can rely on two strategies. The first involves the distillation of a network, while the second uses networks as they are. We choose the latter and select small network architectures so that we do not have to compress the model for image classification.
In the remaining of the section we provide the networks details of the baseline models for both images and text and then we describe the multimodal combination process.
### Model for visual features
Deep Convolutional Neural Networks (DCNN) have proven to be effective when pre-trained on large dataset and successively fine-tuned for a different task using a smaller set of data [31]. We considered two DCNNs pre-trained on the ImageNet dataset as the reference architectures for the document image classification. As a first attempt in the definition of the model, we choose the MobileNetV2 [32] neural network. The MobileNet networks family has been originally designed to exploit Deep Learning on resource-constrained devices. Its relatively simple architecture presents a smaller number of trainable parameters (about 3.5M) and yet it achieves competitive classification performance with respect to more complex and resource-greedy models [33, 34, 26, 35]. The learning process of MobileNetV2 is based on the principle of learning residuals and uses the combination of expansion levels and bottleneck blocks to effectively encode the image features. Despite of the specific reasons to select the
MobileNetV2 architecture, we have also considered the ResNet50 model. In principle, we could have employed any other popular DCNN, e.g. VGG16, InceptionV3, to accomplish the image classification task.
We pre-process the network input by resizing the image to \(384\times 384\) and by replicating the grayscale to respect the original network input that is, three channels RGB images. As a consequence of the adoption of the ImageNet pre-trained model, we centre the input by applying standardization using mean and variance values from the training dataset.
### Model for textual features
The classification of documents from scans presenting hybrid text/image content involves the creation of a corpus including the textual version of each input image. The corpus should then be coded in an appropriate format using, for instance, an approach similar to the _word2vec_ model [36]. It is appropriate to carefully select the specific model to be used for words vectorization since different implementation strategies could affect the quality of learning. Such choices comprise the measure for calculating the similarity distance between the words or the method for the vectors initialization. The analysis of corpora generation strategies lays beyond the scope of this work, nevertheless, previous work addressed that this procedure is key to obtain good results [8]. Furthermore, since this problem has already been addressed in the reference literature, text versions of the datasets considered in this work already exist: _QS-OCR-Small_ for _Tobacco3482_ and _QS-OCR-large_ for _RVL-CDIP_[8], obtained using the _Tesseract_ OCR 4.0 engine, which is based on LSTM [37].
In Natural Language Processing (NLP) practice, vector form encoding involves a tokenization procedure followed by the creation of a lookup table that associates a unique numeric identifier to each word in the resulting vocabulary. This embedding procedure aims to represent a text with a real-valued vector of numbers that is used in an end-to-end
Figure 1: Multimodal side-tuning classifier for hybrid text and image classification. Base model (a) and side model (b) reflects the same MobileNetV2 architecture, while (c) is a CNN inspired from sequence text classification task. The final merge architecture combines the output of the three networks into one new encoding as shown in (d).
training to learn similarities among different words. In our case, the tokenization is carried out separating words by white spaces without ignoring punctuation, nor removing digits or OCR-produced artifacts. This way we aim at exploiting, on the one hand, the OCR "noise" as a regularization factor for the training procedure and, on the other hand, the consistency of the OCR to recognize similar patterns.
Similarly to the image case, text classification also benefits of weights initialization from a large corpora of pre-trained models. In fact, the creation of the lookup table can be replaced with an already existing vocabulary, which contains information on words similarity previously computed with proper distance measurements, e.g. Levenshtein in the case of _GloVe_[38] and _ELMo_[39]. Therefore, we choose a pre-trained model that contains embeddings for each word of our corpus, we combine all the vectors representing the words of a single text document, and we 0-pad the encodings which contain less than 500 words, as in [8]. Considering the characteristics described, we select _FastText_[12] among the models in the literature. FastText is pre-trained on the _Common Crawl_ dataset [14] and generates embeddings of \(k=300\) real values per word. Remarkably, it is able to encode every token in the datasets considered in this work. Indeed, we believe that avoiding models with _Out-Of-Vocabulary_ (OOV) words is crucial to exploit the embeddings in the procedure.
We carry out the baseline training for the text classification model with a simple architecture (about 1.8M parameters) inspired by a CNN for sentence classification [11]. The network consists of three convolutional layers of dimension \(h\times k\), each starting from the same input and acting in parallel. The convolutional layers use a window size of \(h=3,4,5\) words, no padding and a stride of 1. Each layer has 512 filters, uses _ReLU_ activation function and a resizing step with one-dimensional _max-pooling_. The resulting tensors are concatenated and fed to a classification layer with Softmax activation. We also apply a dropout regularization with a fixed probability of 0.5. As in the case of the model for visual features, we could have chosen any other off-the-shelf architecture.
### Combined model
To benefit from both representations we choose to combine image and text in a single, new, encoding. In our setup, we use a network with locked weights and a side model, which is composed of the two architectures described in Subsections 3.1 and 3.2 without the final classification layer. The base and side networks that take the image as input are pre-trained on ImageNet, while the weights of the side network for text classification are randomly initialized. The combination of the three encodings can be addressed with different methods, of which we list the two most significant. First, we can concatenate the outputs, delegating the task of selecting the most significant weights to the fusion network. Second, we can linearly combine the encodings so as to align the feature space and select the best coefficients.
The first concatenation method have been exploited in several works [7, 8, 40], all reporting an increase in accuracy performance with respect to the single baseline models. The second method is less explored and advocates for a linear merging of the encodings. Concretely, the combination of the base and side models in our architecture is performed as:
\[R(x)=\alpha_{0}B(x)+\sum_{i=1}^{N}\alpha_{i}S_{i}(x), \tag{1}\]
where \(R\) is the new representation for the given task, \(B\) and \(S_{i}\) are respectively the base and sides model encodings, and \(\alpha_{i}\) are coefficients of the equation, subject to the constraint \(\sum_{i=0}^{N}\alpha_{i}\!=\!1\). In our case, where \(N=2\), the overall combination assumes the form \(\alpha_{0}B(x)+\alpha_{1}S_{1}(x)+\alpha_{2}S_{2}(x)\). It is worth noting that some specific values for the alpha coefficients lead to well-known training procedures, that in our case corresponds to just the image feature extraction (\(\alpha_{0}\!=\!1\), \(\alpha_{1}\!=\!0\), and \(\alpha_{2}\!=\!0\)), to the fine-tuning of the image architecture (\(\alpha_{0}\!=\!0\), \(\alpha_{1}\!=\!1\), and \(\alpha_{2}\!=\!0\)), and finally to the training from scratch of the text network (\(\alpha_{0}\!=\!0\), \(\alpha_{1}\!=\!0\), and \(\alpha_{2}\!=\!1\)). Setting properly these coefficients allows to easily switch between the different modalities with a gain in flexibility and the possibility to explore their combination.
In order to perform the weighted sum of the network outputs each resulting vector must have the same dimension. In case of different input sources, it may be necessary to use an adaptation layer to make the output shapes compatible. In our case, we use such layer to adapt the text output with the image one. Finally, the result of the linear combination is passed to a classification layer. In addition to the architecture just described, we have performed experiments by adding a fully connected layer after the fusion and before the classification, to analyze the behavior of the model as the parameters increase. An overview of the architecture is shown in Figure 1.
## 4 Experimental Results
We performed an analysis of the multimodal side-tuning architecture to assess the quality of our methodology and to better understand how it contributed to the classification accuracy. In the following, we first introduce the datasets used in our experiments, then we detail the training procedure, and finally we provide a comparison of the performance with respect to the state-of-the-art. We also give a brief analysis of the inference process running time.
### Datasets
The Tobacco3482 dataset [29] comprises 3482 greyscale scans of documents divided unevenly in 10 categories, e.g. resume, email, letter, memo. The documents distribution among the classes spans from 120 for the resume category to 620 for memo. It is a small subset of the Truth Tobacco Industry Documents and collects many hybrid content documents. The textual version of this dataset, namely _QS-OCR-Small_[8] reflects the same structure of the original image dataset. In our setting, we random sampled three subsets to be used for train, validation and test, fixing their cardinality to 800, 200, and 2482 respectively, as in [8, 19].
The Ryerson Vision Lab Complex Document Information Processing (RVL-CDIP) dataset [19] contains 399828 images divided into 16 categories from the Truth Tobacco Industry Documents, e.g. scientific publication, scientific report, handwritten. Its textual counterpart, _QS-OCR-Large_, was developed in the same work that released _QS-OCR-Small_[8]. Differently from Tobacco3482, RVL-CDIP comes with pre-built subsets for train, validation and test that have respectively dimension of 319837, 39995, and 39996.
### Training details
All the models are implemented using the _PyTorch_ framework, version 1.4.0, and trained using an _NVIDIA Titan XP_ GPU. The hyper-parameters are selected from the experiments performed on the Tobacco3482 dataset.
We set the maximum number of epochs to 100 and the batch size to 16 documents for Tobacco3482 experiments while we chose to train, validate, and test with batches of 40 for
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **\#Params** & **OA** \\ \hline _Text_ & \(\approx\) 1.8M & 67.8\% \\ _Image_ (fine-tuning) & \(\approx\) 3.5M & 84.0\% \\ _Image_ (side-tuning) & \(\approx\) 7M & 88.0\% \\ \hline _Multimodal_ (side-tuning) & \(\approx\) 12M & **90.5**\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Baseline models and multimodal overall accuracy for Tobacco3482 using MobileNetV2 (visual features) and 1D CNN (textual features) architectures. Best result in bold.
Figure 2: Plot of the accuracy on the Tobacco3482 dataset for twelve combinations of the multimodal coefficients. Base and first side component use MobileNetV2 (a) and ResNet50 (b) architectures. Each trend corresponds to a different configuration of the side-tuned network.
RVL-CDIP for 10 epochs. We used the cross-entropy loss function in all the experiments.
We performed all tests using the _Stochastic Gradient Descent_ (SGD) optimizer with a momentum of 0.9 and an initial learning rate of 0.1, subject to a scheduled update at each iteration that follows the scheme proposed in [23]:
\[\mathsf{LearningRate}_{i}=0.1*\sqrt{\frac{\mathsf{Epoch}_{i}}{\mathsf{MaxEpoch }}} \tag{2}\]
### Ablation Study
Table 1 reports the _Overall Accuracy_ (OA) obtained on the Tobacco3482 dataset. As shown in the table, the multimodal network outperforms the other combinations, proving that we are able to combine efficiently the features space and benefit from side-tuning. The side-tuning architecture used in Table 1 uses the MobileNetV2 as base model and side component for visual features. The model is pre-trained on ImageNet, coefficients are \(\alpha_{0}=0.5\) for image-only side-tuning, while \(\alpha_{0}=0.2\), \(\alpha_{1}=0.3\), and \(\alpha_{2}=0.5\) for the multimodal version.
The second analysis explores the behavior of multimodal side-tuning with respect to different coefficients for the linear combination. Indeed, each \(\alpha_{i}\) plays a central role in the balancing of the learning process for side-tuning. To assess their impact in our setting, we train several models following twelve different alpha configurations. We select values ranging from 0.1 to 0.5 to always be able to exploit each component of the framework without excessively lowering the weights of the other networks. We also consider two architectures for the image input (MobileNetV2 and ResNet50) and for both we tested two different network configurations. The first inputs directly the combination of the base model and side models to the classification layer, while the second considers an additional FC layer before the classification one. We test two different dimensions for the latter. In Figure 2, we analyze the behaviors of this set of experiments.
The coefficients are ordered so that the linear combination in the merging layer gives incrementally more importance to the model component exploiting textual features (\(\alpha_{2}\)). The accuracy increases with the progressive shift from the model that favors visual features, with \(\alpha_{0}\) or \(\alpha_{1}\) greater than \(\alpha_{2}\), to a more text-centered classifier. Small changes in the coefficients affect the training for all the architectures. Nevertheless, those models with the additional fully connected layers, both in MobileNetV2 and in ResNet50 show the best trends. In particular, the best accuracy is reached by model with a dense layer of dimension 1024 for MobileNetV2 with the configuration \(\alpha_{0}=0.2\), \(\alpha_{1}=0.3\), and \(\alpha_{2}=0.5\) and the one with a dense layer of dimension 512 for ResNet50 with the configuration \(\alpha_{0}=0.3\), \(\alpha_{1}=0.3\), and \(\alpha_{2}=0.4\).
In Table 2 we present the best results for MobileNetV2 and ResNet50 architectures on the Tobacco3482 dataset. First, we tested the two architecture in the side-tuning framework using one side component and the same alpha configuration (\(\alpha_{0}=0.5\) and \(\alpha_{1}=0.5\)). Next, we take advantage of the second side component -- the text classifier -- to perform the multimodal side tuning. Although very similar, both the experiments present better results when the MobileNetV2 architecture is selected.
In Table 3 we report the experiments for RVL-CDIP dataset, presenting a different trend with respect to Tobacco3482 experiments, in fact, ResNet50 has the best accuracy. This is due to the fact that an architecture with a larger number of parameters (ResNet50) can benefit from a bigger dataset (RVL-CDIP) while suffering from small inter-class variability.
### Comparison with the state-of-the-art
We proved the effectiveness of multimodal side-tuning compared to the fine-tuning on images and training from scratch on textual features.
In Table 3, we compare five different state-of-the-art solutions with the proposed multimodal approach in terms of overall accuracy on the RVL-CDIP dataset. All the experiments have been carried out considering only the best configurations of alpha for both MobileNetV2
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
**Model** & **OA** & **Adve** & **Email** & **Form** & **Letter** & **Memo** & **News** & **Note** & **Report** & **Resur** \\ \hline _Audebert_ & 87.8\% & 93.0\% & 98.0\% & 88.0\% & 86.0\% & 90.0\% & 90.0\% & 85.0\% & 71.0\% & 86.0\% \\ \hline Text & 67.8\% & 93.3\% & 29.5\% & 77.0\% & 58.8\% & 49.7\% & 63.6\% & 68.7\% & 52.0\% & 60.7\% \\ _Multimodal (ResNet50)_ & 90.3\% & **96.1\%** & 98.3\% & **90.8\%** & 91.7\% & **93.5\%** & **95.5\%** & 87.6\% & **76.7\%** & 89.4\% \\ _Multimodal (MobileNetV2)_ & **90.5**\% & 94.8\% & **99.1**\% & 88.7\% & **93.2**\% & 93.0\% & **95.5**\% & **89.7**\% & 76.2\% & **95.3\%** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Overall and per-class accuracy on the Tobacco3482 dataset compared with the results from [8]. The selected alpha configuration for the multimodal side-tuning is \(a_{0}\!=\!0.3\), \(a_{1}\!=\!0.2\), and \(a_{2}\!=\!0.5\) for MobileNetV2 and \(a_{0}\!=\!0.3\), \(a_{1}\!=\!0.3\), and \(a_{2}\!=\!0.4\) for ResNet50.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model (base architecture)** & **\#Params** & **OA** \\ \hline _Image (ResNet50)_ & \(\approx\!51\)M & 87.2\% \\ _Image (MobileNetV2)_ & \(\approx\!7\)M & 88.0\% \\ \hline _Multimodal (ResNet50)_ & \(\approx\!57\)M & 90.3\% \\ _Multimodal (MobileNetV2)_ & \(\approx\!12\)M & **90.5**\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overall accuracy on the Tobacco3482 image dataset using two different off-the-shelf architectures for both base and side model in the side tuning framework. Best result in bold.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **\#Params** & **Modality** & **OA** \\ \hline _CNNs_[19] & \(\approx\!62\)M & I & 89.8\% \\ _Audebert_[8] & \(\approx\!8\)M & I+T & 90.6\% \\ _AlexNet + SPP_[22] & \(\approx\!62\)M & I & 90.94\% \\ _VGG16_[23] & \(\approx\!138\)M & I & 90.97\% \\ _VGG16 + ULMFit_[6] & \(\approx\!162\)M & I+T & **93.6**\% \\ \hline _Text_ & \(\approx\!1.8\)M & T & 80.5\% \\ _Multimodal (MobileNetV2)_ & \(\approx\!12\)M & I+T & 92.2\% \\ _Multimodal (ResNet50)_ & \(\approx\!57\)M & I+T & 92.7\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Overall accuracy on the RVL-CDIP dataset compared with the results from previous works. Modalities of the data source are image (I), text (T), or both (I+T). The selected alpha configuration for the multimodal side-tuning is \(a_{0}\!=\!0.3\), \(a_{1}\!=\!0.2\), and \(a_{2}\!=\!0.5\) for MobileNetV2 and \(a_{0}\!=\!0.3\), \(a_{1}\!=\!0.3\), and \(a_{2}\!=\!0.4\) for ResNet50. Best result in bold.
and ResNet50.
The works considered are the CNN implementation of [19], the multimodal solution from [8], the VGG16 network in [23], the AlexNet implementation of [22], and the VGG16 \(+\) UMLFit of [6]. As it is possible to observe, performance on the RVL-CDIP dataset highlights that the proposed solution slightly improves the classification performance with respect to the methods proposed in [8, 19, 22, 23] but obtains slighter lower results compare to [6]. This is related to the difference in the networks complexity between our solution and the method from [6], which has \(+150\)M parameters.
Finally, among these solutions, we select the one from [8], the most similar approach to what we propose, and compare the per-class accuracy on the Tobacco3482 dataset. In fact, the authors in [8] strived to use lightweight architecture as in our case but concatenated the output of the networks used for the images and the text before the classification. This also give us the chance to provide insights on the performance for the classes of interest in the Tobacco3482 dataset. Table 4 shows that the gain of our model is consistent over all the classes except for the scientific class. When compared with the solution proposed in [8], the side-tuning model improves the overall accuracy of 2.7%.
### Processing time
We now provide a discussion about execution time of our algorithm to analyze the performance of the document classification system. Although some document analysis could be conducted offline, critical applications require low latency in order to be performed as close to real time as possible. We then averaged the timings of the multimodal MobileNetV2 version over five classification runs. The full inference process of our model on a single document is carried on a Intel Xeon Silver 4208 CPU takes \(\approx\) 1595ms. Of those, \(\approx\) 910ms (57%) are spent for Tesseract OCR image processing and text extraction2, \(\approx\) 166ms (10.4%) for evaluation of the base model, \(\approx\) 202ms (12.7%) for the side model exploiting image features, and \(\approx\) 16ms (1%) are spent in the inference of the side component fed with textual features. The timings for the image (\(\approx\) 119ms) and text load (\(\approx\) 182ms) from disk occupy the remaining time (18.9%). On NVIDIA Titan Xp GPU, the side-tuning model runs in \(\approx\) 1224ms -- whit Tesseract OCR occupying 74.3% of the time. The base model is evaluated in \(\approx\) 8ms, \(\approx\) 15ms for the image side model, and \(\approx\) 5ms for the text model.
Footnote 2: Average timing for Tesseract has been computed using four threads as in [8].
Compared to models with more complex architectures, the proposed system is able to be used in real-time applications with latencies around the second. If the selected components were to be replaced with heavier models, this would lead to an inevitable performance impoverishment.
## 5 Conclusion
In this work we presented a multimodal approach for document classification that takes into consideration both visual ant textual features classify a document. We leverage the work done in the last state-of-art solutions for incremental learning and take advantage of the side-tuning framework to develop an hybrid architecture that performs on par with existing more
complex solutions and outperforms similar lightweight approaches. To further improve the performance, we aim at automatically tuning the coefficients used in the linear combination of both the base and sides models. We also want to investigate the possibility of exploiting an ensemble of text embeddings and combine them using the side-tuning framework.
## Acknowledgments
The Titan Xp GPU used for this research was donated by the NVIDIA Corporation.
|
2301.11360 | The Power of Linear Combinations: Learning with Random Convolutions | Following the traditional paradigm of convolutional neural networks (CNNs),
modern CNNs manage to keep pace with more recent, for example
transformer-based, models by not only increasing model depth and width but also
the kernel size. This results in large amounts of learnable model parameters
that need to be handled during training. While following the convolutional
paradigm with the according spatial inductive bias, we question the
significance of \emph{learned} convolution filters. In fact, our findings
demonstrate that many contemporary CNN architectures can achieve high test
accuracies without ever updating randomly initialized (spatial) convolution
filters. Instead, simple linear combinations (implemented through efficient
$1\times 1$ convolutions) suffice to effectively recombine even random filters
into expressive network operators. Furthermore, these combinations of random
filters can implicitly regularize the resulting operations, mitigating
overfitting and enhancing overall performance and robustness. Conversely,
retaining the ability to learn filter updates can impair network performance.
Lastly, although we only observe relatively small gains from learning $3\times
3$ convolutions, the learning gains increase proportionally with kernel size,
owing to the non-idealities of the independent and identically distributed
(\textit{i.i.d.}) nature of default initialization techniques. | Paul Gavrikov, Janis Keuper | 2023-01-26T19:17:10Z | http://arxiv.org/abs/2301.11360v2 | # Rethinking 1\(\times\)1 Convolutions: Can we train CNNs with Frozen Random Filters?
###### Abstract
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (\(1\times 1\)) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
**Code:**[https://after-accept.com](https://after-accept.com).
Machine Learning, ICML, ICML
## 1 Introduction
Convolutional Neural Networks (CNN) are building the backbone of state-of-the-art neural architectures in a wide range of learning applications on \(n\)-dimensional array data, such as standard computer vision problems like 2D image classification, semantic segmentation, or scene understanding. In order to solve these tasks, modern CNN architectures are learning the entries (=weights) of millions of convolutional filter kernels. This process is not only very compute and data intensive, but apparently also mostly redundant as CNNs are learning kernels that are bound to the same distribution, even when training different architectures on different datasets for different tasks (Gavrikov & Keuper, 2022a). Yet if - in oversimplified terms - all CNNs are learning the "same" filters, one could raise the fundamental question if we actually need to learn them at all.
In order to investigate if and how the training of a CNN with non-learnable filters is possible, we retreat to a setup that eliminates any possible bias in the choice of the filters: we simply set random filters. This is not only practically feasible since random initializations of kernel weights are part of the standard training procedure, but also theoretically justified by a long line of prior work investigating the utilization of random feature extraction (e.g. see (Rahimi & Recht, 2007) for a prominent example) prior to the deep learning era.
Another cornerstone of our analysis is the so-called point-wise (\(1\times 1\)) convolution, which is increasingly used in modern CNNs. Despite its name and similarities in the implementation details, we will argue that this learnable operator differs significantly from spatial \(k\times k\) convolutions and learns linear combinations of non-learnable (random) spatial filters.
By applying only minor changes to common CNN architectures, we show that networks are able to learn how to combine non-learnable, randomly initialized spatial filters
Figure 1: Validation accuracy of LCResNet-20-16x\(\{E\}\) on CIFAR-10 with \(\bullet\) frozen random or \(\bullet\) learnable _spatial_ convolutions under increasing **LC expansion**\(\{E\}\) in the LC-Blocks. The size of the marker indicates the variance in the validation accuracy over several runs.
for the extraction of meaningful features.
We summarize our key contributions as follows:
* We show empirically, that a certain type of randomly initialized CNNs (with specific \(1\times 1\) configurations) can be trained to high validation accuracies on 2D image classification tasks without the need to learn the weights of spatial convolution filters.
* Based on this finding, we introduce a novel convolution block, computing learnable linear combinations (LC) of (frozen random) filters. Using the resulting _LCResNets_, we are investigating the properties of networks that are limited to using random spatial filters.
* Our empirical results not only show that LCResNets with frozen random spatial convolutions and high LC rates are able to outperform their conventionally trained counterparts, but we also show favorable properties of linear combined filters in terms of robustness, sparsity, and model size.
* Further, we introduce novel weight sharing methods, which allow the re-usage of the same random weights in all layers, massively reducing the number of weights in CNN architectures.
## 2 Related Work
Random Model Parameters.Modern neural network weights are commonly initialized with values drawn _i.i.d._ from uniform or normal distributions. To improve the gradient flow (Hochreiter, 1991; Kolen and Kremer, 2001), the standard deviation is adjusted according to the channel fan, based on proposed heuristics by (He et al., 2015; Glorot and Bengio, 2010).
Rudi and Rosasco (2017) provide an analysis of generalization properties of random features and conclude that many problems exist, where exploiting random features can reach significant accuracy, at a significant reduction in computation cost.
Based on the _Lottery Ticket Hypothesis_ (LTH) (Frankle and Carbin, 2019) observations that deep neural network can be trained with extremely small parameter subsets to the same accuracy both Zhou et al. (2019); Ramanujan et al. (2019) propose methods that prune weights of randomly initialized CNNs that achieve good (albeit well beyond trained) performance on ImageNet. Both approaches rely on unstructured weight pruning.
Frankle et al. (2021) study freezing all network parameters during training except the \(\beta\) and \(\gamma\) parameters of Batch-Normalization layers (Ioffe and Szegedy, 2015) and reveal that models are still able to learn highly non-trivial performances, only via affine transformations of features. This is somewhat orthogonal to our research. However, we study linear combinations in weight space instead and obtain significantly higher performances even with off-the-shelf architectures.
Zhang et al. (2022) show that entire weights of specific convolution layers can be reset to _i.i.d._ initializations _after training_ without significantly hurting the accuracy. The number of such layers decreases with increased dataset complexity.
Ulyanov et al. (2018) demonstrated that randomly weighted CNNs generate good priors for standard inverse problems such as super-resolution, inpainting, or denoising.
Convolution Filters from Linear Combinations.A different line of work explores learning filters as linear combinations as different (frozen) bases such as DCT (Ulicny et al., 2022), Wavelets (Liu et al., 2019), Fourier-Bessel (Qiu et al., 2018), or eigenimages of pretrained weights (Tayyab and Mahalanobis, 2019). These bases can be seen as a set of fixed filters and therefore similar to our approach. However, most bases-approaches enforce the same amount of filters in every layer, whereas, naturally, the amount of filters varies per layer (as defined by the architecture). Furthermore, the number of bases is finite, which limits the amount of possible linear combinations. Contrary, there are infinitely many random filters. This "overcompleteness" may in fact be necessary as suggested by the LTH (Frankle and Carbin, 2019).
Analysis of Convolution Filters.A long thread of research (Olah et al., 2020;b,c; Cammarata et al., 2020, 2021; Schubert et al., 2021; Voss et al., 2021;b; Petrov et al., 2021) extensively analyzed the features, connections, and their organization of a trained InceptionV1 (Szegedy et al., 2014) model. Among others, the authors claim that different CNNs will form similar features and circuits even when trained for different tasks. This is backed by a large-scale analysis of learned \(3\times 3\) convolution kernels (Gavrikov and Feuper, 2022), which additionally reveals that CNNs generally seem to learn highly similar convolution kernel distributions, independent of training data or task. Further, the majority of kernels seem to be randomly distributed or defunct, and only a small rate seems to be performing useful transformations.
Pointwise Convolutions.Lin et al. (2014) first introduced the concept of "network in network" in which pointwise (\(1\times 1\)) convolutions are used to "enhance the model discriminability for local receptive fields". Although implemented similarly to spatial convolutions, pointwise convolutions do not aggregate the local neighborhood but instead compute linear combinations of the inputs and can be seen as a kind of fully-connected layer rather than a traditional convolution. Modern CNNs often use pointwise convolutions (e. g. (He et al., 2015; Sandler et al., 2018; Liu et al., 2022)) to reduce
the number of channels before computationally expensive operations such as spatial convolutions or to approximate the computation of regular convolutions using depthwise filters (_depthwise separable convolutions_(Chollet, 2017)). Interestingly, spatial convolutions can also learn to mimic this behavior: Gavrikov & Keuper (2022b) reported that CNNs trained with \(\ell_{\infty}\)-adversarial training (Madry et al., 2018) primarily learn the center weight in initial convolution layers with \(3\times 3\) kernels, while other weights are (close to) zero. Due to a lack of local neighborhood aggregation by these kernels, they effectively act as pointwise convolutions.
## 3 Preliminaries
Convolutions.We define a 2D convolution layer by a function \(\mathcal{F}_{conv2d}(X;W)\), \(\mathcal{F}\) transforming an input tensor \(X\) with \(c_{\mathrm{in}}\) input-channels into a tensor with \(c_{\mathrm{out}}\) output-channels using convolution filters with a size of \(k_{0}\times k_{1}\). Without loss of generality, we assume square kernels with \(k=k_{0}=k_{1}\) in this paper. Further, we denote the learned weights by \(W\in\mathbb{R}^{c_{\mathrm{out}}\times c_{\mathrm{in}}\times k\times k}\). The outputs \(Y_{i}\) are then defined as:
\[Y_{i}=W_{i}*X=\sum_{j=1}^{c_{\mathrm{in}}}W_{i,j}*X_{j},\ \mathrm{for}\ i\in\{1, \ldots,c_{\mathrm{out}}\}. \tag{1}\]
Note how the result of the convolution is reduced to a linear combination of inputs with a now **scalar**\(W_{i,j}\) for the special case of \(k=1\) (pointwise convolution):
\[Y_{i}=\sum_{j=1}^{c_{\mathrm{in}}}W_{i,j}*X_{j}=\sum_{j=1}^{c_{\mathrm{in}}}W_ {i,j}\cdot X_{j} \tag{2}\]
The _PyTorch_ default initialization of model weights is _Kaiming Uniform_(He et al., 2015a). Here, every kernel weight \(w\in W\) is drawn _i.i.d._ from a uniform distribution bounded by a heuristic derived from the _input fan_ (inputs \(c_{\mathrm{in}}\times\) kernel area \(k^{2}\)). At default values, this is equivalent to:
\[w\sim\mathcal{U}_{[-a,a]}\ \mathrm{with}\ a=\frac{1}{\sqrt{c_{\mathrm{in}}k^{2}}} \tag{3}\]
**Linear combinations.**
**Definition 3.1**.: A pointwise convolution applied over the outputs of spatial convolutions computes linear combinations of previous outputs, which is equivalent to a convolution with a linear combination of previous filters with the same coefficients.
Proof.: Assume that the \(l\)-th layer is a regular convolution with \(k>1\), and inputs into a \(k=1\) pointwise convolution layer (\(l+1\)). \(X\) is the input. Then setting Equation (1) as input for Equation (2) results in:
\[Y_{i}^{(l+1)} =\sum_{j=1}^{c_{\mathrm{in}}^{(l+1)}}W_{i,j}^{(l+1)}\cdot X_{j}^{ (l+1)}\] \[=\sum_{j=1}^{c_{\mathrm{in}}^{(l+1)}}W_{i,j}^{(l+1)}\cdot\left(W_ {i}^{(l)}*X^{(l)}\right) \tag{4}\] \[=X^{(l)}*\sum_{j=1}^{c_{\mathrm{in}}^{(l+1)}}\left(W_{i,j}^{(l+1) }\cdot W_{i}^{(l)}\right)\]
As such, any (learned) filter can be approximated by a (learned) linear combination of sufficiently many random filters (Figure 2).
## 4 Experiments
In the following, we conduct experiments on models with spatial convolution weights frozen to their initial random values. Therefore, spatial convolution weights are never updated and do not require gradients. For simplicity, we will refer to such models as _frozen random_ through the remainder of the paper.
Due to a large number of experiments needed for our analysis, we mostly experiment on CIFAR-10 (Krizhevsky et al., 2009) and later show that our observations in-principle scale to ImageNet (Deng et al., 2009).
Training setup.We train all CIFAR models with the same hyperparameters (see Appendix C) for all experiments as they produce reasonable (although not SOTA) results on many architectures optimized for CIFAR and ensure a fair comparison within our experiments. Hence, it should be noted that individual hyperparameter tuning would increase the individual model performance in most cases. All results are reported over at least 4 runs unless stated otherwise.
Figure 2: Linear combinations of random filters are able to reconstruct learned spatial filters.
### Baseline Experiments
We start by training common off-the-shelf architectures1 such as ResNet-14/18/34/50/101 (He et al., 2015), a special ResNet-variant modified for CIFAR called ResNet-20 (He et al., 2015), Wide-ResNet-50x2 (Zagoruyko and Komodakis, 2016), and MobileNet v2 (Sandler et al., 2018) on CIFAR-10. Although all models achieve an approximately similar validation accuracy when trained normally, we observe two kinds of frozen random behavior (Figure 3): ResNet-50/101, Wide-ResNet-50x2, and MobileNet v2 approximately converge to similar accuracy (1.6-1.9% difference), while the other models show heavy drops of at least 16% in accuracy. An explanation for this effect can be found in common architectural elements of the first set of models: contrary to the Basic-Blocks in other models, they all use Bottleneck-Blocks or variants thereof (Sandler et al., 2018) which complement the traditional spatial convolutions by pointwise (\(1\times 1\)) convolutions, outside the common usage in downsampling operations in residual skip-connections (He et al., 2015). As shown in Equation (4), the linear combination computed by pointwise convolutions also applies to weights of spatial convolutions, and, therefore, these models are able to approximate learned convolutions from linear combinations of random filters.
Footnote 1: Some are slightly modified to operate on low-resolution images. We will release these architectures with the rest of the code.
### Increasing Linear Combinations
To study the effect of linear combination capabilities on the ability of CNNs to learn with frozen random filters in more detail, we introduce LCResNets specifically designed to allow tweaking of the linear combinations of convolution filters without affecting other layers. We build these based on the (Basic-Block) CIFAR-ResNet-variant introduced by (He et al., 2015). Compared to regular ResNets, the CIFAR-specialized architecture has a drastically lower number of parameters and is, therefore, more suitable for large-scale studies such as this one. In the architecture, we replace every spatial convolution with an LC-Block: a spatial convolution with \(c_{\text{in}}\) input channels and \(c_{\text{out}}\) output channels then becomes a spatial convolution with \(c_{\text{out}}\times E\) filters fed into a pointwise convolution with \(c_{\text{out}}\) outputs (Figure 5).
We denote these models by LCResNet-\(\{D\}\)-\(\{W\}\)x\(\{E\}\), where \(\{D\}\) is the network depth i. e. the number of spatial convolution and fully-connected layers, \(\{W\}\) the network width (default 16) i. e. the initial number of channels communicated between Basic-Blocks, and \(\{E\}\) an LC expansion factor (default 1) which increases the spatial filters in LC-Blocks and, therefore, computed linear combinations without increasing the block's number of outputs.
Figure 4: **Robust** (FGSM, \(\ell_{\infty},\epsilon=1/255\)) validation accuracy of LCResNet-20-16x\(\{E\}\) on CIFAR-10 with frozen random or learnable _spatial_ convolutions under increasing **LC expansion**\(\{E\}\).
Figure 5: **Basic-Block with LC-Blocks**. We replace all convolutions with LC-Blocks which consist of a spatial and pointwise convolution. The expansion factor \(E\) allows increasing the number of spatial filters/linear combinations without altering the LC-Blocks number of outputs.
Figure 3: **Validation accuracy** of different models trained on CIFAR-10 with random frozen random vs. learnable spatial convolutions. Models in the right half use blocks that integrate \(1\times 1\) convolutions after spatial convolutions and are, therefore, able to approximate learned convolution filters by linear combinations of random filters.
Closing the performance gap.Previous experiments revealed a slightly worse accuracy of frozen random models. As per our hypothesis, an increase in the number of spatial filters/linear combinations should close this gap which we test by exponentially increasing the expansion factor of LCResNet-20-16 (Figure 1). While we see a marginal increase in accuracy in regular training, we see a steady increase in the accuracy of frozen models. At an expansion factor of 8 the two training variants approximately break even, and, surprisingly, beyond that, frozen random models even outperform their counterparts. A possible explanation may be found in overfitting: due to their limited ability to learn arbitrary filter patterns, frozen models may generalize better. To strengthen this hypothesis, we measure the robust accuracy of the models via light \(\ell_{\infty}\)-FGSM-attacks (Goodfellow et al., 2015) with \(\epsilon=1/255\) ) and see similar trends in robustness (Figure 4).
Due to the accuracy gap, it seems viable to conclude that frozen random models learn different representations and hence different filters. In the following, we aim to quantify this based on _filter variance entropy_(Gavrikov and Keuper, 2022). The singular value decomposition-based metric quantifies the diversity of filter kernels, by providing a measurement in an interval between entirely random patterns (as seen in just initialized weights) and a singular pattern repeated throughout all kernels.
We apply this metric to understand the learned deviation from the random initialization. For this experiment, instead of analyzing the random spatial filters directly, we compute the resulting linear combination of spatial filters. Further, we limit ourselves to the filters in the initial convolution layer, as it is generally well-understood and studied (Figure 6). In normally trained models we measure an expected balanced diversity of patterns that does not significantly fluctuate with LC expansion. Obversely, frozen random models at default expansion produce almost random filters due to an insufficient amount of linear combinations in the initial layer (only 16). With an increasing expansion, frozen random models can diversify their patterns. Yet, even at the highest studied expansion rate, they remain more diverse, which again may limit the risk of overfitting.
Based on the results, we conclude that 1) regularly trained networks with extreme expansion are prone to overfitting; 2) the performance of frozen random models increases with the number of linear combinations but they appear to overfit less; 3) frozen random models can outperform normal training regimes. Often in addition to the massive savings in trainable parameters (Table 1).
Alternatively to expansion, reducing the performance gap can also be achieved by increasing the network width (Figure 7).
Wider networks generally reach higher accuracies, but also decrease efficiency due to increasing channels. Lastly, due to the compositional nature of deep neural networks, the gap also diminishes with increasing depth (Figure 8), yet at a slow and impractical rate compared to expansion or width (break-even at approx. \(D=260\)).
### Reducing Network Parameters
At initialization, model weights are _i.i.d._ and do not show any inherent patterns. As such, it appears intriguing to understand if a specific set of weights can be shared throughout all spatial convolution layers to decrease the total number of network parameters.
Global weight sharing.We first start with a naive approach, where we draw a random weight \(W_{s}\sim\mathcal{U}_{[-1,1]}\) to be shared. The shape of \(W_{s}\) is the maximum length of all spatial convolution weights: \(\max_{l\sim\text{spatial conv}(\theta)}c_{\text{out}_{\text{in}}R}^{(l)}k^{(l)}k ^{(l)}\). Convolution weights are then (reshaped) slices of \(W_{s}\), ac
Figure 6: **Filter variance entropy** normalized by the randomness threshold as a metric of diversity in filter patterns of the first layer. Measured on LCResNet-20-16x\(\{E\}\) trained on CIFAR-10 with frozen random or learnable spatial convolutions under increasing **LC expansion**\(\{E\}\). Values above 1 indicate a random distribution of kernel patterns, while values of 0 indicate a collapse to one specific pattern.
Figure 7: **Validation accuracy** of LCResNet-20-\(\{W\}\)x1 on CIFAR-10 with frozen random or learnable spatial convolutions under increasing **network width**\(\{W\}\).
cording to the required length in the respective layer. The slicing can be implemented as views of the \(W_{s}\) tensor and, therefore, should not consume additional memory. To counteract the vanishing/exploding gradient problem, we scale individual slices by a fixed coefficient \(s=1/\sqrt{c_{\text{in}}k^{2}}\) per layer derived from (He et al., 2015a) (see Appendix D for details).
Training LCResNets with weight sharing reveals an interesting effect: the total number of parameters drastically decreases (Table 1), although models trained with weight sharing perform approximately on par with ones without sharing (Figure 9). For some expansion factors training with weight sharing even outperforms frozen random training. At the largest evaluated expansion, weight sharing performs only \(0.31\%\) worse.
Recycled weight sharing.Since sharing weights between layers successfully decreases the total number of parameters without significant accuracy impact, we aim to understand whether more weights can be reused to further reduce this number. We test this by gradually reducing the length of \(W_{s}\). If the length of a requested slice exceeds \(W_{s}\), we stack copies of it until it becomes of sufficient length, and then take a slice from the resulting tensor. We test values that are factors of \(k^{2}\), to reshare weights corresponding to entire filter kernels. We empirically test this procedure on an LCResNet-20-64x1 (Figure 10) and observe that a \(4\times\) reduction only results in an accuracy drop of \(0.17\%\) and \(10\times\) in only approx. 1%, indicating that indeed weights can be recycled up to a certain threshold. Beyond that, accuracy significantly decreases.
Weight sparsity.Up to this point, we showed ways to reduce the number of parameters by design choices during training. Moreover, it is possible to further reduce the amount by pruning after training. Given the random nature of spatial convolution weights in frozen random training, we do not expect pruning to be successful and directly focus on the pointwise convolution weights.
To get an estimate of the expected reduction we apply unstructured global magnitude pruning (LeCun et al., 1989) on the \(1\times 1\) convolution weights without further finetuning. Although highly expanded networks already contain a large
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **Total** & **Learnable** & **Val.** \\
**Method** & **Params** [M] & **Params** [M] & **Acc. [\%]** \\ \hline Baseline & 38.4 & 38.4 & 91.39 \(\pm\) 0.29 \\ \hline + Frozen & 38.4 & 4.2 & 91.89 \(\pm\) 0.10 \\ + Weight Sharing & 8.8 & 4.2 & 91.58 \(\pm\) 0.15 \\ + Recycled WS (1/16) & 4.6 & 4.2 & 91.92 \(\pm\) 0.21 \\ + \(1\times 1\) Reg. (\(\lambda=5e-5\)) & 4.6 & 4.2 & 91.57 \(\pm\) 0.28 \\ + \(1\times 1\) Pruning (\(\rho=0.7\)) & 1.7 & 4.2 & 91.34 \(\pm\) 0.24 \\ \hline
**Reduction** & 22.6\(\times\) & 9.1\(\times\) & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Successive reduction of total and learnable parameters in an LCResNet-20-16x128 by applying the techniques proposed.
Figure 8: **Validation accuracy** of LCResNet-\(\{D\}\)-16x1 on CIFAR-10 with frozen random or learnable spatial convolutions under increasing **network depth**\(\{D\}\).
Figure 10: **Validation accuracy** of frozen random LCResNet-20-64x1 on CIFAR-10 at different levels of **recycled weight sharing**. Vertical lines indicate the length of all requested slices.
Figure 9: **Validation accuracy** of frozen random LCResNet-20-16x\(\{E\}\) with **weight sharing** vs. non-shared training under increasing expansion \(\{E\}\).
amount of near-zero \(1\times 1\)-weights (see Appendix F for histograms), the ratio can be increased by a regularization term. We propose an \(\ell_{1}\) regularization over the pointwise weights to the training objective \(\mathcal{L}\) that optimizes the set of network parameters \(\theta\):
\[\min_{\theta}\mathcal{L}+\lambda\sum_{W\sim pointwise\ conv(\theta)}\|W\|_{1} \tag{5}\]
Combining approaches.Exemplarily, we show the effect of all techniques combined on an LCResNet-20-16x128 in Table 1. We achieve the final reduction of 22.6\(\times\) on the total parameters and 9.1\(\times\) on the learnable parameters by consequently applying our proposed techniques. For recycled weight sharing, we shrink the shared weight to \(1/16\) of its original size and use \(\lambda=5e-5\) for regularization. On top of that, we prune \(70\%\) of pointwise convolution weights with one of the simplest pruning techniques having almost no computational overhead. Trading-off increased training time, we expect more sophisticated pruning methods in combination with additional fine-tuning of the model to further increase these ratios. For ablation of \(\lambda\) and the impact of pruning on the accuracy see Appendix H; for an analysis of timing and memory consumption see Appendix I.
### Increasing Kernel Size
Our networks use the default \(3\times 3\) kernel size, which was dominant for the past years. However, recently proposed CNNs often increase the kernel size e. g. (Tan and Le, 2020; Liu et al., 2022; Trockman and Kolter, 2022), sometimes to as large as \(31\times 31\)(Ding et al., 2022).
To verify that linear combinations scale to larger frozen random kernels, we increase the convolution sizes in an LCResNet-20-16 to \(k\in\{5,7,9\}\) (with a respective increase of input padding) and measure the gap between training with learned and frozen random spatial convolutions (Figure 11). Our results show that the gap between frozen random and regular models increases with kernel size, but steadily diminishes with increasing expansion and eventually breaks even for all our tested expansions, except for \(k=9\) which we expect to also break even at larger expansions.
We visualize the linear combinations of \(9\times 9\) filters in the first convolution layers under increasing expansion in Figure 12. It becomes clearly visible how the reconstructed filters increase to resemble the learned filters as the expansion increases. For more comparisons, see Appendix E.
A possible explanation for the inferior performance of larger frozen random kernels can be found in the _i.i.d._ initialization of weights: kernel weights are initialized without consideration of the weight location in the filter. However, we observe that the variance in learned kernel weights is highly influenced by the weight location (Figure 13). The variance is approximately uniformly distributed for \(k\in\{3,5\}\), which allows linear combinations to reconstruct learned weights well. Yet, the variance per location increases to deviate from a uniform distribution as the kernel size increases: at larger \(k\) outer weights show significantly lower variance, while the highest variance is located in the center of the filter. Linear combinations do not change the variance of individual weights and the variance remains equally distributed. Therefore, the probability of a full reconstruction of learned weights from uniformly initialized filters decreases with increasing kernel size. Assuming that the learned filters are optimal, the reconstruction error should correlate with the accuracy gap. The increasing reconstruction error can also again be quantified by measurements of the filter variance entropy (see Appendix E for measurements).
### Scaling to ImageNet
In this section, we want to demonstrate that our results also scale to larger and more complex datasets such as ImageNet.
Figure 11: **Gap in validation accuracy** of frozen random and learnable LCResNet-20-16x\(\{E\}\) on CIFAR-10 with different convolution **kernel sizes under increasing LC expansion \(\{E\}\).
Figure 12: Visualization of the **reconstructed 9\(\times\)9 convolution filters** after linear combination of the first convolution layer in frozen random LCResNets-20-16x\(\{E\}\) with increasing expansion \(\{E\}\). Compared to random and learned weights.
Instead of repeating our previous experiments with theoretical architectures, we train off-the-shelf models that already integrate pointwise convolutions such as ResNet-50, ResNet-50d (He et al., 2019) [replaces the \(7\times 7\) convolutions in the stem by 3 layers of \(3\times 3\) convolutions], ResNeXt-50-32x4d (Xie et al., 2017) [uses depthwise separable convolutions], Wide-ResNet-50x2, and MobileNet v2/v3 (Howard et al., 2019). As a sanity check, we also train a ResNet-18 that does not compute linear combinations.
We train all models as per (Wightman et al., 2021) with automatic mixed precision training (Micikevicius et al., 2018) for 300 epochs at \(224^{2}\) px resolution without any pre-training and report top-1 and top-5 accuracy for both, learnable and frozen random training.
The results (Table 2) show larger gaps in accuracy on ImageNet than on CIFAR-10. This is not particularly surprising, as ImageNet is a more complex dataset and results in more diverse filter patterns (Gavrikov and Reuper, 2022; Zhang et al., 2022), which in turn increases the complexity of reconstruction from random filters. Additionally, most of the analyzed models contain convolutions larger than \(3\times 3\) in their initial layers, increasing the reconstruction error to learned weights e. g. we see a reduction in the gap when switching from ResNet-50 to ResNet-50d. Overall, we obtain highly non-trivial performances by simply exploiting linear combinations with as low as \(3.21\%\) gap in validation accuracy. And again, we observe that a Wide-ResNets shows a smaller gap than a traditional ResNet backing our scaling theory. Our sanity check shows an expected gap of \(35.34\%\), due to the lack of linear combinations.
Note that none of these models are as wide as the LCResNets we experimented with in previous sections. We hypothesize, that at an increased expansion or width, frozen random ImageNet models would also perform on par with their regular counterparts. Naturally, the proposed parameter reduction techniques will apply to these models as well, albeit they may be less impactful.
## 5 Conclusion, Discussion, and Future Work
We have demonstrated that networks that compute linear combinations of random convolution weights can achieve highly non-trivial performances without ever updating convolution weights. In the extremes, these frozen random models even outperform regular models. In combination with weight sharing and pruning of pointwise weights, both the number of total and learnable parameters can be reduced resulting in faster training and smaller model checkpoints on disc.
Also, we have observed, that in general, linear combinations can scale to larger kernels, albeit at an increasing reconstruction error against learnable weights. A relatively simple solution for this seems to be an adjustment of variance depending on the position in the filter leading to non-_i.i.d._ initializations. Ultimately this arises in the question of whether there is a more suitable set of filters that work for a variety of problems. Finding such a set may indeed allow training off-the-shelf architectures on-par with traditional learning without ever learning spatial convolution filters.
## 6 Limitations
Our proposed weight sharing only significantly reduces parameters if models rely on traditional convolutions. Yet, the number of saved parameters may be not that noticeable during training, as the majority of memory is consumed by gradients and intermediate computations. Further, recently there has been a trend (Sandler et al., 2018; Howard et al., 2019; Liu et al., 2022) towards depthwise convolutions where the dominant amount of parameters is allocated in pointwise weights and only a minor share in spatial convolutions. In these settings, neither freezing nor weight sharing significantly decrease the total number of parameters. Also, generally, some parameter savings techniques such as pruning may only become relevant on specialized soft- or hardware.
We have already shown that linear combinations struggle to reconstruct filters as the kernel size increases. We often assumed that learnable filters are optimal which is not nec
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**Val. Acc.**} & \multicolumn{2}{c}{**Val. Acc.**} & \multicolumn{1}{c}{**Top-1**} \\
**Model** & **LC** & **Frozen Rand [\%]** & **Learnable [\%]** & \(\Delta\) [\%] \\ \hline ResNet-18 & � & 36.54 (59.84) & 71.88 (02.27) & 35.34 \\ \hline ResNet-50 & ✓ & 74.45 (91.93) & 79.50 (94.50) & 5.05 \\ ResNet-50d & ✓ & 75.66 (92.28) & 79.78 (94.41) & 4.32 \\ ResNeXt-50-32x4d & ✓ & 76.60 (93.00) & 79.81 (94.50) & 3.21 \\ Wide-ResNet-50x2 & ✓ & 76.70 (92.83) & 80.09 (94.51) & 3.39 \\ MobileNet v2 5 1.00 & ✓ & 66.34 (86.63) & 71.69 (90.45) & 5.35 \\ MobileNet v3 1.00 & ✓ & 68.41 (87.96) & 76.60 (93.00) & 8.19 \\ \hline \hline \end{tabular}
\end{table}
Table 2: ImageNet top-1 (top-5 in brackets) validation accuracy of various models with regular and frozen random training. Results are reported over a single run.
Figure 13: **Variance of weights per element of convolution kernels** of learned (top) and linear combinations of frozen random (bottom) convolutions. Measured in LCResNet-20-16x1 with different kernel sizes. The variance was measured over all convolution kernels in a model. Kernels were normalized by the standard deviation of the entire convolution weight in their respective layer.
essarily guaranteed. For example, filters with large kernel sizes appear to only utilize a small amount of the filter volume. Yet, we have seen that LCs of frozen random filters appear to close the gap while intrinsically being limited to exploiting the full volume equally. At even higher expansion rates, they may even outperform traditional learnable filters.
|
2307.15288 | Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical
Systems using Constrained Autoencoders | Recently developed reduced-order modeling techniques aim to approximate
nonlinear dynamical systems on low-dimensional manifolds learned from data.
This is an effective approach for modeling dynamics in a post-transient regime
where the effects of initial conditions and other disturbances have decayed.
However, modeling transient dynamics near an underlying manifold, as needed for
real-time control and forecasting applications, is complicated by the effects
of fast dynamics and nonnormal sensitivity mechanisms. To begin to address
these issues, we introduce a parametric class of nonlinear projections
described by constrained autoencoder neural networks in which both the manifold
and the projection fibers are learned from data. Our architecture uses
invertible activation functions and biorthogonal weight matrices to ensure that
the encoder is a left inverse of the decoder. We also introduce new
dynamics-aware cost functions that promote learning of oblique projection
fibers that account for fast dynamics and nonnormality. To demonstrate these
methods and the specific challenges they address, we provide a detailed case
study of a three-state model of vortex shedding in the wake of a bluff body
immersed in a fluid, which has a two-dimensional slow manifold that can be
computed analytically. In anticipation of future applications to
high-dimensional systems, we also propose several techniques for constructing
computationally efficient reduced-order models using our proposed nonlinear
projection framework. This includes a novel sparsity-promoting penalty for the
encoder that avoids detrimental weight matrix shrinkage via computation on the
Grassmann manifold. | Samuel E. Otto, Gregory R. Macchio, Clarence W. Rowley | 2023-07-28T04:01:48Z | http://arxiv.org/abs/2307.15288v2 | Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical Systems using Constrained Autoencoders
###### Abstract
Recently developed reduced-order modeling techniques aim to approximate nonlinear dynamical systems on low-dimensional manifolds learned from data. This is an effective approach for modeling dynamics in a post-transient regime where the effects of initial conditions and other disturbances have decayed. However, modeling transient dynamics near an underlying manifold, as needed for real-time control and forecasting applications, is complicated by the effects of fast dynamics and nonnormal sensitivity mechanisms. To begin to address these issues, we introduce a parametric class of nonlinear projections described by constrained autoencoder neural networks in which both the manifold and the projection fibers are learned from data. Our architecture uses invertible activation functions and biorthogonal weight matrices to ensure that the encoder is a left inverse of the decoder. We also introduce new dynamics-aware cost functions that promote learning of oblique projection fibers that account for fast dynamics and nonnormality. To demonstrate these methods and the specific challenges they address, we provide a detailed case study of a three-state model of vortex shedding in the wake of a bluff body immersed in a fluid, which has a two-dimensional slow manifold that can be computed analytically. In anticipation of future applications to high-dimensional systems, we also propose several techniques for constructing computationally efficient reduced-order models using our proposed nonlinear projection framework. This includes a novel sparsity-promoting penalty for the encoder that avoids detrimental weight matrix shrinkage via computation on the Grassmann manifold.
**Reduced-order modeling involves constructing a low-dimensional approximation of a high-dimensional dynamical system in order to enable tasks such as rapid forecasting, state estimation/tracking from streaming observations, and feedback control. Autoencoders are a type of neural network that achieves dimensionality reduction by first compressing (encoding) and then reconstructing (decoding) high-dimensional state vectors. We introduce a novel autoencoder architecture that can be used to project dynamical systems onto learned low-dimensional submanifolds of the state space. Unlike prior work, we are able to learn appropriate projection fibers consisting of states that project to the same point on the manifold. These fibers are crucial for obtaining accurate forecasts for states that do not lie on the manifold. We introduce two new dynamics-aware cost functions to learn projections with appropriate fibers for reduced-order modeling of dynamical systems. Finally, we compare our approach to standard architectures and cost functions on a simple slow-fast system.**
## I Introduction
Dynamical systems arising from discretized continuum equations such as those governing fluid flows are often too high-dimensional to be used for real-time forecasting, state estimation, and control applications. Simplified reduced-order models (ROMs) can be constructed by projecting the original dynamical system, referred to as the full-order model (FOM), into lower-dimensional spaces. For reviews of existing methods, see Ghadami and Epureanu [1], Rowley and Dawson [2], Benner _et al._[3], Rozza, Stabile, and Ballarin [4].
The simplest approach is to employ linear projections onto subspaces. By far the most widely used method of this type is Proper Orthogonal Decomposition (POD), which is also known as Principal Component Analysis (PCA). As pointed out by Ohlberger and Rave [5], the effectiveness of linear projections for dimensionality reduction is limited by how closely relevant trajectories of the system can be approximated in linear subspaces. This can be quantified using various measures such as Kolmogorov \(n\)-width and the decay rates of singular values obtained by POD. Advection-dominated fluid flows exhibit spatially translating coherent structures that are notoriously difficult to model in low-dimensional subspaces. This has motivated the development of techniques for projecting dynamics of fluid flows onto low-dimensional curved manifolds. A recent approach by Lee and Carlberg [6] with subsequent extensions in Romor, Stabile, and Rozza [7] projects dynamics orthogonally onto a manifold learned from data using a convolutional autoencoder neural network. Similarly, Anderson and Farazmand [8] project dynamics orthogonally onto a user-specified manifold with an interpretable parametrization. Another approach utilized by Geelen, Wright, and Willcox [9] and Benner _et al._[10] is to project dynamics onto a manifold expressed as a graph over a POD subspace, in a direction orthogonal to that subspace.
A common feature of the above approaches for nonlinear projection-based model reduction is the use of orthogonal projection. However, the "direction of projection" determined by the projection fibers is of critical importance for modeling transient dynamics. To understand this, we first note that projection is unnecessary when modeling the dynamics of a system after transients have decayed onto an attracting sub-manifold. Indeed, one can find an embedding of the underlying manifold from post-transient data and then learn the dy
namics in the embedding space. Essentially any embedding will do since in this case one only cares about the system's behavior on the manifold. This is the principle behind successful data-driven methods for approximating dynamics near spectral submanifolds by Cenedese _et al._[11] and other low-dimensional manifolds learned from data using autoencoders as in Fresca, Dede', and Manzoni [12], Conti _et al._[13], Champion _et al._[14]. On the other hand, projection is needed to account for the ways in which perturbed trajectories settle back onto attracting manifolds. This is critical for modeling the effect of actuation and control because input signals can result in such perturbations.
To understand the importance of the projection fibers from a geometric point of view, consider the basin of an attracting normally hyperbolic invariant manifold. The basin is known to have an "asymptotic rate foliation" with leaves consisting of initial states that approach the same trajectory on the manifold [15; 16; 17]. Specifically, this is the trajectory of the base point of intersection of the leaf with the invariant manifold. Using a projection that collapses each leaf to its base point ensures that trajectories of the FOM settle onto the projected trajectories of the ROM at a rate determined by the fast time scale. If the projection fibers were different, then there could be a persistent or growing error between the FOM and the ROM. Near the manifold, the correct affine projections vary spatially according to a nonlinear partial differential equation derived by Roberts [18]. While this equation is difficult to solve analytically, Roberts [19] uses computer algebra to find series expansions for spatially varying modes defining the projection near equilibria. We illustrate the importance of the direction of projection using several examples including slow-fast systems with attracting slow manifolds [17].
The direction of projection is also important for modeling nonnormal dynamical systems such as those arising from shear-dominated fluid flows [20; 21]. Linear dynamical systems governed by nonnormal operators can give rise to phenomena including transient growth and high sensitivity to state variables that remain small along trajectories [20; 22]. The term "nonnormality" has also been used to characterize nonlinear systems exhibiting these phenomena either due to nonnormal linearized dynamics, or other nonlinear effects such as asymmetric nonlinear coupling between states. Model reduction methods for linear systems [23; 24; 25] yield oblique projections that account for nonnormality. To shed light on the utility of oblique projections, consider the oblique projections found using Balanced Truncation (BT) [26]. The BT projection coincides with state variable truncation in a coordinate system where the observability and controllability Gramians of a linear dynamical system are equal and diagonal. In nonlinear systems exhibiting nonnormal dynamics, oblique linear projections have also proven to be useful for reduced-order modeling [27; 28; 29; 30; 31; 32; 33; 34]. For example, Covariance Balancing Reduction using Adjoint Snapshots (CoBRAS) [34] replaces the controllability Gramian in BT with a covariance matrix of states along nonlinear trajectories. The observability Gramian is replaced by a gradient covariance matrix measuring the sensitivity of future outputs to state perturbations. The nonlinear balancing method introduced by Scherpen [35] yields a nonlinear oblique projection. This projection is constructed by truncating nonlinear coordinates in which functions measuring nonlinear observability and controllability are balanced in the neighborhood of a fixed point. This particular projection is computationally expensive to compute for high-dimensional systems, though significant progress on this issue has been made by Kramer, Gugercin, and Borggaard [36; 37] using local series expansion methods. In each of these cases, an oblique projection is needed in order to balance competing requirements to capture controllability and observabilty or state variance and sensitivity in nonnormal nonlinear systems.
In order to construct accurate reduced-order models of the systems described above, we introduce a large parametric class of nonlinear oblique projectons defined by constrained autoencoder neural networks. Autoencoders consist of an encoder neural network that reduces the dimension of an input vector followed by a decoder neural network that aims to reconstruct the original vector [38]. While the decoder can be trained to parametrize a manifold, the encoder does not generally recover the correct coordinates. This means that autoencoders do not generally define projections. Because of this issue, Lee and Carlberg [6] neglect the encoder after training and project the dynamics orthogonally onto the manifold defined by the decoder. A main contribution of our work is to introduce constraints on the architecture of an autoencoder so that it defines a nonlinear oblique projection. Specifically, we ensure that the process of decoding followed by encoding is always the identity. To do this, we introduce a pair of smooth activation functions which are inverses. One is used in the encoder and the other is used in the decoder. We also enforce bi-orthogonality constraints between the weight matrices defining corresponding layers of the encoder and decoder. Related architectures include neural networks with orthogonality constraints as employed by Lezcano-Casado and Martinez-Rubio [39] and the invertible neural networks developed by Dinh, Krueger, and Bengio [40], Dinh, Sohl-Dickstein, and Bengio [41], Kingma and Dhariwal [42] with applications to inverse problems by Ardizzone _et al._[43]. Using our approach, we are able to utilize the encoder and its tangent map to construct nonlinear projection-based reduced-order models with learned projection fibers. Specifically, the projection fibers can now be oblique and vary over the learned manifold.
The standard loss function used to train autoencoders minimizes the distance between data vectors and their reconstructions after applying the encoder and decoder. Minimizing this loss encourages the encoder to learn a direction of projection that is orthogonal to the learned manifold. In order to learn oblique projections for constructing accurate ROMs, we introduce two new loss functions leveraging trajectory data from the FOM and its governing equations. The first loss function we introduce combines the usual reconstruction error with the error between the time derivative of trajectories projected onto the learned manifold and the time derivative of the ROM at the projected points. This promotes learning of a manifold that lies near the training data and a direction of projection that yields correct time derivatives for the reduced-order model. The second loss function is closely related to the gradient-weighted objective minimized by CoBRAS [34]. Specifically,
we weight the differences between data vectors and their reconstructions using the autoencoder against gradients of random projections of the FOM's output along trajectories. This allows the network to learn the directions along which the state data can be safely projected onto the learned manifold while having minimal effect on future outputs of the system. We also introduce a sparsity-promoting penalty for the weight matrices of the encoder. In a similar manner to the Discrete Empirical Interpolation Method (DEIM) [44], sparsifying the encoder provides computational speedups for ROMs of systems with sparse coupling between state variables. In order to avoid the shrinkage and other biases concomitant with the standard \(\ell^{1}\) penalization, our penalty is invariant under the action of invertible matrices applied from the left to the weight matrix.
The remainder of the paper is organized as follows. In Section II we discuss how projection-based reduced-order models are constructed, and provide an example illustrating the importance of the fiber. In Section III we describe the architecture of our autoencoder, including the invertible activation functions and bi-orthogonality constraints that ensure that our autoencoder is a projection, and in Section IV we discuss the loss functions mentioned above. We provide a detailed study of a simple model problem with three states, and a two-dimensional slow manifold in Section V. Finally, in Section VI we discuss the construction of computationally efficient models.
## II Nonlinear projection-based reduced-order modeling
We consider a full-order model (FOM) described by a dynamical system
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}x&=f(x, u)\qquad x(0)=x_{0}\\ y&=g(x),\end{split} \tag{1}\]
with state variable \(x(t)\in\mathbb{R}^{n}\), output observations \(y(t)\in\mathbb{R}^{m}\), and inputs \(u(t)\) taking values in an arbitrary space. In many systems of interest the dynamics of the FOM can be accurately described on a low-dimensional submanifold \(\mathcal{M}\subset\mathbb{R}^{n}\) of the state space. This can happen when the dynamics cause states to rapidly approach \(\mathcal{M}\), or when the system's output is insensitive to state variables normal to \(\mathcal{M}\) in some coordinate system. Our goal is to identify a suitable manifold and construct a reduced-order model (ROM) of the form
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\,\hat{x}& =\hat{f}(\hat{x},u)\qquad\hat{x}(0)=\hat{x}_{0}\\ \hat{y}&=\hat{g}(\hat{x}),\end{split} \tag{2}\]
whose state \(\hat{x}\) evolves on \(\mathcal{M}\) and whose output \(\hat{y}\) approximates the output \(y\) of the FOM over some set of inputs and initial conditions of interest. One approach is to construct a smooth projection \(P:\mathbb{R}^{n}\to\mathbb{R}^{n}\), that is an idempotent map \(P\circ P=P\), and apply its tangent map to the FOM, yielding
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\,\hat{x}& =\hat{f}_{P}(\hat{x},u):=\mathrm{d}P(\hat{x})f(\hat{x},u)\qquad \hat{x}(0)=P(x_{0})\\ \hat{y}&=g(\hat{x}).\end{split} \tag{3}\]
Geometrically, Theorem. 1.15 in Michor [45] (see Figure 1) says that if \(P\) is a smooth idempotent map on a connected manifold \(\mathcal{N}\) (in our case \(\mathcal{N}=\mathbb{R}^{n}\)) then the image set \(\hat{\mathcal{M}}=\mathrm{Range}(P)\) is automatically a smooth, closed, and connected submanifold of \(\mathcal{N}\). _We aim to find a projection whose image manifold accurately captures trajectories of interest from the FOM over a range of initial conditions and input signals._ Moreover, the theorem shows that there is an open neighborhood \(\mathcal{U}\) of \(\hat{\mathcal{M}}\) in \(\mathcal{N}\) on which the tangent map \(\mathrm{d}P(x)\) has constant rank equal to the dimension of \(\hat{\mathcal{M}}\). In any such neighborhood \(\mathcal{U}\) of \(\hat{\mathcal{M}}\) where \(\mathrm{d}P(x)\) has constant rank, the fiber \(P|_{\mathcal{U}}^{-1}(x)=\{p\in\mathcal{U}:P(p)=x\}\) of each \(x\in\hat{\mathcal{M}}\) is a closed submanifold of \(\mathcal{U}\) with dimension complementary to \(\hat{\mathcal{M}}\) in \(\mathcal{N}\) and intersecting \(\hat{\mathcal{M}}\) transversally at \(x\). The tangent map \(\mathrm{d}P(x)\) at \(x\in\hat{\mathcal{M}}\) is the linear projection on \(T_{x}\mathcal{N}\) whose range is \(T_{x}\hat{\mathcal{M}}\) and whose nullspace is tangent to the fiber, that is \(\mathrm{Null}\,\mathrm{d}P(x)=T_{x}P^{-1}(x)\). This characterization of smooth projections is depicted in Figure 1. Since the ROM in (3) is obtained by modifying the FOM along the fibers of the projection, _we aim to design the projection so that varying initial states along the fibers has little affect on the system's output signal over a desired prediction horizon._
One approach described by Lee and Carlberg [6] is to parametrize a smooth submanifold \(\hat{\mathcal{M}}\subset\mathbb{R}^{n}\) and to define a projection by mapping \(x\) to the nearest point on \(\hat{\mathcal{M}}\). Such a projection is well-defined, smooth, and has constant rank in a neighborhood of \(\hat{\mathcal{M}}\) in \(\mathbb{R}^{n}\) thanks to the tubular neighborhood theorem (Theorem 6.24 in Lee [46]). The fibers of this projection are orthogonal to \(\hat{\mathcal{M}}\) and the corresponding tangent map \(\mathrm{d}P(x)\) is the orthogonal projection onto \(T_{x}\hat{\mathcal{M}}\). While projecting the dynamics of the FOM orthogonally onto the tangent space of the learned manifold minimizes the projection error \(\left\|f(x,u)-\mathrm{d}P(x)f(x,u)\right\|_{x}\) at each \(x\in\hat{\mathcal{M}}\), it can lead to large errors in the dynamics of the ROM described by (3). Therefore, we argue that the direction of projection as determined by the fibers of \(P\) and their tangent spaces at intersections with \(\mathrm{Range}(P)\) are important ingredients for constructing accurate nonlinear projection-based reduced-order models via (3).
The following toy example illustrates why the projection fibers are important for modeling the dynamics of slow-fast systems (see Kuehn [17]) using data-driven methods.
_Example 1_ (Sources of projection error in a slow-fast system).
Figure 1: The anatomy of a smooth projection as characterized by Thm. 1.15 in Michor [45].
Consider the two dimensional system,
\[\dot{x}_{1} =\lambda x_{1}(1-x_{1}^{2}) \tag{4}\] \[\epsilon\dot{x}_{2} =x_{1}^{2}-x_{2},\]
where \(\lambda,\epsilon>0\) and \(\epsilon^{-1}\gg\lambda\). There are two asymptotically stable fixed points at \((\pm 1,1)\) and one unstable fixed point at \((0,0)\). For small \(\epsilon\), (4) has an attracting slow invariant manifold containing the fixed points and lying near the critical manifold \(x_{2}=x_{1}^{2}\). Using Theorem 11.1.1 in Kuehn [17], we can express the slow manifold as a graph \(x_{2}=h_{\epsilon}(x_{1})\) whose expansion in \(\epsilon\) is given by
\[h_{\epsilon}(x_{1})=x_{1}^{2}+2\lambda\left(x_{1}^{4}-x_{1}^{2} \right)\epsilon\\ +4\lambda^{2}\left(2x_{1}^{6}-3x_{1}^{4}+x_{1}^{2}\right) \epsilon^{2}+\mathcal{O}\left(\epsilon^{3}\right). \tag{5}\]
Here, we use the parameter values \(\lambda=0.1\) and \(\epsilon=0.1\).
In Figure 2 we consider an initial condition (blue \(+\)) not lying on the slow manifold and two initial conditions (red \(+\)) resulting from different projections onto the slow manifold. The fast dynamics of \(x_{2}\) cause the resulting trajectory to approach the slow manifold vertically. In the left panel, the trajectory of the orthogonally projected initial condition has a large phase error on the slow manifold, with the two trajectories only approaching each other at the slow rate \(\mathcal{O}(e^{-2\lambda t})\) as \(t\to\infty\), as shown in Figure 3. On the other hand, the trajectory of the vertically projected initial condition has zero phase error, with the two trajectories converging at the fast rate \(\mathcal{O}(e^{-t/\epsilon})\).
In Figure 4 we consider two methods of projecting (5) onto the tangent space of an approximate manifold lying near the true slow manifold. This mimics the typical situation when a manifold is learned from data. The vector field in (5) evaluated along the approximate manifold (black arrows) has a large vertical component due to the approximation error and fast dynamics. Orthogonally projecting this vector field onto the approximate manifold in the left panel of Figure 4 yields dynamics (red arrows) that incorrectly capture the dynamics on the nearby slow manifold (blue arrows). Even the stability types of the fixed points on the approximate manifold are the opposites of their counterparts in the true system. On the other hand, obliquely projecting the vector field onto the approximate manifold along vertical fibers cancels out the large contribution of the fast dynamics as shown in the right panel of Figure 4. The resulting projected system closely approximates the dynamics on the slow manifold and correctly captures the stability types of the fixed points.
The importance of learning the correct direction of projection, which may be oblique to the learned manifold, motivates the development of a large parametric class of nonlinear projections based on autoencoders in the next section. The choice of optimization objectives for training these autoencoders is also crucial and will be pursued in Section IV.
Figure 4: Projection onto an approximation of the slow manifold, given by \((x_{1},0.95x_{1}^{2}+0.05)\), shown in green, comparing orthogonal projection (left) with oblique projection (right). The true slow manifold is shown in violet. The direction of dynamics on the true slow manifold are shown as blue arrows, and the projected dynamics are shown as red arrows. Insets show the direction of dynamics of the full model in black, along with the corresponding projections. Note that projecting orthogonally reverses the stability types of the fixed points, even though the manifolds are so close.
Figure 3: Absolute trajectory error for projection onto the slow manifold. _Left:_ Absolute error for the orthogonally projected initial condition is drawn in red and the expected asymptotic behavior in black. _Right:_ Absolute error and expected asymptotic behavior for the obliquely projected initial condition. Note the vastly different vertical scales.
Figure 2: Projected dynamics for Example 1, comparing orthogonal projection onto the slow manifold (left) with projection along the direction of the fast dynamics (right). The initial condition is shown as a blue plus, while the projected initial condition is shown as a red plus. Equilibrium points are indicated by blue dots.
## III Autoencoder architecture
An autoencoder (in particular, an "undercomplete" autoencoder) is a neural network architecture depicted in Figure 5 commonly used for dimension reduction and feature extraction in machine learning [38]. It consists of an "encoder" \(\psi_{e}\), which maps a data vector \(x\in\mathbb{R}^{n}\) into a lower-dimensional representation or "latent state" \(z\in\mathbb{R}^{r}\), \(r<n\), and a "decoder" \(\psi_{d}\) which reconstructs an approximation of \(x\) from the extracted latent variables. By optimizing the weights defining the encoder and decoder to accurately reconstruct data from a given distribution, the encoder learns a reduced set of features that describe the data. If the encoder and decoder are smooth maps and the process of decoding and encoding through \(\psi_{e}\circ\psi_{d}\) is the identity on the latent space, then, per our discussion in discussion in Section II the autoencoder \(P=\psi_{d}\circ\psi_{e}\) is a smooth projection onto its range \(\tilde{\mathcal{M}}=\text{Range}(P)=\text{Range}(\psi_{d})\), which is a smooth manifold. Moreover, the direction of projection is determined by the preimage fibers of the encoder \(P^{-1}(\psi_{d}(z))=\psi_{e}^{-1}(z)\). In the context of model reduction, we can describe the dynamics of the projection-based reduced-order model (3) with state \(\hat{x}=\psi_{d}(z)\) in the latent space according to
\[\boxed{\frac{\text{d}}{\text{dt}}z=\tilde{f}(z,u):=\text{d}\,\psi_{e}(\psi_{d} (z))f(\psi_{d}(z),u),\quad z(0)=\psi_{e}(x_{0})} \tag{6}\] \[y=\tilde{g}(z):=g(\psi_{d}(z)).\]
In this setup, we can take advantage of the features learned by the encoder to define the crucial direction of projection for reduced-order modeling. However, the constraint
\[\psi_{e}\circ\psi_{d}=Id \tag{7}\]
has yet to be enforced in the design of autoencoders. Instead, recent projection-based reduced-order modeling methods using autoencoders have followed the approach of Lee and Carlberg [6], in which the encoder is discarded and the dynamics are projected orthogonally onto the image manifold parametrized by the decoder.
Here, we design an autoencoder architecture in which the constraint (7) is automatically satisfied. This is accomplished layer-wise, as illustrated in Figure 6 by defining the encoder and decoder as compositions of layers
\[\psi_{e}=\psi_{e}^{(1)}\circ\cdots\circ\psi_{e}^{(L)},\qquad\psi_{d}=\psi_{d} ^{(L)}\circ\cdots\circ\psi_{d}^{(1)}, \tag{8}\]
with the property that \(\psi_{e}^{(l)}\circ\psi_{d}^{(l)}=Id\) for each layer \(l\). This ensures that the composition telescopes to produce the identity, that is,
\[\psi_{e}\circ\psi_{d} =\psi_{e}^{(1)}\circ\cdots\circ\psi_{e}^{(L-1)}\circ\psi_{e}^{(L) }\circ\psi_{d}^{(L)}\circ\psi_{d}^{(L-1)}\circ\cdots\circ\psi_{d}^{(1)} \tag{9}\] \[=\psi_{e}^{(1)}\circ\cdots\circ\psi_{e}^{(L-1)}\circ\psi_{d}^{(L- 1)}\circ\cdots\circ\psi_{d}^{(1)}\] \[\vdots\] \[=\psi_{e}^{(1)}\circ\psi_{d}^{(1)}=Id.\]
We note that if \(\psi_{e}^{(l)}:\mathbb{R}^{n_{l}}\rightarrow\mathbb{R}^{n_{l-1}}\) and \(\psi_{d}^{(l)}:\mathbb{R}^{n_{l-1}}\rightarrow\mathbb{R}^{n_{l}}\), then the dimensions \(n_{l}\) of the layers must be non-decreasing with \(r=n_{0}\leq n_{1}\leq\cdots\leq n_{L}=n\).
There are two main ingredients in our approach to constructing layers with the desired properties. The first is a pair of smooth activation functions \(\sigma_{+}\) and \(\sigma_{-}\) that act elementwise on vectors and satisfy \(\sigma_{-}\circ\sigma_{+}=Id\). The second is a constraint on the weight matrices \(\Phi_{l},\Psi_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\), such that they satisfy the biorthogonality condition \(\Psi_{l}^{T}\Phi_{l}=I\). These two ingredients are explained in the following subsections. Once these are defined, we construct the layers of the encoder and decoder according to
\[\boxed{\psi_{e}^{(l)}(x^{(l+1)})=\sigma_{-}\big{(}\Psi_{l}^{T}(x^{(l+1)}-b_{l })\big{)},} \tag{10}\] \[\psi_{d}^{(l)}(z^{(l-1)})=\Phi_{l}\sigma_{+}(z^{(l-1)})+b_{l},\]
where \(b_{l}\) are bias vectors. The resulting layer transformation then satisfies \(\psi_{e}^{(l)}\circ\psi_{d}^{(l)}=Id\), as desired.
_Remark 1_ (Parameter-dependent projections).: Intrinsic manifolds often depend on system parameters. A parameter-dependent projection can be obtained by allowing the biases \(b_{l}\) to be functions of a vector of parameters \(q\). Specifically, we can define \(b_{l}=W_{l}q+\tilde{b}_{l}\) where \(W_{l}\) and \(\tilde{b}_{l}\) are trainable weights and biases.
_Remark 2_.: By definition, the decoder reconstructs states in an affine subspace of dimension \(n_{L-1}\). Therefore, \(n_{L-1}\) should be chosen based on Kolmogorov \(n\)-width considerations so that state data from the system can be accurately reconstructed in an affine subspace of dimension \(n_{L-1}\).
Figure 5: The architecture of an autoencoder, consisting of two component neural networks, the encoder \(\psi_{e}\) and the decoder \(\psi_{d}\).
### Invertible, smooth activation functions
Here we define the smooth, invertible activation functions \(\sigma_{\pm}:\mathbb{R}\to\mathbb{R}\) to be used in the encoder and decoder. Geometrically, the condition that \(\sigma_{+}\) and \(\sigma_{-}\) are inverses is equivalent to the condition that their graphs are reflections about the line \(y=x\) in \(\mathbb{R}^{2}\). In rotated coordinates \((\tilde{x},\tilde{y})=\frac{\sqrt{2}}{2}(x+y,y-x)\) where the line \(y=x\) corresponds with \(\tilde{y}=0\), we let the graph of \(\sigma_{+}\) be the upper branch (\(\tilde{y}>0\)) of the hyperbola defined by
\[\frac{\left(\tilde{y}+\sin(\alpha)\right)^{2}}{\sin^{2}(\alpha)}-\frac{\tilde {x}^{2}}{\cos^{2}(\alpha)}=1, \tag{11}\]
where \(0<\alpha<\pi/4\). To form \(\sigma_{-}\), we flip the sign of \(\tilde{y}\). In (11), \(\tilde{y}\) is shifted by \(\sin(\alpha)\) in order to ensure that \(\sigma_{\pm}(0)=0\). By symmetry, the derivatives satisfy \(\sigma_{\pm}^{\prime}(0)=1\). As shown in Figure 7, the upper and lower branches of this hyperbola are reflections about the axis \(y=x\) with asymptotes at angle \(\alpha\) from this axis. The condition that \(0<\alpha<\pi/4\) ensures that these branches are graphs of well-defined functions \(\sigma_{\pm}\). In the results shown in Section V, we take \(\alpha=\pi/8\). Rotating back to \((x,y)\) coordinates, the activation functions are given by
\[\sigma_{\pm}(x)=\frac{bx}{a}\mp\frac{\sqrt{2}}{a\sin(\alpha)}\\ \pm\frac{1}{a}\sqrt{\left(\frac{2x}{\sin(\alpha)\cos(\alpha)}\mp \frac{\sqrt{2}}{\cos(\alpha)}\right)^{2}+2a},\\ \text{where}\quad\begin{cases}a=\csc^{2}(\alpha)-\sec^{2}(\alpha )\\ b=\csc^{2}(\alpha)+\sec^{2}(\alpha)\end{cases}. \tag{12}\]
Since \(0<a<b\), these functions are well-defined for all \(x\in\mathbb{R}\) and are infinitely continuously differentiable. Examining their graphs in Figure 7, we also observe that they resemble smooth, symmetric versions of "leaky" rectified linear units (ReLU) [47] common in deep learning applications.
### Weight matrix biorthogonality
The layers of the encoder and decoder in (10) are defined using biorthogonal weight matrices, that is, pairs of matrices \(\Phi,\Psi\in\mathbb{R}^{n\times r}\), \(n\geq r\geq 1\), satisfying \(\Psi^{T}\Phi=I\). Here, we describe how to enforce this constraint during training. In Appendix A we show that these matrices form a smooth, properly embedded submanifold \(\mathcal{B}_{n,r}\) of \(\mathbb{R}^{n\times r}\times\mathbb{R}^{n\times r}\) with dimension \(\dim\mathcal{B}_{n,r}=2nr-r^{2}\).
A simple way to optimize the weight matrices on the biorthogonal manifold using existing optimizers for Euclidean spaces is to rely on an over-parametrization. In particular, we over-parametrize \(\mathcal{B}_{n,r}\) over an open subset
\[D_{+}(\Pi_{n,r})=\left\{(\tilde{\Phi},\tilde{\Psi})\in\mathbb{R}^{n,r}\times \mathbb{R}^{n,r}\ :\ \det(\tilde{\Psi}^{T}\tilde{\Phi})>0\right\} \tag{13}\]
of the Euclidean space \(\mathbb{R}^{n,r}\times\mathbb{R}^{n,r}\) using a projection map
Figure 6: An autoencoder defines an idempotent map, i.e., a projection, as long as the process of decoding and then encoding any latent state \(z\) is the identity. This constraint can be imposed layer-wise with the last layer of the decoder being “undone” by the first layer of the encoder (in orange), the second to last layer of decoder being undone by the second layer of the encoder (in blue), and so on. The corresponding layers of the decoder and encoder form a collapsing “telescope” that produces the identity.
Figure 7: The smooth, invertible activation functions \(\sigma_{\pm}\) are constructed geometrically from a hyperbola with conjugate axis (dashed black line) parallel to \(y=x\) (black line) with asymptotes (dashed black lines) at angle \(\alpha\) from the conjugate axis. The graph of \(\sigma_{+}\) is the upper branch of this hyperbola (blue curve) and the graph of \(\sigma_{-}\) (red curve) is obtained by reflecting across the line \(y=x\).
\[\Pi_{n,r}:D_{+}(\Pi_{n,r})\rightarrow\mathcal{B}_{n,r}\text{ defined by} \tag{14}\]
Indeed, one can easily check that this map is smooth, surjective, and idempotent \(\Pi_{n,r}\circ\Pi_{n,r}=\Pi_{n,r}\). By composing an optimization objective function \(J:\mathcal{B}_{n,r}\rightarrow\mathbb{R}\) with the overparametrization we produce a new objective
\[J:=J\circ\Pi_{n,r}:D_{+}(\Pi_{n,r})\rightarrow\mathbb{R} \tag{15}\]
defined on an open subset of the Euclidean space \(\mathbb{R}^{n,r}\times\mathbb{R}^{n,r}\). Theorem 5 in Appendix A says that this is locally equivalent (by a smooth change of coordinates) to introducing \(r^{2}\) additional optimization variables on which the cost function does not depend. Consequently the over-parametrization does not introduce any new critical points into the optimization problem in the sense that the gradient of the original objective \(J(\Phi,\Psi)\) vanishes if and only if the gradient of the composition \(\nabla J(\tilde{\Phi},\tilde{\Psi})\) vanishes at every element \((\tilde{\Phi},\tilde{\Psi})\) in the preimage fiber \(\Pi_{n,r}^{-1}(\Phi,\Psi)\).
During optimization we must ensure that the representatives \((\tilde{\Phi},\tilde{\Psi})\) of the weight matrices \((\Phi,\Psi)=\Pi_{n,r}(\tilde{\Phi},\tilde{\Psi})\) remain in the domain \(D_{+}(\Pi_{n,r})\) and do not approach its boundary. To do this, regularization functions for each layer of the network are added to the cost function minimized during training. The regularization we use for each layer is given by
\[\boxed{R(\tilde{\Phi},\tilde{\Psi})=\left\|\tilde{\Psi}^{T}\tilde{\Phi}-I \right\|_{F}^{2}\left\|(\tilde{\Psi}^{T}\tilde{\Phi})^{-1}\right\|_{F}^{2}} \tag{16}\]
Evidently, this function is well-defined and smooth on \(D_{+}(\Pi_{n,r})\). It takes its minimum value of zero if and only if \((\tilde{\Phi},\tilde{\Psi})\in\mathcal{B}_{n,r}\) and it blows up to \(+\infty\) whenever \(\tilde{\Psi}^{T}\tilde{\Phi}\) approaches a singular matrix. Therefore, including this regularization term in the cost function forces the optimization iterates \(\tilde{\Phi},\tilde{\Psi}\) to remain near (in fact, to approach) \(\mathcal{B}_{n,r}\) without approaching the boundary of \(D_{+}(\Pi_{n,r})\). Note that the weight matrices \((\Phi,\Psi)=\Pi_{n,r}(\tilde{\Phi},\tilde{\Psi})\) of the autoencoder always remain in the biorthogonal manifold.
Our analysis in Appendix A also shows that the optimization domain \(D_{+}(\Pi_{n,r})\) is connected when \(n>r\). This means that restricting the optimizer to this domain does not cut off access to any part of the biorthogonal manifold by an optimization algorithm that follows a continuous path or proceeds in small steps. On the other hand, when \(n=r\), the biorthogonal manifold consists of pairs \((\Phi,\Phi^{-1})\), where \(\Phi\) are invertible \(n\times n\) matrices. In this case, \(D_{+}(\Pi_{n,n})\) and \(\mathcal{B}_{n,n}\) consist of two disjoint connected components corresponding to matrices with positive and negative determinants. However, we show in Appendix A that this is of no consequence for the optimization of the autoencoder's weights because any choice for the signs of the determinants in the square layers can be achieved without altering the projection \(P=\psi_{d}\circ\psi_{e}\). Hence, one does not have to explore other connected components during optimization.
We summarize the training procedure for our autoencoder in Algorithm 1. The specific cost functions and the types of training data we employ will be discussed in Section IV. These cost functions \(J\) can depend directly on the autoencoder \(P=\psi_{d}\circ\psi_{e}\), its derivatives, the biorthogonal weights \(\Phi_{l},\Psi_{l}\) and biases \(b_{l}\) in each layer, the data in the minibatch, the FOM, or other parameters, but not the weight matrix representatives \(\tilde{\Phi}_{l},\tilde{\Psi}_{l}\). In Section V D we discuss specific details of the training procedure for our main numerical example including the construction of minibatches and the choice of optimizer and optimization parameters such as the learning rate.
```
1:input: layer widths \(r=n_{0}\leq n_{1}\leq\dots\leq n_{L}=n\), activation function asymptote angle \(0<\alpha<\pi/4\), training data, cost function \(J\), regularization strength \(\beta>0\), number of training epochs, initial biorthogonal weight matrices \((\Phi_{l},\Psi_{l})\in\mathcal{B}_{n_{l},n_{l-1}}\), and initial bias vectors \(b_{l}\in\mathbb{R}^{n_{l}}\).
2: initialize \((\tilde{\Phi}_{l},\tilde{\Psi}_{l})=(\Phi_{l},\Psi_{l})\in D(\Pi_{n_{l},n_{l-1}})\) for \(l=1,\dots,L\)
3:for epoch \(=1,2,\dots,(\text{num. epochs})\)do
4: randomly split training data set into minibatches
5:for each minibatch do
6: construct autoencoder \(P=\psi_{d}\circ\psi_{e}\) with weights \((\Phi_{l},\Psi_{l})=\Pi_{n_{l},n_{l-1}}(\tilde{\Phi}_{l},\tilde{\Psi}_{l})\) and bias vectors \(b_{l}\) in layers \(l=1,\dots,L\) defined by (10) with activation functions in (12).
7: use \(P\) and the minibatch to compute the regularized cost \[\tilde{J}_{\beta}\left([\tilde{\Phi}_{l},\tilde{\Psi}_{l},b_{l}]_{l=1}^{L} \right)=J(P,\text{minibatch})+\beta\sum_{l=1}^{L}R(\tilde{\Phi}_{l},\tilde{ \Psi}_{l})\]
8: compute the gradient of \(\tilde{J}_{\beta}\) with respect to each \(\tilde{\Phi}_{l_{l}},\tilde{\Psi}_{l},b_{l}\)
9: use the gradients and an optimizer such as Adam[48] to update each \(\tilde{\Phi}_{l},\tilde{\Psi}_{l},b_{l}\)
10:endfor
11:endfor
12:return The autoencoder \(P\) along with the final weights \((\Phi_{1},\Psi_{1})=\Pi_{n_{l},n_{l-1}}(\tilde{\Phi}_{l},\tilde{\Psi}_{l})\) and biases \(b_{l}\) defining each layer \(l=1,\dots,L\).
```
**Algorithm 1** Autoencoder training procedure
_Remark 3_.: Another approach is to optimize the autoencoder's weights directly on the biorthogonal manifold using gradient-based techniques together with an appropriate retraction and vector transport[49]. In fact, the over-parametrization map \(\Pi_{n,r}\) yields a "projection-like retraction"[50] on \(\mathcal{B}_{n,r}\). The projection map onto the tangent space of \(\mathcal{B}_{n,r}\) given by Theorem 4 in Appendix A also yields a vector transport on \(\mathcal{B}_{n,r}\). This approach is discussed in Section 3.4 of Otto's thesis[51]. However, it is difficult to implement in existing neural network optimizers such as PyTorch[52] and TensorFlow[53], motivating the use of our simple over-parametrization instead.
### Preserving an equilibrium point
In certain cases such as in control applications, it is important for the reduced-order model to preserve a known equilibrium point of the system. To ensure that our nonlinear projection-based ROM has the same equilibrium point, it suffices to ensure that the equilibrium \(x_{\text{eq}}\) is contained in the learned manifold parametrized by the decoder. To do this, we obtain \(\psi_{d}(0)=x_{\text{eq}}\) by constraining the bias vector in the final
layer to be
\[b_{L}=x_{\text{eq}}-\Phi_{L}\sigma_{+}\circ\psi_{d}^{(L-1)}\circ\cdots\circ\psi_{ d}^{(1)}(0). \tag{17}\]
The resulting equilibrium point of the ROM (6) is located at the origin in the latent space of the autoencoder. Note that it is always possible to shift an equilibrium point to the origin \(x_{\text{eq}}=0\) by a change of coordinates in (1).
### Enforcing linear constraints on state vectors
Suppose we know that the state vectors \(x\) of the system (1) satisfy a collection of linear constraints \(\mathcal{L}x=0\). Examples included certain boundary conditions for solutions of partial differential equations as well as incompressibility constraints in fluid flows. To ensure that all projected states \(P(x)\) also satisfy these constraints, it suffices to ensure that the weight matrix and bias vector defining the last layer of the decoder satisfy \(\mathcal{L}\Phi_{L}=0\) and \(\mathcal{L}b_{L}=0\). Examining (10), we see that this yields \(\mathcal{L}\Psi_{d}^{(L)}(z^{(L-1)})=0\), which implies that \(\mathcal{L}P(x)=0\) for every \(x\). During training (see Algorithm 1), we optimize representatives \((\Phi_{L},\tilde{\Psi}_{L})\in D(\Pi_{n_{L},n_{L-1}})\) of the weight matrices \((\Phi_{L},\Psi_{L})=\Pi_{n_{L},n_{L-1}}(\Phi_{L},\tilde{\Psi}_{L})\). Enforcing the linear constraint \(\mathcal{L}\tilde{\Phi}_{L}=0\) on the representative automatically ensures that \(\mathcal{L}\Phi_{L}=0\), as one can easily verify from (14). In practice,
\[\mathcal{L}\tilde{\Phi}_{L}=0\qquad\text{and}\qquad\mathcal{L}b_{L}=0 \tag{18}\]
can be enforced either by parametrizing \(b_{L}\) and the columns of \(\tilde{\Phi}_{L}\) in a basis for \(\text{Null}(\mathcal{L})\), or by employing projected gradient descent methods to constrain the iterates within \(\text{Null}(\mathcal{L})\)
### Initialization
In Figure 7, we see that each activation function \(\sigma_{+}\) and \(\sigma_{-}\) can produce an output of larger magnitude than the input, and repeated activations in deep networks can result in much greater amplification. In addition, linear layers with operator norm greater that unity will further enlarge the output magnitude. These effects can lead to very large initial loss, which interferes with training. To address this issue, we initialize the network's weights, \(\Phi_{I}\) and \(\Psi_{I}\), such that \(\|\Phi_{I}\|_{2}=1\) and \(\|\Psi_{I}^{T}\|_{2}=1\). In particular, we randomly sample a square matrix from the orthogonal group and take the first \(n_{l}\) columns to construct \(\Phi\) and \(\Psi=\Phi\).
Regardless of whether we preserve the equilibrium point via a constraint, as discussed in Section III.3, it is usually advantageous for the network to have the property that \(P(0)=0\) at initialization. This property is satisfied if we set all biases to zero at initialization since \(\sigma_{+}(0)=\sigma_{-}(0)=0\).
## IV Optimization objectives
Choosing an appropriate optimization objective is crucial for learning projections that yield accurate reduced-order models. Typically, the parameters \(\theta\) consisting of the weights and biases in an autoencoder are optimized in order to minimize the average reconstruction error
\[J_{\text{Rec}}(P)=\,\mathbb{E}_{x}\left[\|x-P(x)\|^{2}\right] \tag{19}\]
over some distribution of states \(x\). For example, this might be an empirical distribution of states sampled along trajectories of interest from the full-order model. However, the loss function (19) encourages the projection to simply map each point \(x\) in the support of the distribution to its nearest point on the learned manifold \(\hat{\mathcal{M}}=\text{Range}(P)\). In a tubular neighborhood of \(\hat{\mathcal{M}}\) this yields an orthgonal projection in the sense that the line segment in \(\mathbb{R}^{n}\) (in the Riemannian case, the minimizing geodesic) connecting each \(x\) in the tubular neighborhood to \(P(x)\) lies in the fiber of \(P(x)\) and is orthogonal to \(T_{P(x)}\hat{\mathcal{M}}\) (see Lee [46] or Guillemin and Pollack [54]). As we discussed in Section II (see Figure 2), this is not always ideal for modeling the dynamics since the truncation does not account for coordinates that have a large influence on the future behavior of the system. In this section, we develop alternative objectives (loss functions) for training the autoencoder that account for this kind of sensitivity.
### Reconstruction and Velocity Projection (RVP) loss
One way to account for the dynamics is to penalize the difference between the time derivative of the reduced-order model (3) and the time derivative along projected trajectories of the full-order model (1). If \(x(t)\) is a trajectory of the FOM generating output \(y(t)\), then the time derivative of the projected trajectory \(x_{P}(t)=P(x(t))\) is
\[\frac{\text{d}}{\text{dt}}\,x_{P}(t)=\text{d}P(x(t))\,\frac{\text{d}}{\text{dt }}\,x(t). \tag{20}\]
At the same point \(x_{P}(t)\), the time derivative of the ROM (3) is given by
\[\hat{f}_{P}(x_{P}(t),u(t))=\text{d}P(x_{P}(t))f(x_{P}(t),u(t)). \tag{21}\]
These two quantities are equal for all \(t\) if and only if the trajectory \(\hat{x}(t)\) of the ROM agrees with the projected trajectory \(x_{P}(t)\). The following proposition shows how the integrated square error between these trajectories is bounded by a weighted integral of the square projection error for the time derivatives.
**Proposition 1** (Weighted velocity projection error).: _Let \(x(t)\), \(t\in[0,t_{f}]\) be a trajectory of (1) and let \(P:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be a smooth projection map. Suppose that (3) has a unique solution \(\hat{x}(t)\) over the same time interval. If \(x_{P}(t)=P(x(t))\) and \(\hat{x}(t)\) are contained within a subset \(\mathcal{U}\subset\mathbb{R}^{n}\) over which \(x\mapsto\hat{f}_{P}(x,u(t))=\text{d}P(x)f(x,u(t))\) has Lipshitz constant \(L\) for every \(t\in[0,t_{f}]\) then_
\[\int_{0}^{t_{f}}\left\|x_{P}(t)-\hat{x}(t)\right\|^{2}\text{d}t \\ \leq\int_{0}^{t_{f}}w_{L,t_{f}}(t)\left\|\frac{\text{d}}{\text{dt }}x_{P}(t)-\hat{f}_{P}(x_{P}(t),u(t))\right\|^{2}\text{d}t, \tag{22}\]
_where_
\[w_{L,J_{f}}(t)=\frac{1}{4L^{2}}\left[\left(e^{2L_{J_{f}}}-2L_{f}\right)-\left(e^{ 2Lt}-2Lt\right)\right]. \tag{23}\]
Proof.: The result essentially follows from a Gronwall-Bellman-type inequality. We provide the details in Appendix B
The significance of this result is that it tells us how to properly weight the velocity projection error in formulating optimization objectives. While we are primarily interested in the error between the trajectory of the ROM and the projected trajectory of the FOM, velocity projection error is a more convenient quantity to optimize because it does not involve integrating the ROM forward in time. Since it is difficult to determine the Lipschitz constant \(L\) in practice, we treat it as a parameter when using Proposition 1 as a guide to formulate objective functions for optimization. In this case, our choice of \(L\) reflects the rate at which we expect nearby trajectories of the ROM to diverge. The weight function is plotted in Figure 8 over a range of values for its parameters \(L\) and \(t_{f}\). We observe that in the limit as \(L\to 0\), the weight function becomes
\[\lim_{L\to 0}w_{L,t_{f}}(t)=\frac{1}{2}\left(t_{f}^{2}-t^{2}\right). \tag{24}\]
On the other hand, the weight function increases exponentially with \(Lt_{f}\), so we must be somewhat careful that \(Lt_{f}\) is not too large.
If the projected FOM trajectory \(x_{P}(t)\) agrees with the trajectory of the ROM \(\hat{x}(t)\), then the error between the output of the ROM \(\hat{y}(t)=g(\hat{x}(t))\) and the output of the FOM \(y(t)=g(x(t))\) is due only to the difference between \(g(x_{P}(t))\) and \(y(t)\). We can measure this using a reconstruction loss resembling (19). Therefore, we combine this reconstruction error with the bound on the trajectory error from Proposition 1 along trajectories \(x(t)\) drawn from a given distribution over initial conditions and input signals. Combining the reconstruction error and a constant \(\gamma\geq 0\) times the weighted velocity projection error into a single loss function, we seek to minimize
\[\left.\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt \hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.0pt\hskip-10.
FOM, i.e., to compute \(\left(\frac{\partial}{\partial x}f(x,u)\right)^{T}v\) and \(\left(\frac{\partial}{\partial x}g(x)\right)^{T}w\) for vectors \(v\in\mathbb{R}^{n}\) and \(w\in\mathbb{R}^{m}\).
The upshot of this added complexity is that the RVP loss can account for system nonnormality, as the following example illustrates.
_Example 2_ (RVP loss for a nonnormal linear system).: We consider the problem of finding a two-dimensional linear projection for the nonnormal linear system
\[\begin{split}\dot{x}_{1}&=-x_{1}+100x_{3}+u\\ \dot{x}_{2}&=-2x_{2}+100x_{3}+u\\ \dot{x}_{3}&=-5x_{3}+u\\ y&=x_{1}+x_{2}+x_{3},\end{split} \tag{26}\]
discussed as an example in Holmes _et al._[55]. In response to an impulse, the state \(x_{3}\) decays rapidly to zero and exerts a large influence on \(x_{1}\) and \(x_{2}\), causing them to experience a large transient growth before eventually decaying. in Holmes _et al._[55] it is shown that POD, while being optimal with respect to reconstruction loss (19), yields an orthogonal projection subspace closely aligned with the \(x_{1},x_{2}\) coordinate plane and therefore ignores the important influence of \(x_{3}\). The resulting model does not experience the large transient growth present in the impulse response of (26). To see why optimizing the projection with respect to RVP loss can improve this situation, consider the orthogonal projection \(P_{1,2}\) onto the \(x_{1},x_{2}\) coordinate plane in \(R^{3}\) and a state \(x=(x_{1},x_{2},x_{3})\) along the impulse-response trajectory of (26). While the reconstruction error \(x-P_{1,2}x=(0,0,x_{3})\) is small, the velocity projection error
\[P_{1,2}f(x,0)-P_{1,2}f(P_{1,2}x,0)=\begin{bmatrix}100x_{3}\\ 100x_{3}\\ 0\end{bmatrix} \tag{27}\]
is over 100 times larger in magnitude. By adding the velocity projection term to the loss function with a positive constant \(\gamma\), we force the learned projection to account for the influence of \(x_{3}\) on the dynamics. To substantiate our claims, we recreated the results presented in Holmes _et al._ alongside an RVP loss trained ROM, where \(P=\Phi\Psi^{T}\) and \(\Psi^{T}\Phi=I\), and the results are shown in Figure 9. The weight function (23) was used in (25) with \(\gamma=\|C\|_{op}=1\), \(L=1/t_{f}\), and \(t_{f}=6\). As expected, reconstruction loss (POD) performs poorly, while RVP loss performs nearly as well as balanced truncation.
RVP loss also resembles the loss function used to train SINDy-autoencoders [14]. However, there are two main differences. First, we determine the dynamics in the latent space via nonlinear projection using (6), whereas SINDy-autoencoders fit a model of the latent space dynamics during training. Second, RVP loss measures the error between the ROM and the FOM time derivative projected onto the learned manifold. In contrast, SINDy-autoencoders use a loss term measuring the difference between ROM and FOM time derivatives directly, i.e., without projection, together with another term measuring the difference between ROM and FOM time derivatives in the latent space. While the RVP loss depends only on the projection \(P\), the SINDy-autoencoder loss depends on the latent space, which can be scaled arbitrarily depending on the weights learned during training. Using projected time derivatives to formulate RVP loss prevents the fast dynamics of the FOM from dominating the loss function, which can cause the learned manifold to become aligned with the fast dynamics, rather than capturing slow dynamics. Incorporating our neural network architecture into SINDy-autoencoders where the latent space dynamics are learned is an interesting avenue of future work. Variants of RVP loss could also be formulated in this setting and compared to the original SINDy-autoencoder loss. We do not pursue this further here.
### Gradient-Aligned Projection (GAP) loss
In order to quantify how well a given (nonlinear) projection \(P:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) on the state space of a dynamical system preserves information about future outputs, we follow Otto, Padovan, and Rowley [34] and consider the map
\[F_{u}:x_{0}\mapsto(y(t_{0}),y(t_{1}),\ldots,y(t_{L})) \tag{28}\]
defined by simulating the full-order model (1) and sampling the output at times \(0\leq t_{0}<t_{1}<\cdots<t_{L}\). We aim to find a projection so that \(F_{u}(P(x))\) closely approximates \(F_{u}(x)\) over a distribution of states \(x\) and input signals \(u\) drawn from trajectories of the full-order model. If we are willing to simulate the FOM during the process of optimizing the projection, then we could form a loss function simply by computing the mean square error of these quantities. However, this will be costly for high-dimensional systems of interest and we prefer a method that uses simulation data obtained from the FOM prior to optimizing the projection.
We construct a cost function that can be computed using a fixed set of samples from the FOM obtained ahead of time by expanding the difference \(F_{u}(x)-F_{u}(P(x))\) in a first-order Taylor series about \(x\). Under mild boundedness and continuity assumptions, the following lemma says that we can use these first-order terms to bound the square error when \(x-P(x)\) is small.
**Lemma 1**.: _Let \(\mathcal{X}\) be a compact convex subset of \(\mathbb{R}^{n}\) and let \(\mathcal{U}\) be a compact topological space containing input signals \(u\) defined on the interval \([0,t_{L}]\). We assume that \((x,u)\mapsto F_{u}(x)\) is twice continuously differentiable with respect to \(x\) on \(\mathcal{X}\times\mathcal{U}\). Then there is a constant \(C\geq 0\) so that_
\[\left\|F_{u}(x)-F_{u}(P(x))\right\|^{2}\leq\left\|\,\mathrm{d}\,F_{u}(x)(x-P(x ))\right\|^{2}+C\left\|x-P(x)\right\|^{3} \tag{29}\]
_holds whenever \(x\in\mathcal{X}\), \(P(x)\in\mathcal{X}\), and \(u\in\mathcal{U}\)._
Proof.: This is a consequence of Taylor's theorem. We give the detailed proof in Appendix B.
Taking the expectation over a distribution of states and input signals over sets satisfying the hypotheses of the lemma,
the mean square approximation error is bounded by
\[\mathbb{E}_{x,u}\left[\left\|F_{u}(x)-F_{u}(P(x))\right\|^{2}\right] \leq\underbrace{\mathbb{E}_{x,u}\left[\left\|\mathrm{d}F_{u}(x)(x- P(x))\right\|^{2}\right]}_{J_{\mathrm{GAP}}(P)}\] \[\quad+C\ \mathbb{E}_{x}\left[\left\|x-P(x)\right\|^{3}\right]. \tag{30}\]
We use a sample-based approximation of the leading-order term as a cost function for optimizing \(P\) since, at least in principle, \(\mathrm{d}F_{u}(x)\) can be computed prior to optimization given a collection of states and input signals. For reasons that will become clear, we refer to this cost function as the gradient-aligned projection (GAP) loss.
In many practical applications the dimension \((L+1)m\) of the output sequences is large enough to make computing \(\mathrm{d}F_{u}(x)\) impractical. Instead, we can rely on randomized projections of the output sequences in a similar manner to the output projection method introduced by Rowley [25]. Specifically, we select an independent, zero mean, isotropic random vector \(\xi\in\mathbb{R}^{(L+1)m}\) and compute the univariate gradients
\[g=\nabla(\xi^{T}F_{u})(x) \tag{31}\]
using the adjoint of the full-order model linearized about the time-\(t_{L}\) trajectory starting at \(x\) as described in Otto, Padovan, and Rowley [34]. These randomized univariate gradients allow us to write the GAP loss as
\[\boxed{J_{\mathrm{GAP}}(P)=\ \mathbb{E}_{x,g}\left[\left\langle g,\ x-P(x) \right\rangle^{2}\right].} \tag{32}\]
Collecting samples \(\{(x_{i},g_{i})\}_{i=1}^{s}\) drawn from the joint distribution of \((x,g)\), we can compute a projection by minimizing the empirical GAP loss
\[\mathcal{J}_{\mathrm{GAP}}(P)=\frac{1}{s}\sum_{i=1}^{s}\big{\langle}g_{i},\ x_{i}-P(x_{i})\big{\rangle}^{2}. \tag{33}\]
Minimizing GAP loss over linear projections for linear time-invariant (LTI) systems becomes equivalent to balanced truncation (BT) for certain limits and distributions of \(x\). For example, let
\[\dot{x} =Ax+Bu \tag{34}\] \[y =Cx\]
be an asymptotically stable LTI system with \(\dim(u)=d_{u}\). Suppose we sample \(x\) uniformly from impulse response trajectories with \(0\leq t\leq t_{f}\) and initial conditions \(x(0)=Be_{j}\), \(j=1,\ldots,d_{u}\). If we choose uniformly spaced sample times \(t_{k}=kt_{f}/L\) to form \(F_{u}\), then it is straightforward to show that
\[\lim_{t_{f}\rightarrow\infty}\lim_{L\rightarrow\infty}\frac{t_{f}^{2}d_{u}}{ L}J_{\mathrm{GAP}}(P)=\mathrm{Tr}\left[W_{o}(I-P)W_{c}(I-P)^{T}\right] \tag{35}\]
where \(W_{o}\) and \(W_{c}\) are the observability and controllability Gramians of (34). The quantity on the right is minimized by the balanced truncation projection [34; 57; 56]. The performance of BT on the nonnormal system in Example 2 is shown in Figure 9, providing evidence that minimizing GAP loss is appropriate for modeling such systems.
### Orthogonality-promoting regularization
Regularization is often employed in over-parametrized neural networks to prevent over-fitting. A commonly used method is to penalize the squared Frobenius norm of the weights in each layer. It turns out that applying this penalty to weights \((\Phi,\Psi)\) in the biorthogonal manifold \(\mathcal{B}_{n,r}\) drives them towards orthogonality, that is \(\Phi=\Psi\) having orthonormal columns. Specifically, we have the following result:
Figure 9: Impulse response (left) and frequency response (right) for Example 2, comparing the full model and three second-order reduced-order modeling approaches: reconstruction loss (POD), reconstruction and velocity projection loss (RVP), and balanced truncation.
Theorem 1.: _The minimum value of the function_
\[R_{F}(\Phi,\Psi)=\|\Phi\|_{F}^{2}+\|\Psi\|_{F}^{2} \tag{36}\]
_over \((\Phi,\Psi)\in\mathbb{B}_{n,r}\) is \(2r\) and this value is achieved if and only if \(\Phi=\Psi\) has orthonormal columns. Moreover, \(R_{F}(\Phi,\Psi)\to\infty\) if any of the principal angles [58; 59] between \(\operatorname{Range}(\Phi)\) and \(\operatorname{Range}(\Psi)\) approach \(\pi/2\)._
Proof.: We give a proof in Appendix B.
## V Case study of a simplified fluid model
In this section, we compare our reduced-order modeling approach to several other methods, on a highly simplified model of a fluid flow. The studied dynamical system is a three-state model of vortex shedding behind a circular cylinder, as described by Noack _et al._[60]. In particular, the system is defined by the following set of equations:
\[\dot{x}_{1} =\mu x_{1}-\omega x_{2}+Ax_{1}x_{3} \tag{37}\] \[\dot{x}_{2} =\omega x_{1}+\mu x_{2}+Ax_{2}x_{3}\] \[\varepsilon\dot{x}_{3} =-(x_{3}-x_{1}^{2}-x_{2}^{2})\]
with \(\mu=0.1\), \(\omega=1\), \(A=-0.1\), and \(\varepsilon=0.1\). This system possesses an unstable fixed point at the origin and a global asymptotically stable limit cycle of radius \(1\) about \(x_{1}=x_{2}=0\) in the plane \(x_{3}=1\). Additionally, the system's slow manifold is situated a distance \(\mathcal{O}(\varepsilon)\) away from the critical manifold \(x_{3}=x_{1}^{2}+x_{2}^{2}\). An asymptotic approximation of the slow manifold to second order in \(\varepsilon\) can be found in Otto's thesis [51] along with the recurrence relation needed to obtain the higher-order terms. In this work we use the fourth-order approximation in \(\varepsilon\) computed using this relation. The slow manifold calculation follows the same procedure discussed in Example 1. In the following section, we outline the network architectures responsible for learning the nonlinear projection described in Section II.
### Autoencoder Architectures
We compare two autoencoder architectures in this manuscript. The first architecture, which we refer to as ProjAE, is the projection-constrained autoencoder described in Section III. The second architecture, referred to as StandAE, is a standard state-of-the-art differentiable autoencoder. In particular, this architecture's encoder and decoder are modeled as fully connected networks using the GeLU activation function, denoted \(\sigma\)[61], which satisfies the requirement of differentiability discussed in Section II. The encoder and decoder layer structure, denoted as \(\psi_{e}^{(l)}\) and \(\psi_{d}^{(l)}\), follow the standard feed-forward neural network form: \(\sigma(A^{(l)}x+b^{(l)})\)[38]. As a final note, we attached a linear output layer to both the encoder and decoder of StandAE, i.e., \(\psi_{e}=W_{e}\psi_{e}^{(L)}\circ\cdots\circ\psi_{e}^{(1)}\) and \(\psi_{d}=W_{d}\psi_{d}^{(L)}\circ\cdots\circ\psi_{d}^{(1)}\) where \(W_{e}\) and \(W_{d}\) are trainable weight matrices.
To initialize the weights and biases of ProjAE, we follow the procedure outlined in Section III.5. StandAE was initialized using the procedure discussed in Section 2.2 of Hendrycks _et al._[62]. In particular, the rows of each weight matrix were uniformly sampled from unit hypersphere. StandAE's weights were then scaled by a GeLU dependent factor designed too maintain both activation and back-propagated gradients variance as one forward or backward through the network. For both architectures, the biases are set to zero at initialization. Both architectures have a \(5\) layer encoder and \(5\) layer decoder where \(\psi_{e}^{(5)}:\mathbb{R}^{3}\to\mathbb{R}^{2}\), \(\psi_{d}^{(5)}:\mathbb{R}^{2}\to\mathbb{R}^{3}\), and \(\psi_{e}^{(i)},\psi_{d}^{(i)}:\mathbb{R}^{3}\to\mathbb{R}^{3}\) for \(i=1,2,3,4\). We do not use the constraint described in Section III.3 to preserve the equilibrium at the origin.
### Autoencoder-Based Reduced-Order Models
When defining the reduced-order model, we must select a method by which we project the dynamics onto the learned manifold. As discussed in Section III, one approach is to use the encoder to define the reduced-order model (6). We denote this type of reduced-order model by EncROM. The approach used by Lee and Carlberg [6] has instead projected the dynamics orthogonally onto the tangent space of the manifold parameterized by the decoder. We denote this type of reduced-order model by DecROM.
### Data Collection
Two separate data sets were generated to examine the effect on training. The first data set, which we call the Fine Data Set, consisted of \(1000\) trajectories with initial conditions given on a \(10\times 10\times 10\) grid evenly spaced in the cube \([-1,1]^{3}\). The second data set, which we call the Coarse Data Set, consisted of \(216\) trajectories with initial conditions \(\big{\{}-1,-\frac{1}{3},-\frac{1}{9},\frac{1}{9},\frac{1}{3},1\big{\}}^{3}\). For each training set, we created a validation data set with same number of trajectories, with initial conditions sampled uniformly from the cube \([-1,1]^{3}\). The testing data set consisted of \(1000\) trajectories with initial conditions sampled uniformly in the cube.
Trajectories were generated by numerically integrating the governing equations with a \(4\)th-order Runge-Kutta method, over the time interval \([0,20]\), using a time step \(\Delta t=0.1\). In order to generate the gradient samples for GAP loss, we used the method of long trajectories discussed by Otto, Padovan, and Rowley [34] with parameters \(s_{g}=10\) and \(L=20\). The hyperparameter \(L\) was chosen such that if a gradient sample was based at the initial condition, then the adjoint would be sampled before and after transients have decayed. In this example, transients decay after about \(0.2\) time units and trajectories reach the limit cycle by about \(2\) time units. Using the aforementioned parameters, the fine data set has a total of \(201,000\) state samples and \(191,558\) gradient samples, and the coarse data set has a total of \(43,416\) state samples and \(41,307\) gradient samples. The hyperparameter \(s_{g}\) was chosen such that the
number of state and gradient samples were roughly the same to give all loss functions a fair chance to perform.
### Training Procedure
In total, 12 training sessions were carried out, with each session corresponding to a unique combination of data set, architecture, and loss function. During each session, 64 networks were trained simultaneously, each with a different choice of initial parameters (weights and biases). All 12 training sessions used the same 64 sets of initial parameters. The weights and biases of each network were saved during training if the lowest loss-function evaluation on the validation data set was achieved. Due to the computational cost of simulating the autoencoder-based reduced-order model, we used the loss function to determine which model to save, instead of simulating the reduced-order model explicitly. The computational challenge of simulating the reduced-order model is addressed in Section VI. After each session's training phase, the most effective EncROM and DecROM models were chosen from the saved networks, and this selection process was based on the true ROM prediction error (rather than the loss function), using the fine or coarse validation data sets.
To ensure a fair comparison across network architectures and loss functions, each training session employed mostly identical hyperparameters. All training sessions implemented the PyTorch ReduceLROnPlateau class with a patience of 50, an initial learning rate of \(10^{-3}\), and a validation loss equal to the loss-function evaluation on the validation data set. Using PyTorch's built-in Adam[48] optimizer with default settings, each network was trained for a total of 900 epochs.
For reconstruction loss and GAP loss, a batch size of 400 was employed. In the case of RVP loss, we utilized a prediction horizon of \(t_{f}=20\) and a trajectory batch size of 2 (with a time step \(\Delta t=0.1\), as mentioned previously), so that each mini-batch looks at the same number of sample points. Trapezoidal integration was used to discretize the integral in (25) defining the RVP loss. Since the full state is being observed, we set \(\gamma=1\) per the discussion in Section IV.1. We use \(L=1/t_{f}\) to define the weight function in (23). Finally, all ProJ AEs were trained using the regularization in (16) with a factor \(\beta=10^{-5}\).
### Results
We expect a successful reduced-order model to learn and capture three fundamental features of this example. First, the autoencoder's range should closely approximate the system's slow manifold. Second, the projected dynamics should approximate the dynamics on the slow manifold. Finally, the fibers of projection should align with the direction of fast dynamic transients.
In order to quantitatively analyze these features, we define two performance metrics. The first metric measures the proximity between the autoencoder's range and the slow manifold. In particular, _manifold reconstruction error_ is defined by
\[\frac{1}{|\tilde{\mathcal{M}}|}\sum_{x\in\tilde{\mathcal{M}}}\|x-P(x)\|_{2}^{2}, \tag{38}\]
where \(\tilde{\mathcal{M}}\) is a finite subset of the system's slow manifold. In this study, \(\tilde{\mathcal{M}}\) contains points of the form \((x_{1},x_{2},h_{e}(x_{1},x_{2}))\) where \(h_{e}(x_{1},x_{2})\) is the slow manifold's graph representation to fourth order. Furthermore, coordinates \((x_{1},x_{2})\) were sampled on a \(20\times 20\) grid evenly spaced in the square \([-1,1]^{2}\). The second metric, called _ROM prediction error_, quantifies a ROM's ability to predict an initial condition's future, and is defined by
\[\frac{1}{N}\sum_{n=1}^{N}\|\hat{x}_{n}-x_{n}\|_{2}^{2}, \tag{39}\]
where \(x_{n}\) corresponds to a state sample from either the validation or test data set and \(\hat{x}_{n}\) denotes the corresponding state predicted by the ROM. Note that the prediction error above depends on both the autoencoder \(P\) and the chosen method of projection, EncROM or DecROM. Employing these metrics, alongside other qualitative techniques, let us now examine how the various methods presented here perform, relative to existing methods.
First, let us explore how closely the autoencoder's range approximates system's slow manifold. Looking at Table 1, we find that all models trained on reconstruction loss are able to consistently capture the slow manifold, with a manifold reconstruction error of at most 0.005. The majority of models trained on GAP loss also have small manifold reconstruction error. We observe a large manifold reconstruction error for RVP loss, possibly because the models were selected on forecasting, and not reconstruction.
Next, let us explore how well the reduced-order models make forecasts from new initial conditions, as quantified by the ROM prediction error (39). As shown in the "Pred" columns of Table 1, we find that for all 12 autoencoder-loss combinations, the EncROM models exhibit a lower prediction error than the DecROM models. Therefore, at least for this example, an encoder-based ROM provides a benefit over the traditionally-used decoder-based ROM. This effect is more pronounced for our new cost functions, GAP and RVP. Furthermore, some of the DecROM models blow up or have very large error. This is because of an effect we observed in Example 1 in Section II: in particular, orthogonally projecting onto the tangent space of an approximate manifold can yield incorrect stability types for fixed points and periodic orbits.
For prediction of dynamics, the traditional Reconstruction loss performs poorly across the board, for reasons we have explained in Example 1.
The lowest error was obtained for RVP loss, with EncROM projection, and ProjAE architecture, with GAP loss having similar results. The degree to which the constraints imposed by the ProjAE architecture are beneficial depend both on the cost function and the size of the training data set. Enforcing constraints significantly improved performance when training with RVP loss, and this benefit was more pronounced when
the size of training data set was smaller. When training with GAP loss on a large data set, the standard autoencoder was able to achieve high forecasting accuracy without additional constraints. These constraints were beneficial when training with GAP loss on a smaller data set.
These observations are illustrated further in Figure 10, which shows the error on all 50 test trajectories. We start with our best architecture (ProjAE architecture, with RVP loss and EncROM projection), and change one component at a time.
Figure 11 shows a typical test trajectory in both the 3-dimensional state space, as well as the latent space, for ProjAE architecture and EncROM projection, comparing the three loss functions (Reconstruction, GAP, and RVP). When reconstruction loss is used, the projection approximates an orthogonal projection, while the other loss functions result in oblique projection, accounting for the fast dynamics.
## VI Assembling efficient ROMs
The example discussed in the previous section began with a system that was already low dimensional, with only 3 states. For higher dimensional systems, significant computational challenges arise when simulating the reduced-order model. In this section, we discuss three possible methods for addressing these challenges.
After training the autoencoder, we obtain a nonlinear projection-based reduced-order model (6) in the autoencoder's latent space coordinates. Even though the latent space is low-dimensional, evaluating the right-hand side, \(\tilde{f}\), of (6) involves evaluating the right-hand-side, \(f\), of the full-order model (1). Hence, we cannot expect speedups when simulating the ROM by evaluating \(\tilde{f}\) in this manner. This section presents three methods for obtaining computationally-efficient ROMs that can be evaluated more quickly than the FOM. However, even in cases when it is more costly to evaluate the ROM than the FOM, we note that it may be possible to use larger time steps when simulating the ROM due to the removal of dynamics with fast time scales.
### Fitting the model in latent space
A simple approach to construct an efficient ROM in the latent space is to fit a surrogate model for \(\tilde{f}\) and \(\tilde{g}\) in (6) using sample-based interpolation or regression. Specifically, given a collection of samples \(z_{i}\in\mathbb{R}^{r}\) in the latent space of the autoencoder and samples of the input \(u_{i}\), we can evaluate \(z_{i}=\tilde{f}(z_{i},u_{i})\) and \(y_{i}=\tilde{g}(z_{i})\) using the definitions in (6), which rely on the FOM. Once the time derivatives and outputs at the samples have been evaluated, we can fit surrogates for \(\tilde{f}\) and \(\tilde{g}\) that can be evaluated more efficiently. Since \(\tilde{f}\) and \(\tilde{g}\) can be evaluated at arbitrary pairs \((z,u)\), we can choose the samples \((z_{i},u_{i})\) to achieve a desired level of accuracy for the surrogates of \(\tilde{f}\) and \(\tilde{g}\). In particular, we are not limited to the encoded snapshots used to train the autoencoder.
Appropriate sampling and fitting procedures to construct the surrogates will depend on the dimension of the latent space. For very low-dimensional latent spaces (\(\leq\) 5-dimensional) it is possible to construct a grid of sample locations and use spline-based interpolation. For higher-dimensional latent spaces, one can rely on random sampling and radial basis function interpolation or Gaussian process regression. The distribution from which the samples are drawn can be based on a density estimate from the encoded snapshot data collected from the FOM. More samples can also be added in an iterative manner until a desired level of accuracy for the surrogates of \(\tilde{f}\) and \(\tilde{g}\) is achieved.
### Assembling tensors using the outer layer
In certain cases when the full-order model has polynomial nonlinearities, we can improve the efficiency of the reduced-order model by pre-computing the linear projection of the full-order model associated with the outer-most layer of the autoencoder. Our method is similar to the approach described in Section 4.2 of Holmes _et al._[55] for assembling Petrov-Galerkin models. To illustrate, suppose that the right-hand side of the full-order model (1) has a term \(f_{2}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) that can be expressed as
\[f_{2}(x)=h_{2}(x,x), \tag{40}\]
where \(h_{2}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a symmetric bilinear form. In the Navier-Stokes equations, such terms arise from discretization of the convective term \(u\cdot\nabla u\) and from the solution of the pressure-Poisson equation \(\nabla P=-\nabla\Delta^{-1}\nabla\cdot(u\cdot\nabla u)\), where \(u\) denotes the velocity field. More generally, a system with polynomial nonlinearities can always be converted into a system with quadratic nonlinearities evolving on an invariant submanifold in a higher-dimensional state space via a lifting process called "quadratization"[63; 64; 65]. The quadratic nonlinearity can then be expressed using a symmetric bilinear form.
Isolating the linear operations in the outer-most layer of the autoencoder, we observe that the encoder and decoder can be written as
\[\psi_{e}:x\mapsto\tilde{\psi}_{e}\big{(}\Psi_{L}^{T}(x-b_{L})\big{)},\qquad \psi_{d}:z\mapsto b_{L}+\Phi_{L}\tilde{\psi}_{d}(z), \tag{41}\]
where
\[\tilde{\psi}_{e}=\psi_{e}^{(1)}\circ\cdots\circ\psi_{e}^{(L-1)}\circ\sigma_{- },\qquad\tilde{\psi}_{d}=\sigma_{+}\circ\psi_{d}^{(L-1)}\circ\cdots\circ\psi_ {d}^{(1)}.\]
This allows us to express the ROM given by (6) as
\[\begin{split}\tilde{f}(z,u)&=\mathrm{d}\,\tilde{ \psi}_{e}\big{(}\tilde{\psi}_{d}(z)\big{)}\Psi_{L}^{T}f\big{(}b_{L}+\Phi_{L} \tilde{\psi}_{d}(z),\;u\big{)}\\ \tilde{g}(z)&=g\big{(}b_{L}+\Phi_{L}\tilde{\psi}_{d} (z)\big{)}.\end{split} \tag{42}\]
The contribution of the bilinear term to the ROM expressed
element-wise is given by
\[\left[\Psi_{L}^{T}f_{2}\big{(}b_{L}+\Phi_{L}\Psi_{d}(z)\big{)}\right]_ {i}=\underbrace{\Psi_{L}[:,i]^{T}h_{2}\big{(}b_{L},b_{L}\big{)}}_{a_{l}}+\] \[\qquad 2\sum_{j_{1},j_{2}=1}^{n_{L-1}}\underbrace{\Psi_{L}[:,i]^{T}h_ {2}\big{(}b_{L},\Phi_{L}[:,j_{1}]\big{)}}_{b_{j_{1}}\wedge_{j_{2}}}\big{[}\bar{ \Psi}_{d}(z)\big{]}_{j_{1}}+\] \[\sum_{j_{1},j_{2}=1}^{n_{L-1}}\underbrace{\Psi_{L}[:,i]^{T}h_{2} \big{(}\Phi_{L}[:,j_{1}],\Phi_{L}[:,j_{2}]\big{)}}_{c_{j_{1}}\wedge_{j_{2}}} \big{[}\bar{\Psi}_{d}(z)\big{]}_{j_{1}}\big{[}\bar{\Psi}_{d}(z)\big{]}_{j_{2}}. \tag{43}\]
We observe that the elements of the tensors \([a_{l}]\), \([b_{l_{j},l_{j}}]\), and \([c_{l_{j},l_{j},l_{j}}]\) can be computed and stored prior to simulating the ROM.
Even if one does not employ quadratization, the above approach applies analogously to any term in the governing equations that can be expressed as a sum of multilinear forms, that is, any polynomial term of finite degree. The rank of the tensors to be assembled is \(d+1\) where \(d\) is the degree of
\begin{table}
\end{table}
Table 1: Manifold reconstruction error (38) and ROM prediction error (39) for the 12 training sessions. Each row corresponds to an autoencoder-ROM combination described in Sections V.1 and V.2. Each column identifies the type of loss function used during training (Reconstruction, GAP, or RVP) as well as the performance metric (Manifold or Prediction). The lowest prediction error is achieved by the ProjAE network, with RVP loss, and with EncROM projection.
Figure 10: Comparative study of the model that performs best for prediction (ProjAE architecture, RVP loss, and EncROM projection, trained on the fine data set), changing one component at a time. _Left:_ ROM prediction error for 50 test trajectories, comparing the two network architectures (ProjAE and StandAE). Our projection-constrained autoencoder reduces average prediction error by three orders of magnitude. _Right:_ ROM prediction error for the typical projection approach (DecROM) and loss function (Reconstruction). In both cases, our approach significantly decreases the model prediction error.
Figure 11: Visualizing a sample test trajectory for each loss function. Models seen here are trained on the Fine Data Set. _Left:_ The slow and learned manifold (shaded violet and green, respectively), along with a trajectory from the full and reduced models (in blue and red); _Right:_ Corresponding trajectories in the latent space. In the top left figure, the initial condition is projected almost orthogonally onto the learned manifold, while in the other cases, the initial condition is projected vertically along the direction of fast dynamics.
the polynomial nonlinearity. The dimensions of these tensors are all equal to the layer width \(n_{L-1}\). In general, these tensors are dense. Therefore, the amount of storage and number of operations required to act with the pre-assembled tensors on \(\bar{\Psi}_{d}(z)\in\mathbb{R}^{n_{L-1}}\) both scale as \(\mathbb{O}\big{(}(n_{L-1})^{d+1}\big{)}\). This should be compared against the \(\mathbb{O}(n)\) scaling typically required to act with sparse finite difference operators of the FOM acting on \(\psi_{d}(z)\in\mathbb{R}^{n}\). Therefore, simulating a ROM based on pre-assembled tensors will likely be advantageous only when \((n_{L-1})^{d+1}\ll n\), that is, when the degree of the polynomial nonlinearity and the width \(n_{L-1}\) are both sufficiently small. For example, when the FOM has \(n=10^{5}\) state variables coming from a finite difference discretization of the incompressible Navier-Stokes equations (\(d=2\)), we only expect to see advantages from pre-assembling tensors when \(n_{L-1}\) is in the low tens.
Since the decoder reconstructs states in an affine subspace of dimension \(n_{L+1}\), making this parameter too small can impair the decoder's ability to reconstruct state data with slowly decaying Kolmogorov \(n\)-widths (see Remark 2). The trade-off between computational efficiency and representational power associated with the choice of \(n_{L-1}\) limits the scope of applications in which pre-assembling tensors will be advantageous for reduced-order modeling.
### Sparsifying the encoder
When the state variables in the governing equations of the full-order order model are sparsely coupled, computational speedups for the reduced-order model can be achieved by sparsifying the weight matrices in the encoder. Here we rely on essentially the same principle as the Discrete Empirical Interpolation Method (DEIM) [44]. That is, if the time-derivative of the reduced-order model can be determined based on the time derivatives of a small collection of state variables in the full-order model, then we need only reconstruct the neighboring variables to evolve the reduced-order model.
Given a collection of state variable indices \(\mathcal{I}=\{i_{1},\ldots,i_{K}\}\), we defined the selection operator \(\mathcal{S}_{\mathcal{I}}:\mathbb{R}^{n}\to\mathbb{R}^{K}\) by
\[\mathcal{S}_{\mathcal{I}}:\big{(}[x]_{1},\ldots,[x]_{n}\big{)}\mapsto\big{(}[ x]_{i_{1}},\ldots,[x]_{i_{K}}\big{)}. \tag{44}\]
The time derivative of the selected states \(\mathcal{S}_{\mathcal{I}}x\) under (1) depend on a collection of state variables with indices \(\mathcal{N}(\mathcal{I})\) that we refer to as the "neighbors" of \(\mathcal{I}\). In other words, there is a function \(f_{\mathcal{I}}\) so that
\[\frac{\mathrm{d}}{\mathrm{d}t}(\mathcal{S}_{\mathcal{I}}x)=\mathcal{S}_{ \mathcal{I}}f(x,u)=f_{\mathcal{I}}\big{(}\mathcal{S}_{\mathcal{N}(\mathcal{I} )}x,u\big{)}. \tag{45}\]
In sparsely coupled systems, the time derivative of each state \([x]_{i}\) depends only on a small number of neighbors, meaning that if \(\mathcal{I}\) is small compared to the state dimension \(n\), then \(\mathcal{N}(\mathcal{I})\) is also small compared to \(n\).
Suppose that the weight matrix \(\Psi_{L}\in\mathbb{R}^{n\times n_{L-1}}\) describing the input layer of the encoder has nonzero entries only in the rows indexed by \(\mathcal{I}=\{i_{1},\ldots,i_{K}\}\). Assembling the sub-matrix \(\bar{\Psi}_{L}\in\mathbb{R}^{K\times n_{L-1}}\) from these nonzero rows, we have \(\bar{\Psi}_{L}^{T}=\bar{\Psi}_{L}^{T}\mathcal{S}_{\mathcal{I}}\). In the notation of Section VI.2, this means that the reduced-order model (42) can be written in terms of \(f_{\mathcal{I}}\) as
\[\tilde{f}(z,u) =\mathrm{d}\,\bar{\Psi}_{e}\big{(}\bar{\Psi}_{d}(z)\big{)}\bar{ \Psi}_{L}^{T}f_{\mathcal{I}}\big{(}\mathcal{S}_{\mathcal{N}(\mathcal{I})}b_{L }+\mathcal{S}_{\mathcal{N}(\mathcal{I})}\Phi_{L}\bar{\Psi}_{d}(z),\;u\big{)} \tag{46}\] \[\tilde{g}(z) =g\big{(}b_{L}+\Phi_{L}\bar{\Psi}_{d}(z)\big{)}.\]
If the number of neighbors described by the set \(\mathcal{N}(\mathcal{I})\) is small compared to the original state dimension \(n\), then we can obtain computational speedups by evaluating \(f_{\mathcal{I}}\) instead of \(f\).
Note that because the columns of \(\Psi_{L}\) are linearly independent, we must have \(K\geq n_{L-1}\), and in general there are at least \(K\) elements in \(\mathcal{N}(\mathcal{I})\). This means that the dimension \(n_{L-1}\) must be chosen to be much smaller than the state dimension, and so Remark 2 applies. However, the cost to evaluate the time derivative of the ROM in (46) does not grow as rapidly with \(n_{L-1}\) as in the tensor-based method described in Section VI.2. In the case of PDEs discretized in space using finite-difference schemes with small stencils, the number of neighboring elements in \(\mathcal{N}(\mathcal{I})\) and the cost to evaluate \(f_{\mathcal{I}}\) will grow linearly with the size of \(\mathcal{I}\). Therefore, in the best-case scenario where the number of nonzero rows of \(\Psi_{L}\) grows linearly with \(n_{L-1}\), then the cost to evaluate the time derivative of the ROM will also scale linearly with \(n_{L-1}\).
The simplest way obtain a sparse \(\Psi_{L}\) is to constrain which rows can have nonzero entries prior to training. The row indices can be chosen using methods such as random selection, coarsening a spatial grid, or QR-pivoting-based DEIM [66]. However, choosing the nonzero rows of \(\Psi_{L}\) prior to optimization may prevent the encoder from learning a useful direction of projection.
Better performance can likely be achieved by learning a sparse \(\Psi_{L}\) during the training process for the autoencoder. One option is to add a sparsity-promoting penalty on \(\Psi_{L}\) to the cost function used to train the autoencoder. This penalty should not introduce additional biases including weight matrix shrinkage into the optimization problem since this can affect the learned manifold and projection fibers. For example, an \(\ell^{1}\) penalty (see Tibshirani [67]) with a large weight factor will shrink the encoder weight matrix \(\Psi_{L}\) towards zero, while pushing the corresponding decoder weight matrix \(\Phi_{L}\) towards infinity due to the biorthogonality constraint. Other sparsity-promoting penalties such as those in [68; 69; 70; 71] have this same issue in our setting. Instead, for a matrix \(\Psi\in\mathbb{R}^{n\times r}\) (dropping the subscript \(L\)) with linearly independent columns, we construct \(U\in\mathbb{R}^{n\times r}\) having orthonormal columns spanning \(\mathrm{Range}(\Psi)\), for example via QR factorization \(U=\mathrm{qf}(\Psi)\). Our proposed penalty function is then defined by
\[\boxed{R_{1,2}\big{(}\mathrm{Range}(\Psi)\big{)}=\big{\|}U\big{\|}_{ 1,2}-r}\\ =\sum_{i=1}^{n}\big{\|}\operatorname{row}_{i}(U)\big{\|}_{2}-r, \tag{47}\]
where \(\|\cdot\|_{1,2}\) denotes the sum of Euclidean norms of the rows of a matrix. This function does not depend on the choice of \(U\) since \(\|\cdot\|_{1,2}\) is invariant under multiplication on the right by \(r\times r\) orthonormal matrices. Most importantly, the penalty defined by (47) depends only on the range of \(\Psi\) since it re
mains invariant under changes of basis, i.e., when \(\Psi\) is replaced by \(\Psi A\) for any invertible matrix \(A\in\mathbb{R}^{r\times r}\). Indeed, it defines a continuous function on the Grassmann manifold \(\mathcal{G}_{n,r}\) consisting of \(r\)-dimensional subspaces of \(\mathbb{R}^{n}\) (see Bendokat, Zimmermann, and Absil [22], Absil, Mahony, and Sepulchre [73], Wong [74]). The following theorem shows that this penalty does in fact promote sparsity of the rows of \(\Psi\).
**Theorem 2**.: _The minimum value of the penalty function defined by (47) over the space \(\mathbb{R}^{n\times r}_{s}\) of real \(n\times r\) matrices with linearly independent columns is zero. This value is attained by \(\Psi\in\mathbb{R}^{n\times r}_{s}\) if and only if \(\Psi\) has precisely \(r\) rows with nonzero entries._
Proof.: We give the proof in Appendix C.
Moreover, the penalty function increases sharply (in much the same way as \(x\mapsto\left\lvert x\right\rvert\)) in the neighborhood of its minimizers. Specifically, we have Corollary 1 in Appendix C, which we have not stated here as it requires machinery for the Grassmann manifold that is beyond the scope of this paper. This result implies that the penalty produces sparse minimizers when it is added with a sufficiently large, but finite factor to smooth optimization objectives. More precisely, we have the following theorem.
**Theorem 3**.: _Let \(\mathcal{M}\) be a smooth manifold and let \(D(J_{0})\) be an open subset of \(\mathcal{M}\times\mathcal{G}_{n,r}\) on which a real non-negative-valued function \(J_{0}\) is defined and continuously differentiable. Suppose that there is a finite constant \(M\) so that the preimage set \(S_{M}=\{(x,\mathcal{V})\in D(J_{0})\;:\;J_{0}(x,\mathcal{V})\leq M\}\) is compact and contains a point \((x_{0},\mathcal{V}_{0})\) so that \(R_{1,2}(\mathcal{V}_{0})=0\). Then for any \(\gamma\geq 0\), the function on \(D(J_{0})\) defined by_
\[J_{\gamma}(x,\mathcal{V})=J_{0}(x,\mathcal{V})+\gamma R_{1,2}(\mathcal{V}) \tag{48}\]
_attains its minimum and all such minimizers lie in \(S_{M}\). Furthermore, there is a constant \(\Gamma\geq 0\) so that when \(\gamma>\Gamma\), every minimizer \((x_{*},\mathcal{V}_{*})\) of \(J_{\gamma}\) satisfies \(R_{1,2}(\mathcal{V}_{*})=0\)._
Proof.: The proof of this result uses tools from Grassmannian geometry that are beyond the scope of this paper. We provide the necessary background, lemmata, and proof in Appendix C.
As a consequence of this theorem, a sparse matrix \(\Psi_{L}\) in the encoder with precisely \(n_{L-1}\) nonzero rows can be obtained by minimizing a cost function to which (47) has been added with a sufficiently large factor. Increasing the factor beyond this point has no further affect on the minimizers; specifically, there is no additional shrinkage of the weight matrix \(\Psi_{L}\). In practice, we suggest first optimizing the network without the sparsity-promoting penalty, then activating the penalty during a subsequent optimization stage to sparsify \(\Psi_{L}\).
## VII Conclusion
In this paper we develop a nonlinear projection-based model reduction framework in which it is possible to learn both a low-dimensional manifold and appropriate projection fibers for capturing transient dynamics away from the manifold. To do this, we introduce a new autoencoder neural network architecture defining a parametric class of nonlinear projections along with new dynamics-aware cost functions for training. In order to define a nonlinear projection, we ensure that the encoder is a left inverse of the decoder by utilizing a new pair of invertible activation functions and enforcing a biorthogonality constraint on the weight matrices. The biorthogonality constraint defines a smooth matrix manifold on which the optimization during training takes place.
As we demonstrate, optimizing the autoencoder on standard reconstruction-based loss does not generally yield appropriate projection fibers for capturing transient dynamics. To address this problem, we introduce two new cost functions based on additional information from the full-order model. The first cost function, which we call Reconstruction and Velocity Projection (RVP) loss, is based on a Gronwall-Bellman-type error analysis of the reduced-order model. It entails adding a time-derivative ("velocity") projection loss to the usual reconstruction-based loss. The second cost function, which we call Gradient-Aligned Projection (GAP) loss, is based on a first-order Taylor expansion of projection-based forecasting error. This analysis yields a cost function measuring the alignment of state projection errors with randomized gradient samples along trajectories. Both of these new loss function require us to be able to query the adjoint of the full-order model, acting on vector, and thus are not suitable if only experimental data is available.
We present a detailed study comparing our framework to state-of-the-art methods on a simple three-state model, introduced by Noack _et al._[60], of vortex shedding in the wake of a bluff body. Regardless of the cost function and neural network architecture, the autoencoders we trained were able to accurately locate the two-dimensional slow manifold in this problem. Nonetheless, the cost function used to train the networks had a large effect on the resulting model's ability to forecast trajectories with initial conditions lying away from the slow manifold. Training on reconstruction loss consistently produced inaccurate models with projection fibers failing to cancel the fast coordinate. Both of our new cost functions were able to remedy this issue, with RVP loss yielding slightly better performance than GAP loss and suffering from less deterioration in performance on a smaller training data set. For the forecasting task, our proposed architecture trained using either GAP or RVP loss significantly outperformed standard architectures and loss functions.
While we have discussed several methods for constructing computationally efficient reduced-order models, we have not yet applied our method to high-dimensional systems. This will be an important direction for future work. In particular, we will be interested in studying whether, or to what extent the amount of training data required to obtain an accurate ROM scales with the state dimension of the FOM. We have reason to expect favorable scaling behavior because the data requirements for computing CoBRAS[34] projections, which minimize a loss similar to GAP, do not scale with the dimension of the FOM, but rather with the effective ranks of covariance ma
trices for states and gradient data. This suggests that using a loss function like GAP, or the gradient-weighted CoBRAS loss, might allow for dimension-independent scaling of the training data set for certain systems with low-dimensional underlying manifolds and few directions of high sensitivity in the state space. We will also be interested in the performance of our proposed encoder sparsification technique, which may also reduce data requirements when the added bias towards sparsity is appropriate. Finally, in follow-up work we aim to provide some practical guidelines for choosing the number of layers and their widths in applications to high-dimensional systems.
Other directions for future work include developing convolutional autoencoders with similar constraints, as well as applying our autoencoder architecture for other tasks such as preprocessing data from dynamical systems, or as part of a SINDy-autoencoder [14]. Further investigation into data sampling strategies, especially in the presence of unstable structures in state space may also lead to practical guidelines for reduced-order modeling using our framework. Another exciting direction for future work will be to use our autoencoder to approximate solutions of the equations derived by Roberts [18; 19] for the correct spatially-varying affine projections. For this, a method analogous to physics-informed neural networks (PINNs) [75] could be employed.
###### Acknowledgements.
This work was supported by the Air Force Office of Scientific Research, award FA9550-19-1-0005.
## Author declarations
### Conflict of interest
The authors have no conflicts to disclose.
### Author contributions
**Samuel E. Otto:** conceptualization (lead); formal analysis (lead); methodology (lead); writing - original draft (lead); supervision (supporting). **Gregory R. Macchio:** software (lead); visualization (lead); writing - original draft (supporting). **Clarence W. Rowley:** funding acquisition (lead); supervision (lead); resources (lead); writing - review & editing (lead); conceptualization (supporting).
## Data availability statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study. Our code was written in Python and is available at [https://github.com/grmacchio/rommet_chaos2023](https://github.com/grmacchio/rommet_chaos2023) (Gregory R. Macchio's GitHub).
|
2307.13821 | Fitting Auditory Filterbanks with Multiresolution Neural Networks | Waveform-based deep learning faces a dilemma between nonparametric and
parametric approaches. On one hand, convolutional neural networks (convnets)
may approximate any linear time-invariant system; yet, in practice, their
frequency responses become more irregular as their receptive fields grow. On
the other hand, a parametric model such as LEAF is guaranteed to yield Gabor
filters, hence an optimal time-frequency localization; yet, this strong
inductive bias comes at the detriment of representational capacity. In this
paper, we aim to overcome this dilemma by introducing a neural audio model,
named multiresolution neural network (MuReNN). The key idea behind MuReNN is to
train separate convolutional operators over the octave subbands of a discrete
wavelet transform (DWT). Since the scale of DWT atoms grows exponentially
between octaves, the receptive fields of the subsequent learnable convolutions
in MuReNN are dilated accordingly. For a given real-world dataset, we fit the
magnitude response of MuReNN to that of a well-established auditory filterbank:
Gammatone for speech, CQT for music, and third-octave for urban sounds,
respectively. This is a form of knowledge distillation (KD), in which the
filterbank ''teacher'' is engineered by domain knowledge while the neural
network ''student'' is optimized from data. We compare MuReNN to the state of
the art in terms of goodness of fit after KD on a hold-out set and in terms of
Heisenberg time-frequency localization. Compared to convnets and Gabor
convolutions, we find that MuReNN reaches state-of-the-art performance on all
three optimization problems. | Vincent Lostanlen, Daniel Haider, Han Han, Mathieu Lagrange, Peter Balazs, Martin Ehler | 2023-07-25T21:20:12Z | http://arxiv.org/abs/2307.13821v1 | # Fitting Auditory Filterbanks with Multiresolution Neural Networks
###### Abstract
Waveform-based deep learning faces a dilemma between nonparametric and parametric approaches. On one hand, convolutional neural networks (convnets) may approximate any linear time-invariant system; yet, in practice, their frequency responses become more irregular as their receptive fields grow. On the other hand, a parametric model such as LEAF is guaranteed to yield Gabor filters, hence an optimal time-frequency localization; yet, this strong inductive bias comes at the detriment of representational capacity. In this paper, we aim to overcome this dilemma by introducing a neural audio model, named multiresolution neural network (MuReNN). The key idea behind MuReNN is to train separate convolutional operators over the octave subbands of a discrete wavelet transform (DWT). Since the scale of DWT atoms grows exponentially between octaves, the receptive fields of the subsequent learnable convolutions in MuReNN are dilated accordingly. For a given real-world dataset, we fit the magnitude response of MuReNN to that of a well-established auditory filterbank: Gammatone for speech, CQT for music, and third-octave for urban sounds, respectively. This is a form of knowledge distillation (KD), in which the filterbank "teacher" is engineered by domain knowledge while the neural network "student" is optimized from data. We compare MuReNN to the state of the art in terms of goodness of fit after KD on a hold-out set and in terms of Heisenberg time-frequency localization. Compared to convnets and Gabor convolutions, we find that MuReNN reaches state-of-the-art performance on all three optimization problems.
Vincent Lostanlen\({}^{1}\), Daniel Haider\({}^{2,3}\), Han Han\({}^{1}\), Mathieu Lagrange\({}^{1}\), Peter Balazs\({}^{2}\), and Martin Ehler\({}^{3}\)\({}^{1}\) Nantes Universite, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France.
\({}^{2}\) Acoustics Research Institute, Austrian Academy of Sciences, A-1040 Vienna, Austria.
\({}^{3}\) University of Vienna, Department of Mathematics, A-1090 Vienna, Austria.
Convolutional neural network, digital filters, filterbanks, multiresolution analysis, psychoacoustics.
## 1 Introduction
Auditory filterbanks are time-invariant systems whose design takes inspiration from domain-specific knowledge in hearing science [1]. For example, the critical bands of the human cochlea inspires frequency scales such as mel, bark, and ERB [2]. The phenomenon of temporal masking calls for asymmetric impulse responses, motivating the design of Gammatone filters [3]. Lastly, the constant-\(Q\) transform (CQT), in which the number of filters per octave is fixed, reflects the principle of octave equivalence in music [4].
In recent years, the growing interest for deep learning in signal processing has proposed to learn filterbanks from data rather than design them a priori [5]. Such a replacement of feature engineering to feature learning is motivated by the diverse application scope of audio content analysis: i.e., conservation biology [6], urban science [7], industry [8], and healthcare [9]. Since these applications differ greatly in terms of acoustical content, the domain knowledge which prevails in speech and music processing is likely to yield suboptimal performance. Instead, gradient-based optimization has the potential to reflect the spectrotemporal characteristics of the data at hand.
Enabling this potential is particularly important in applications where psychoacoustic knowledge is lacking; e.g., animals outside of the mammalian taxon [10, 11]. Beyond its perspectives in applied science, the study of learnable filterbanks has value for fundamental research on machine listening with AI. This is because it represents the last stage of progress towards general-purpose "end-to-end" learning, from the raw audio waveform to the latent space of interest.
Yet, success stories in waveform-based deep learning for audio classification have been, up to date, surprisingly few--and even fewer beyond the realm of speech and music [12]. The core hypothesis of our paper is that this shortcoming is due to an inadequate choice of neural network architecture. Specifically, we identify a dilemma between nonparametric and parametric approaches, where the former are represented by convolutional neural networks (convnets) and the latter by architectures used in SincNet [13] or LEAF [14]. In theory, convnets may approximate any finite impulse response (FIR), given a receptive field that is wide enough; but in practice, gradient-based optimization on nonconvex objectives yields suboptimal solutions [12]. On the other hand, the parametric approaches enforce good time-frequency localization, yet at the cost of imposing a rigid shape for the learned filters: cardinal sine (inverse-square envelope) for SincNet and Gabor (Gaussian envelope) for LEAF.
Our goal is to overcome this dilemma by developing a neural audio model which is capable of learning temporal envelopes from data while guaranteeing near-optimal time-frequency localization. In doing so, we aim to bypass the explicit incorporation of psychoacoustic knowledge as much as possible. This is unlike state-of-the-art convnets for filterbank learning such as SincNet or LEAF, whose parametric kernels are initialized according to a mel-frequency scale. Arguably, such careful initialization procedures defeat the purpose of deep learning; i.e., to spare the human effort of feature engineering.
Figure 1: Graphical outline of the proposed method. We train a neural network “student” \(\mathbf{\Phi_{\text{W}}}\) to regress the squared magnitudes \(\mathbf{Y}\) of an auditory filterbank “teacher” \(\mathbf{\Lambda}\) in terms of spectrogram-based cosine distance \(\mathcal{L}_{\mathbf{\kappa}}\), on average over a dataset of natural sounds \(\mathbf{x}\).
Furthermore, it contrasts with other domains of deep learning (e.g., image processing) in which all convnet layers are simply initialized with i.i.d. Gaussian weights [15].
Prior work on this problem has focused on advancing the state of the art on a given task, sometimes to no avail [16]. In this article, we take a step back and formulate a different question: before we try to outperform an auditory filterbank, can we replicate its responses with a neural audio model? To answer this question, we compare different "student" models in terms of their ability to learn from a black-box function or "teacher" by knowledge distillation (KD).
Given an auditory filterbank \(\mathbf{\Lambda}\) and a discrete-time signal \(\mathbf{z}\) of length \(T\), let us denote the squared magnitude of the filter response at frequency bin \(f\) by \(\mathbf{Y}[f,t]=|\mathbf{\Lambda}\mathbf{x}|^{2}[f,2^{J}t]\), where \(2^{J}\) is the chosen hop size or "stride". Then, given a model \(\Phi_{\mathbf{W}}\) with weights \(\mathbf{W}\), we evaluate the dissimilarity between teacher \(\mathbf{\Lambda}\) and student \(\mathbf{\Phi}_{\mathbf{W}}\) as their (squared) spectrogram-based cosine similarity \(\mathcal{L}_{\mathbf{\pi}}(\mathbf{W})\). The distance of student and teacher in this similarity measure can be computed via the \(L^{2}\) distance after normalizing across frequency bins \(f\), independently for each time \(t\). Let \(\left|\widehat{\mathbf{\Phi}}_{\mathbf{W}}\mathbf{z}\right|^{2}\) and \(\widetilde{\mathbf{Y}}\) denote these normalized versions of student and teacher, then
\[\mathcal{L}_{\mathbf{\pi}}(\mathbf{W}) =\mathrm{costist}\big{(}|\mathbf{\Phi}_{\mathbf{W}}|^{2},\mathbf{ Y}\big{)}\] \[=\frac{1}{2}\sum_{t=1}^{T/2^{J}}\sum_{f=1}^{F}\big{|}|\widetilde {\mathbf{\Phi}}_{\mathbf{W}}\mathbf{x}|^{2}[f,t]-\widetilde{\mathbf{Y}}[f,t] \big{|}^{2}, \tag{1}\]
where \(F\) is the number of filters. We seek to minimize the quantity above by gradient-based optimization on \(\mathbf{W}\), on a real-world dataset of audio signals \(\{\mathbf{x}_{1}\dots\mathbf{x}_{\mathbf{N}}\}\), and with no prior knowledge on \(\mathbf{\Lambda}\).
## 2 Neural Audio Models
### Learnable time-domain filterbanks (Conv1D)
As a baseline, we train a 1-D convnet \(\mathbf{\Phi}_{\mathbf{W}}\) with \(F\) kernels of the same length \(2L\). With a constant stride of \(2^{J}\), \(\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}\) writes as
\[\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}[f,t]=(\mathbf{x}*\mathbf{\phi}_{\mathbf{f}})[2^{J}t] =\sum_{\tau=-L}^{L-1}\mathbf{x}\big{[}2^{J}t-\tau\big{]}\mathbf{\phi}_{\mathbf{f}}[\tau], \tag{2}\]
where \(\mathbf{x}\) is padded by \(L\) samples at both ends. Under this setting, the trainable weights \(\mathbf{W}\) are the finite impulse responses of \(\mathbf{\phi}_{\mathbf{f}}\) for all \(f\), thus amounting to \(2LF\) parameters.We initialize \(\mathbf{W}\) as Gaussian i.i.d. entries with null mean and variance \(1/\sqrt{F}\).
### Gabor 1-D convolutions (Gabor1D)
As a representative of the state of the art (i.e., LEAF [14]), we train a Gabor filtering layer or Gabor1D for short. For this purpose, we parametrize each FIR filter \(\mathbf{\phi}_{\mathbf{f}}\) as Gabor filter; i.e., an exponential sine wave of amplitude \(a_{f}\) and frequency \(\eta_{f}\) which is modulated by a Gaussian envelope of width \(\sigma_{f}\). Hence a new definition:
\[\mathbf{\phi}_{\mathbf{f}}[\tau]=\frac{a_{f}}{\sqrt{2\pi}\sigma_{f}}\exp\left(-\frac {\tau^{2}}{2\sigma_{f}^{2}}\right)\exp(2\pi\mathrm{i}\eta_{f}\tau). \tag{3}\]
Under this setting, the trainable weights \(\mathbf{W}\) amount to only \(3F\) parameters: \(\mathbf{W}=\{a_{1},\sigma_{1},\eta_{1},\dots,a_{F},\sigma_{F},\eta_{F}\}\). Following LEAF, we initialize center frequencies \(\eta_{f}\) and bandwidths \(\sigma_{f}\) so as to form a mel-frequency filterbank [17] and set amplitudes \(a_{f}\) to one. We use the implementation of Gabor1D from SpeechBrain v0.5.14 [18].
### Multiresolution neural network (MuReNN)
As our original contribution, we train a multiresolution neural network, or MuReNN for short. MuReNN comprises two stages, multiresolution approximation (MRA) and convnet; of which only the latter is learned from data. We implement the MRA with a dual-tree complex wavelet transform (DTCWT) [19]. The DTCWT relies on a multirate filterbank in which each wavelet \(\mathbf{\psi}_{j}\) has a null average and a bandwidth of one octave. Denoting by \(\xi\) the sampling rate of \(\mathbf{x}\), the wavelet \(\mathbf{\psi}_{\mathbf{j}}\) has a bandwidth with cutoff frequencies \(2^{-(j+1)}\pi\) and \(2^{-j}\pi\). Hence, we may subsample the result of the convolution \((\mathbf{x}*\mathbf{\psi}_{\mathbf{j}})\) by a factor of \(2^{j}\), yielding:
\[\forall j\in\{0,\dots,J-1\},\ \mathbf{x}_{\mathbf{j}}[t]=(\mathbf{x}*\mathbf{\psi}_{\mathbf{j}})[2^{j}t], \tag{4}\]
where \(J\) is the number of multiresolution levels. We take \(J=9\) in this paper, which roughly coincides with the number of octaves in the hearing range of humans. The second stage in MuReNN consists in defining convnet filters \(\mathbf{\phi}_{\mathbf{f}}\). Unlike in the Conv1D setting, those filters do not operate over the full-resolution input \(\mathbf{x}\) but over one of its MRA levels \(\mathbf{x}_{\mathbf{j}}\). More precisely, let us denote by \(j[f]\) the decomposition level assigned to filter \(f\), and by \(2L_{j}\) the kernel size for that decomposition level. We convolve \(\mathbf{x}_{j[f]}\) with \(\mathbf{\phi}_{\mathbf{f}}\) and apply a subsampling factor of \(2^{J-j[f]}\), hence:
\[\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}[f,t] =(\mathbf{x}_{\mathbf{j}[\mathbf{f}]}*\mathbf{\phi}_{\mathbf{f}})[2^{J-j[f]}t]\] \[=\sum_{\tau=-L_{j}}^{L_{j}-1}\mathbf{x}_{\mathbf{j}[\mathbf{f}]}[2^{J-j[f]}t- \tau]\mathbf{\phi}_{\mathbf{f}}[\tau] \tag{5}\]
The two stages of subsampling in Equations 4 and 5 result in a uniform downsampling factor of \(2^{J}\) for \(\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}\). Each learned FIR filter \(\mathbf{\Phi}_{\mathbf{f}}\) has an effective receptive field size of \(2^{j[f]+1}L_{j[f]}\), thanks to the subsampling operation in Equation 4. This resembles a dilated convolution [20] with a dilation factor of \(2^{j[f]}\), except that the DTCWT guarantees the absence of aliasing artifacts.
Besides this gain in frugality, as measured by parameter count per unit of time, the resort to an MRA offers the opportunity to introduce desirable mathematical properties in the non-learned part of the transform (namely, \(\mathbf{\psi}_{\mathbf{f}}\)) and have the MuReNN operator \(\mathbf{\Phi}_{\mathbf{W}}\) inherit them, without need for a non-random initialization nor regularization during training. In particular, \(\mathbf{\Phi}_{\mathbf{W}}\) has at least as many vanishing moments as \(\mathbf{\psi}_{f}\). Furthermore, the DTCWT yields quasi-analytic coefficients: for each \(j\), \(\mathbf{x}_{j}=\mathbf{x}_{j}^{\mathrm{g}}+\mathrm{i}\mathbf{x}_{j}^{\mathrm{l}}\) with \(\mathbf{x}_{j}^{\mathrm{l}}\approx\mathcal{H}\left(\mathbf{x}_{j}^{\mathrm{g}}\right)\), where the exponent \(\mathbb{R}\) (resp. \(\mathbb{I}\)) denotes the real part (resp. imaginary part) and \(\mathcal{H}\) denotes the Hilbert transform. Since \(\mathbf{\phi}_{\mathbf{f}}\) is real-valued, the same property holds for MuReNN: \(\mathbf{\Phi}^{\dagger}\mathbf{x}=\mathcal{H}(\mathbf{\Phi}^{\mathrm{R}}\mathbf{x})\).
We implement MuReNN on GPU via a custom implementation of DTCWT in PyTorch1. Following [19], we use a biorthogonal wavelet for \(j=0\) and quarter-shift wavelets for \(j\geq 1\). We set \(L_{j}=8M_{j}\) where \(M_{j}\) is the number of filters \(f\) at resolution \(j\). We refer to [21] for an introduction to deep learning in the wavelet domain, with applications to image classification.
Footnote 1: [https://github.com/kymatio/murenn](https://github.com/kymatio/murenn)
* A constant-\(Q\) filterbank with \(Q=8\) filters per octave, covering eight octaves with Hann-modulated sine waves.
* A filterbank with 4-th order Gammatone filters tuned to the ERB-scale, a frequency scale which is adapted to the equivalent rectangular bandwidths of the human cochlea [22]. In psychoacoustics, Gammatone filters provide a good approximation to measured responses of the filters of the human basilar membrane [3]. Unlike Gabor filters, Gammatone filters are asymmetric, both in the time domain and frequency domain.We refer to [23] for implementation details.
* A variable-\(Q\) transform (VQT) with \(M_{j}=12\) frequency bins per octave at every level. The VQT is a variant of the constant-\(Q\) transform (CQT) in which \(Q\) is decreased gradually towards lower frequencies [24], hence an improved temporal resolution at the expense of frequency resolution.
* A third-octave filterbank inspired by the ANSI S1.11-2004 standard for environmental noise monitoring [25]. In this filterbank, center frequencies are not exactly in a geometric progression. Rather, they are aligned with integer Hertz values: 40, 50, 60; 80, 100, 120; 160, 200, 240; and so forth.
We construct the Synth teacher via nnAudio [26], a PyTorch port of librosa [27]; and Speech, Music, and Urban using the Large Time-Frequency Analysis Toolbox (LTFAT) for MATLAB [28].
### Gradient-based optimization
For all four "student" models, we initialize the vector \(\mathbf{W}\) at random and update it iteratively by empirical risk minimization over the training set. We rely on the Adam algorithm for stochastic optimization with default momentum parameters. Given the definition of spectrogram-based cosine distance in Equation 1, we perform reverse-mode automatic differentiation in PyTorch to obtain
\[\boldsymbol{\nabla}\mathcal{L}_{\boldsymbol{x}}(\mathbf{W})[i]= \sum_{f=1}^{F}\sum_{t=1}^{T/2^{f}}\frac{\partial|\widetilde{\boldsymbol{ \Phi}}_{\mathbf{W}}\boldsymbol{x}|^{2}[f,t]}{\partial\mathbf{W}[i]}(\mathbf{W})\] \[\times\big{(}|\widetilde{\boldsymbol{\Phi}}_{\mathbf{W}} \boldsymbol{x}|^{2}[f,t]-\widetilde{\mathbf{Y}}[f,t]\big{)} \tag{6}\]
for each entry \(\mathbf{W}[i]\). Note that the gradient above does not involve the phases of the teacher filterbank \(\mathbf{A}\), only its normalized magnitude response \(\mathbf{Y}\) given the input \(\boldsymbol{x}\). Consequently, even though our models \(\boldsymbol{\Phi}_{\mathbf{W}}\) contain a single linear layer, the associated knowledge distillation procedure is nonconvex, and thus resembles the training of a deep neural network.
## 4 Results and discussion
### Datasets
* As a proof of concept, we construct sine waves in a geometric progression over the frequency range of the target filterbank.
* The North Texas vowel database (NTVOW) [29] contains utterances of 12 English vowels from 50 American speakers, including children aged three to seven as well as male and female adults. In total, it consists of 3190 recordings, each lasting between one and three seconds.
* The TinySOL dataset [30] contains isolated musical notes played by eight instruments: accordion, alto saxophone, bassoon, flute, harp, trumpet in C, and cello. For each of these instruments, we take all available pitches in the tessitura (min = \(B_{0}\), median = \(E_{4}\), max = \(C_{8}^{\star}\) ) in three levels of intensity dynamics: _pp_, _mf_, and _ff_. This results in a total of 1212 audio recordings.
* The SONYC Urban Sound Tagging dataset (SONYC-UST) [31] contains 2803 acoustic scenes from a network of autonomous sensors in New York City. Each of these ten-second scenes contains one or several sources of urban noise pollution, such as: engines, machinery and non-machinery impacts, powered saws, alert signals, and dog barks.
\begin{table}
\begin{tabular}{l l l||c c c} Domain & Dataset & Teacher & Conv1D & Gabor1D & MuReNN \\ \hline Speech & NTVOW & Gammatone & \(2.12\pm 0.05\) & \(10.14\pm 0.09\) & \(\mathbf{2.00\pm 0.02}\) \\ Music & TinySOL & VQT & \(8.76\pm 0.2\) & \(16.87\pm 0.06\) & \(\mathbf{5.28\pm 0.03}\) \\ Urban & SONYC-UST & ANSI S1.11 & \(3.26\pm 0.1\) & \(13.51\pm 0.2\) & \(\mathbf{2.57\pm 0.2}\) \\ Synth & Sine waves & CQT & \(11.54\pm 0.5\) & \(22.26\pm 0.9\) & \(\mathbf{9.75\pm 0.4}\) \\ \end{tabular}
\end{table}
Table 1: Mean and standard deviation of test loss after knowledge distillation over five independent trials. Each column corresponds to a different neural audio model \(\boldsymbol{\Phi}_{\mathbf{W}}\) while each row corresponds to a different auditory filterbank and audio domain. See Section 4.2 for details.
Figure 2: Left to right: evolution of validation losses on different domains with Conv1D (green), Gabor1D (blue), and MuReNN (orange), as a function of training epochs. The shaded area denotes the standard deviation across five independent trials. See Section 4.2 for details.
### Benchmarks
For each audio domain, we randomly split its corresponding dataset into training, testing and validation subsets with a 8:1:1 ratio. During training, we select \(2^{12}\) time samples from the middle part of each signal, i.e., the FIR length of the filters in the teacher filterbank. We train each model with 100 epochs with an epoch size of 8000.
Table 1 summarizes our findings. On all three benchmarks, we observe that MuReNN reaches state-of-the-art performance, as measured in terms of cosine distance with respect to the teacher filterbank after 100 epochs. The improvement with respect to Conv1D is most noticeable in the Synth benchmark and least noticeable in the Speech benchmark. Furthermore, Figure 2 indicates that Gabor1D barely trains at all: this observation is consistent with the sensitivity of LEAF with respect to initialization, as reported in [32]. We also notice that MuReNN trains faster than Conv1D on all benchmarks except for Urban, a phenomenon deserving further inquiry.
### Error analysis
The mel-scale initialization of Gabor1D filters and the inductive bias of MuReNN enabled by octave localization gives a starting advantage when learning filterbanks on log-based frequency scales, as used for the Gammatone and VQT filterbank. Expectedly, this advantage is absent with a teacher filterbank that does not follow a geometric progression of center frequencies, as it is the case in the ANSI scale. Figure 2 reflects these observations.
To examine the individual filters of each model, we take the speech domain as an example and obtain their learned impulse responses. Figure 3 visualizes chosen examples at different frequencies learned by each model together with the corresponding teacher Gammatone filters. In general, all models are able to fit the filter responses well. However, it is noticeable that the prescribed envelope for Gabor1D impedes it from learning the asymmetric target Gammatone filters. This becomes prominent especially at high frequencies. From the strong envelope mismatches at coinciding frequency we may deduce that center frequencies and bandwidths did not play well together during training. On the contrary, MuReNN and Conv1D are flexible enough to learn asymmetric temporal envelopes without compromising its regularity in time. Although the learned filters of Conv1D are capable of fitting the frequencies well, they suffer from noisy artifacts, especially outside their essential supports. Indeed, through limiting the scale and support of the learned filters, MuReNN restrains the potential introduction of high-frequency noises of a learned filter of longer length. The phase misalignment at low frequencies is a natural consequence of the fact that the gradients are computed from the magnitudes of the filterbank responses.
Finally, we measure the time-frequency localization of all filters by computing the associated Heisenberg time-frequency ratios [33]. From theory we know that Gaussian windows are optimal in this sense [34]. Therefore, it is not surprising that Gabor1D yields the best localized filters, even outperforming the teacher, see Figure 4. Expectedly, the localization of the filters from Conv1D is poor and appears independent of the teacher. MuReNN roughly resembles the localization of the teachers but has some poorly localized outliers in higher frequencies, deserving further inquiry.
## 5 Conclusion
Multiresolution neural networks (MuReNN) have the potential to advance waveform-based deep learning. They offer a flexible and data-driven procedure for learning filters which are "wavelet-like": i.e., narrowband with compact support, vanishing moments, and quasi-Hilbert analyticity. Those experiments based on knowledge distillation from three domains (speech, music, and urban sounds) illustrate the suitability of MuReNN for real-world applications. The main limitation of MuReNN lies in the need to specify a number of filters per octave \(M_{j}\), together with a kernel size \(L_{j}\). Still, a promising finding of our paper is that prior knowledge on \(M_{j}\) and \(L_{j}\) suffices to finely approximate non-Gabor auditory filterbanks, such as Gammatones on an ERB scale, from a random i.i.d. Gaussian initialization. Future work will evaluate MuReNN in conjunction with a deep neural network for sample-efficient audio classification.
## 6 Acknowledgment
V.L. thanks Fergal Cotter and Nick Kingsbury for maintaining the ddcwt and pytorch.wavelets libraries; LS2N and OAW staff for arranging research visits; and Neil Zeghidour for helpful discussions. D.H. thanks Clara Holloney for helping with the implementation of the filterbanks. V.L. and M.L. are supported by ANR MuReNN; D.H., by a DOC Fellowship of the Austrian Academy of Sciences (A 26355); P.B., by FWF projects LoFT (P 34624) and NoMASP (P 34922); and M.E., by WWTF project CHARMED (VRG12-009).
Figure 4: Distribution of Heisenberg time–frequency ratios for each teacher–student pair (lower is better). See Section 4.3 for details.
Figure 3: Compared impulse responses of Conv1D (left), Gabor1D (center), and MuReNN (right) with different center frequencies after convergence, with a Gammatone filterbank as target. Solid blue (resp. dashed red) lines denote the real part of the impulse responses of the learned filters (resp. target). See Section 4.3 for details. |
2303.10607 | Automatic pain recognition from Blood Volume Pulse (BVP) signal using
machine learning techniques | Physiological responses to pain have received increasing attention among
researchers for developing an automated pain recognition sensing system. Though
less explored, Blood Volume Pulse (BVP) is one of the candidate physiological
measures that could help objective pain assessment. In this study, we applied
machine learning techniques on BVP signals to device a non-invasive modality
for pain sensing. Thirty-two healthy subjects participated in this study.
First, we investigated a novel set of time-domain, frequency-domain and
nonlinear dynamics features that could potentially be sensitive to pain. These
include 24 features from BVP signals and 20 additional features from Inter-beat
Intervals (IBIs) derived from the same BVP signals. Utilizing these features,
we built machine learning models for detecting the presence of pain and its
intensity. We explored different machine learning models, including Logistic
Regression, Random Forest, Support Vector Machines, Adaptive Boosting
(AdaBoost) and Extreme Gradient Boosting (XGBoost). Among them, we found that
the XGBoost offered the best model performance for both pain classification and
pain intensity estimation tasks. The ROC-AUC of the XGBoost model to detect low
pain, medium pain and high pain with no pain as the baseline were 80.06 %,
85.81 %, and 90.05 % respectively. Moreover, the XGboost classifier
distinguished medium pain from high pain with ROC-AUC of 91%. For the
multi-class classification among three pain levels, the XGBoost offered the
best performance with an average F1-score of 80.03%. Our results suggest that
BVP signal together with machine learning algorithms is a promising
physiological measurement for automated pain assessment. This work will have a
national impact on accurate pain assessment, effective pain management,
reducing drug-seeking behavior among patients, and addressing national opioid
crisis. | Fatemeh Pouromran, Yingzi Lin, Sagar Kamarthi | 2023-03-19T09:03:14Z | http://arxiv.org/abs/2303.10607v1 | ## Automatic pain recognition from Blood Volume Pulse (BVP) signal using machine learning techniques
## Abstract
Physiological responses to pain have received increasing attention among researchers for developing an automated pain recognition sensing system. Though less explored, Blood Volume Pulse (BVP) is one of the candidate physiological measures that could help objective pain assessment. In this study, we applied machine learning techniques on BVP signals to device a non-invasive modality for pain sensing. Thirty-two healthy subjects participated in this study. First, we investigated a novel set of time-domain, frequency-domain and nonlinear dynamics features that could potentially be sensitive to pain. These include 24 features from BVP signals and 20 additional features from Inter-beat Intervals (IBIs) derived from the same BVP signals. Utilizing these features, we built machine learning models for detecting the presence of pain and its intensity. We explored different machine learning models, including Logistic Regression, Random Forest, Support Vector Machines, Adaptive Boosting (AdaBoost) and Extreme Gradient Boosting (XGBoost). Among them, we found that the XGBoost offered the best model performance for both pain classification and pain intensity estimation tasks. The ROC-AUC of the XGBoost model to detect low pain, medium pain and high pain with no pain as the baseline were 80.06 %, 85.81 %, and 90.05 % respectively. Moreover, the XGBoost classifier distinguished medium pain from high pain with ROC-AUC of 91%. For the multiclass classification among three pain levels, the XGBoost offered the best performance with an average F1-score of 80.03%. Our results suggest that BVP signal together with machine learning algorithms is a promising physiological measurement for automated pain assessment. This work will have a national impact on accurate pain assessment, effective pain management, reducing drug-seeking behavior among patients, and addressing national opioid crisis.
**Keywords:** Pain classification, Physiological response, Machine learning, Blood Volume Pulse (BVP), Cold pressor test
## Introduction
Pain is an unpleasant sensory and emotional experience as well as a primary symptom of many medical conditions. Effective pain management is one of the main goals in patient healthcare. Therefore, an accurate pain assessment is necessary to diagnose and provide a proper treatment plan. However, since there is no automated pain estimation method, clinicians rely on patient's self-report about how much pain they are experiencing. The most common available measures for pain assessment are numerical rating scales (NRS), and verbal rating scales (VRS)[1]. Unfortunately, these self-reporting measures are limited by the fact that they require the patient to be functionally capable, mentally alert, and clinically cooperative. For example, self-reporting is not a feasible option for patients with dementia or paralysis or patients who are drowsy or unable to answer consciously correct. Moreover, the absence of proper objective pain assessment tools to quantify the pain severity has resulted in sub-optimal treatment plans, delayed responses to patient needs, over-prescription of opioids, and drug-seeking behavior among patients.
According to the biological mechanism of pain, multiple areas of the Autonomic Nervous System (ANS) are participated in experiencing the pain. Pain often starts with the activation of the sensory neural pathway upon stimulation by noxious mechanical, heat, cold, chemical, or inflammatory stimuli. ANS has two parts: (1) Parasynaptic Nervous System (PSN) activated during rest, and (2) Sympathetic Nervous System (SNS) activated during stress or pain. Activation of ANS leads to changes in the electrical property of the brain, heart, muscle, and skin. These changes can be measured by physiological signals such as Electroencephalography (EEG), Electrocardiography (ECG), Blood Volume Pulse (BVP), and Electrodermal Activity (EDA)[2, 3, 4, 5, 6, 7]. So, these signals are potential candidates for automatically detecting the existence of pain and estimating its intensity. Chen et al.[8], have recently reviewed the mechanism of pain and various wearable physiological and behavioral sensors used in the healthcare domain that may be helpful for automated monitoring systems for pain and stress detection.
Developing an automated pain assessment system has received increasing attention among researchers. Several researchers have studied automatic pain assessment using machine learning techniques. However, there are only a few research groups that collected databases of physiological signals specific to pain. Walter et al. [9] introduced the BioVid heat pain database, in which they induced four levels of gradually increasing pain through temperature elevation. They recorded video streams, EDA, ECG, and EMG signals from 87 healthy subjects during the experiment. The pain labels for the recorded data were based on the four levels of temperature for pain elicitation. Researchers have studied physiological signals from the BioVid database to build machine learning models for pain detection or pain intensity estimation tasks[5, 10, 11, 12, 13, 14]. The results shows that EDA signal works significantly better than EMG and ECG signal for automated pain intensity estimation [5].
Aung et al.[15] proposed the multimodal EmoPain Dataset for automatic detection of chronic pain-related expressions. This dataset consists of 22 individuals suffering from chronic lower back pain and 28 health subjects carrying out physical exercises. They recorded face videos, head-mounted and room audio signals, full-body 3D motion capture, and EMG signals from
back muscles. Two sets of labels were assigned: First, the level of pain from facial expressions was annotated by eight raters who gave values between 0 and 1. Second, the occurrence of six pain-related body behaviors (guarding or stiffness, hesitation, bracing or support, abrupt action, limping, rubbing, or stimulating) was segmented by four experts. Most recently, Velana et al.[16] introduced SenseEmotion Database, consisting of video streams, trapezius EMG, respiratory (RSP), ECG, and EDA. The experiment was conducted on 45 healthy participants, each subjected to a series of artificially induced heat pain stimuli. The heat pain was induced at three levels depending on temperature. These temperatures were separately calculated for each participant. Thiam et al.[17] studied the modalities in this dataset for the recognition of artificially induced heat pain. Pouromran et al. [18] explored EDA signal collected through Cold Pressor Test from 29 healthy participants and built four-category pain intensity classification model using deep learning generated representations of the signal.
Gruss et al.[19] proposed a psychophysiological experiment in which 134 healthy participants were subjected to thermal and electrical pain stimuli, while audio, video, and physiological signals such as EMG, ECG, and Skin Conductance Level were recorded as X-ITE Pain Database. These datasets can be categorized by the source of pain, recorded modality, ground truth, and subject groups. The subject groups can be categorized by age group, for example, infants [20], and health condition of the subjects. A better overview of the automatic pain assessment studies can be found in [7, 21, 22].
Blood volume pulse signal reflects the changes in blood volume in tissue in each beat (pulse) of the heart. It is measured by shining an infrared light through the body's surface, usually a finger, then the amount of infrared light that returns to the sensor is recorded. The light registered by the sensor is proportional to the amount of blood volume in a tissue. BVP is widely used to measure heart rate variability (HRV), which consists of changes in the time intervals between consecutive heartbeats called inter-beat intervals (IBIs) [23]. In comparison with the ECG sensor and Respiration sensor which are also used for HRV measurements, the BVP sensor is easier to apply and preferred in some clinical situations, for example, when a subject is not still and stationary. BVP is considered as a candidate sensor to reflect the pain response in the autonomic nervous system that affects heart activity. However, the BVP signal is hardly unexplored in the literature for automated pain assessment. To our knowledge, chu et al.[24] are the only research group that investigated BVP in the pain experiment. They collected BVP, ECG, and EDA signals from six healthy subjects aged between 22 and 25 years old to classify the level of pain induced by electrical stimulation. They extracted simple statistical features, including mean, standard deviation, minimum, maximum, range, minimum ratio, and maximum ratio from the signal and the first-order differences of the signal. In their paper, they used 16 statistical features from the time-domain of BVP, plus eight features from ECG and ten features from EDA to predict pain levels using Linear Discriminant Analysis and principal component analysis. Chu et al.[25] also developed a hybrid genetic algorithm using support vector machines (SVMs) and k-nearest neighbors (KNN) to detect different pain levels on the same dataset.
In this study, we aim to investigate the BVP signal for automated pain intensity classification. Toward this objective, we first filtered the BVP signal and extracted a broad set of 44 features
from time-domain, frequency-domain, and nonlinear measures, directly from the BVP and corresponding Inter Beat Intervals (IBIs) captured from BVP. Then we utilized machine learning algorithms to build an automatic pain assessment model and find the best pain-sensitive features from the BVP signal. These features captured the sympathetic activation changes that occur in response to cold pain. These physiological responses were acquired from a finger-clip BVP sensor during the cold pain experiment. We also explored the feature importances and conducted statistical analysis to test the potential relationship between features and pain states.
## Materials and Methods
### Data Collection from Cold Pressor Test
The data was collected through the cold pain experiment at the Intelligent Human-Machine Systems lab at Northeastern University (IRB#: 191215) to explore human physiological signals affected by pain. We obtained informed consent from all the participants involved for participation in the experiments. All methods were carried out in accordance with relevant guidelines and regulations. The participants were informed that they could stop this experiment at any time. A total of thirty-two healthy subjects (6 females and 26 males) aged 18 to 24 participated in this experiment. Each subject was asked to rest when a 20-second recording of the Blood Volume Pulse (BVP) sensor was taken as the baseline measurement. Simultaneously, the subject was asked to focus on a green dot displayed on a monitor in front of them. After initial baseline 20 seconds, the subject was asked to place their dominant hand into a bucket containing iced water to trigger pain from cold temperature (32F or 0C). Then, the BVP signal of the subject was recorded for the 200s. The sensor was placed on the subject's middle finger for recording BVP. Every 20 seconds, the subjects were asked to self-report their pain score based on a numerical rating scale of 0 to 10 provided to them during the experiment. For each subject, we recorded a total of 220 seconds of BVP signal at a fixed sampling rate of 2048 Hz. Figure 1 shows the average and standard deviation of these self-reported pain intensities during the cold pressor test.
Figure 1: Mean and standard deviation of pain levels reported by 32 healthy subjects during the cold pressor test.
#### BVP Signal Preprocessing and Inter Beat Intervals Detection
Before extracting feature representations from a physiological signal, it is necessary to undertake a preprocessing step to improve the signal-to-noise ratio (SNR). We employed an 8-Hz Butterworth low-pass filter to reduce the noise and artifacts within the BVP signal.
Since the BVP signal measures blood volume changes in vessels due to heart activity, the peaks in BVP are associated with heartbeats. So, we used BVP to identify the Inter Beat Intervals (IBI), the distances between consecutive heartbeats, also known as RR intervals. Figure 2 presents a 40-second segment of BVP and corresponding IBI signal. We investigated changes in IBIs to study the Heart rate variability (HRV), which indexes neurocardiac function and is generated by heart-brain interactions and dynamic nonlinear Autonomic Nervous System (ANS) processes[23].
#### Feature Engineering
We extracted features from time-domain, frequency-domain, and nonlinear metrics from BVP and its corresponding Inter-beat Intervals (IBIs). After noise removal, the data was segmented into 5-second sliding windows with fifty percent overlap. We extracted 24 features directly from BVP signal and 20 features from IBIs related to the heart rate variability measured by BVP signal. Lastly, the measurements were normalized for each individual subject by subtracting the mean and dividing by the standard deviation. The feature vector is generated by concatenating the set of features extracted from IBIs with those extracted directly from BVP, resulting in a feature vector of dimensionality \(20+24=44\). The IBIs features were selected from the heart rate variablity metrics widely used in other domains[23]. These metrics and their description are listed in Table 1. The BVP features were selected from the time-series features which were reported to be the most effective in a wide variety of other applications[26]. Table 2 shows the list of these features and their description. In authors' earlier work,
Figure 2: Sample recording of BVP signal and its corresponding IBIs extracted from BVP for a subject before and after the experiment. The first 20 seconds is the baseline in which there is no pain, and the following 20 seconds is when the subject is experiencing pain.
they found these features to be effective for heat pain assessment using EDA, ECG and EMG signals[5]. We performed the feature extractions using the Scipy[27], pyphysio[28], catch22[26], Pandas[29] and Numpy[30] libraries in Python. To the best of our knowledge, this is the first study that investigated this novel set of features from BVP and its corresponding IBIs for objective pain assessment.
\begin{table}
\begin{tabular}{|c|c|l|} \hline
**Number** & **Symbol** & **Feature Description** \\ \hline
1 & RMSSD & Root mean square of successive time differences between heartbeats \\ \hline
2 & SDSD & Standard deviation of the 1st order discrete differences \\ \hline
3 & pNN50 & Percentage of successive IBIs that differ by more than 50 milliseconds \\ \hline
4 & pNN25 & Percentage of successive IBIs that differ by more than 25 milliseconds \\ \hline
5 & pNN10 & Percentage of successive IBIs that differ by more than 10 milliseconds \\ \hline
6 & RRmean & Mean of RR intervals \\ \hline
7 & RRSTD & Standard deviation of RR intervals \\ \hline
8 & RRMed & Median of RR intervals \\ \hline
9 & RRMin & Minimum of RR intervals \\ \hline
10 & RRMax & Maximum of RR intervals \\ \hline
11 & I-VLF & Power in very low frequency [0.003, 0.04 Hz] \\ \hline
12 & I-ILF & Power in low-frequency band [0.04, 0.15 Hz] \\ \hline
13 & I-HF & Power in high-frequency band [0.15, 0.4 Hz] \\ \hline
14 & I-Pow & Total power of IBIs \\ \hline
15 & I-SD1 & Poincaré plot standard deviation perpendicular the line of identity \\ \hline
16 & I-SD2 & Poincaré plot standard deviation along the line of identity \\ \hline
17 & I-SD12 & Ratio of SD1-to-SD2 \\ \hline
18 & I-Sdell & SD1*SD2*pi value of the Poincare’ plot of input IBIs \\ \hline
19 & I-DFA1 & Detrended fluctuation analysis, which describes short-term fluctuations of IBIs \\ \hline
20 & I-ApEn & Approximate entropy which measures the regularity and complexity of IBIs \\ \hline \end{tabular}
\end{table}
Table 1: Feature engineering from Inter-beat Intervals (IBIs) based on BVP signal.
**Model Architecture for Pain Assessment**
We categorized the 0 to 10 self-reported pain intensities into four pain states as no pain (NP: P = 0), low pain (LP: 0 \(<\) P \(\leq\) 3), medium pain (MP: 3 \(<\) P \(\leq\) 6), and high pain (HP: 6 \(<\) P \(\leq\) 10). Then we performed a set of binary classification tasks between pair of these pain states. We also trained models for a multiclass classification task to differentiate the pain levels during the cold pressor test. In addition to classification, we performed the regression to predict the continuous pain intensity as the pain levels are ordinal.
We explored different machine learning algorithms in this study: Logistic Regression (LR), Support Vector Machines (SVM), and Ensemble methods including Random Forest (RF), Adaptive Boosting (AdaBoost) and Extreme Gradient Boosting (XGBoost). The ensemble method is an algorithm that uses a group of predictors, called Ensembles, generally yielding an overall better model. Random Forest is an ensemble of Decision Trees, generally trained via the bagging method, or sometimes pasting. When sampling is performed with replacement, this method is called _bagging_, which is short for bootstrap aggregating. When sampling is performed without replacement, it is called _pasting_. In other words, both bagging and pasting allow training instances to be sampled several times across multiple predictors, but only bagging allows training instances to be sampled several times for the same predictor. One way to get a diverse set of classifiers is to use the same training algorithm for every model and train them on different random subsets of the training set. Once all models are trained, the ensemble can make a prediction for a new instance by simply aggregating the predictions of all models. Generally, the ensemble has a similar bias but a lower variance than a single model
\begin{table}
\begin{tabular}{|c|p{28.5pt}|p{28.5pt}|} \hline
10 & SPow5th & Total power in the lowest fifth of frequencies in the Fourier power spectrum \\ \hline
11 & SPowCent & Centroid of the Fourier power spectrum \\ \hline
12 & FCmean & Mean error from a rolling 3-sample mean forecasting \\ \hline
13 & COtrev & Time-reversibility statistic, ((x\({}_{+1}\)-x\({}_{\lambda}\))3)t \\ \hline
14 & AMI & Auto-mutual information, m = 2, \(\tau\) = 5 \\ \hline
15 & INAut & First minimum of the auto-mutual information function \\ \hline
16 & MDpnn40 & Proportion of successive differences exceeding 0.04\(\sigma\) \\ \hline
17 & SBlongst & Longest period of successive incremental decreases \\ \hline
18 & SBshanEn & Shannon entropy of two successive 3-letter symbolization \\ \hline
19 & SBTrace & Trace of covariance of transition matrix between symbols in 3-letter alphabet \\ \hline
20 & SBPerioc & Periodicity measure of Wang \\ \hline
21 & FCmean & Change in correlation length after iterative differencing \\ \hline
22 & COexpfit & Exponential fit to successive distances in 2-d embedding space \\ \hline
23 & SCFlucdfa & Proportion of slower timescale fluctuations that scale with DFA \\ \hline
24 & SCFlucsr & Proportion of slower timescale fluctuations that scale with linearly scaled range fits \\ \hline \end{tabular}
\end{table}
Table 2: Feature engineering directly from Blood Volm Pulse (BVP) signal
trained on the original training set. The random forest algorithm also introduces extra randomness when growing trees; it searches for the best feature among a random subset of features, instead of searching among all features, when splitting a node. The algorithm results in greater tree diversity, which trades a higher bias for a lower variance to generally obtain a better model.
Boosting is another ensemble method. The general idea of most boosting methods is to train models sequentially, each trying to correct its predecessor. The most popular boosting methods are Adaptive Boosting (AdaBoost) and Gradient Boosting[31]. In AdaBoost, a new model corrects its predecessor by paying more attention to the training instances that the predecessor under-fitted. This results in new models focusing more and more on the hard cases. In Gradient Boosting, instead of tweaking the instance weights at every iteration as AdaBoost does, the method tries to fit the new predictor to the residual errors made by the previous predictor. XGBoost is an advanced implementation of a gradient boosting algorithm that is extremely fast, scalable, and portable.
We employed 5-fold stratified cross-validation among all subjects to evaluate our models. This cross-validation object is a variation of k-fold that returns stratified folds made by preserving the percentage of samples for each class. So, we generate test sets containing the same distribution of classes, as close as possible, which is a helpful tip to build a model on an imbalanced dataset when the classes are not represented equally. We also applied the ExtraTree-based feature selection technique to find the optimal subset of features for each model. This method computes the feature importance based on impurity measure in the forest of extremely randomized decision trees. We used 16% of the samples outside of the train set and test set for hyperparameter tuning and feature selection. We conducted the exhaustive grid search to find the balance between bias and variance and thus, prevent the model from underfitting and overfitting. We employed the L2 regularization technique, which adds squared magnitude of coefficient as a penalty term to the loss function to reduce the model complexity. We used Scikit-Learn and XGBoost in Python to build and tune the models.
**Handling the Class Imbalance**
To encounter the issue of class imbalance in our dataset, we used the Synthetic Minority Oversampling Technique (SMOTE) [32, 33]. In this technique, we oversample the minority class by creating synthetic examples rather than by resampling with replacement. SMOTE first selects a minority class instance at random and finds its \(k\) nearest minority class neighbors. Then randomly chooses one of these neighbors and connects them to form a line segment in the feature space. The synthetic instances are generated as a convex combination of the two chosen instances. Using this data augmentation technique, we generated as many synthetic examples for the minority class as required to balance the class distribution in the training dataset.
**Model Evaluation Metrics**
The model performance was measured by Accuracy, Area under the ROC curve (ROC-AUC), Balanced Accuracy (B-ACC), Precision, Recall, and F1-score. It is noted that F1-score and ROC AUC provide a less biased estimate of the performance when the classes are imbalanced. The
below equations show the mathematical expression of precision, recall, and F1-score, where TP, FP, TN, and FN refer to "True Positive," "False Positive," "True Negative,", and "False Negative", respectively. Since our data is imbalanced, we also calculate the balanced accuracy (B_Acc), where \(m\) is the number of classes in our dataset.
\[Precision\ =\frac{TP_{i}}{TP_{i}+FP_{i}}\]
\[Recall\ =\frac{TP_{i}}{TP_{i}+FN_{i}}\]
\[F1\ =\frac{2.Precision.recall}{Precision+recall}\]
\[BalancedAccuracy=\frac{1}{m}\sum_{i=1}^{m}\left(\frac{TP_{i}}{TP_{i}+FN_{i}}\right)\]
To evaluate the performance of regressor models, we considered Mean Absolute Errors (MAE) and Root Mean Square Errors (RMSE) as performance metrics. The equations below show the mathematical expression of these performance metrics: \(y\) is the actual value, \(\hat{y}\) is the predicted value, and n is the number of samples.
\[MAE\ =\frac{\sum_{k=1}^{n}\ |y-\hat{y}|}{n}\]
\[RMSE\ =\sqrt{\frac{\sum_{k=1}^{n}(y-\hat{y})^{2}}{n}}\]
**Statistical analysis between pain levels**
We evaluated the differences between pain levels for the different characteristics of BVP and IBIs during the cold pressor test. First, we tested the normality of features using the Kolmogorov-Smirnov test. Then, since the data were not Normal, we used Dunn's test to compare the values of the features at different pain levels. Dunn's test is a non-parametric multiple comparison test to pinpoint which specific groups are significantly different from the others after rejecting ANOVA null hypothesis. In this analysis, a \(p\)-value \(<0.05\) was considered statistically significant.
## Results and Discussion
### Overall Pain Classification Performances
We performed three binary classification tasks for pain detection. In each model, our goal was to detect the presence of pain. Table 3 presents the results of 5-fold stratified cross-validation for detecting the presence of pain (no pain vs. low pain, no pain vs. medium pain, no pain vs. high pain). The XGBoost model using the features extracted from the BVP signal
resulted in pain detection with ROC-AUC of \(90.05\pm 2.67\%\) for high pain, \(85.81\pm 5.06\%\) for medium pain, and \(80.06\pm 5.14\%\) for low pain, with no pain as the baseline. The Receiver Operating Characteristics (ROC) is a curve of probability in which we plot the True Positive Rate (TPR) against the False Positive Rate (FPR). The area under the ROC curve measures the degree of separability of classes achieved by a classifier. It tells how much the model is capable of distinguishing between classes at different thresholds. The higher the AUC, the better the model is at distinguishing subjects with pain from those with no pain. For example, the average ROC-AUC of \(90.05\%\) means a \(90.05\%\) chance that the model can differentiate between no pain and high pain. Even if patients are in low pain, there is an \(80.06\%\) chance that the model can distinguish low pain from no pain condition. Please note that in the benchmark classifier whose accuracy depends only on the proportion of positive and negative cases in the dataset, the value for ROC-AUC would be \(50\%\), which means that there is only a \(50\%\) chance to separate the classes accurately. The model's classification matrix is shown in Figure 3. It shows how samples in the presence and absence of pain can be classified if we set the threshold to \(0.5\) on the predicted class probabilities. This is an impressive model performance for pain detection using an easy-to-capture signal like BVP.
We also performed three binary classifications to distinguish the different pairs of pain levels: Low pain vs. Medium pain, Low pain vs. High pain, and Medium pain vs. High pain.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Metric** & **No pain vs. Low pain** & **No pain vs. Medium pain** & **No pain vs. High pain** \\ \hline Accuracy (\(\%\)) & \(75.25\pm 2.77\) & \(85.0\pm 3.94\) & \(87.01\pm 1.41\) \\ \hline ROC-AUC (\(\%\)) & \(80.06\pm 5.14\) & \(85.81\pm 5.06\) & \(90.05\pm 2.67\) \\ \hline B-Acc (\(\%\)) & \(73.84\pm 3.02\) & \(75.56\pm 3.35\) & \(79.61\pm 3.52\) \\ \hline Precision (\(\%\)) & \(80.07\pm 3.12\) & \(92.04\pm 0.93\) & \(93.27\pm 1.47\) \\ \hline Recall (\(\%\)) & \(79.91\pm 4.25\) & \(89.72\pm 4.32\) & \(90.9\pm 1.01\) \\ \hline F1-score (\(\%\)) & \(79.89\pm 2.57\) & \(90.83\pm 2.59\) & \(92.06\pm 0.84\) \\ \hline \end{tabular}
\end{table}
Table 3: Pain detection performance (mean \(\pm\) std) of XGBoost model using BVP signal.
Figure 3: The classification matrix for detecting the presence of pain using BVP signal and XGBoost model during the cold pressor test: (a) no pain vs. low pain; (b) no pain vs. medium pain; (c) no pain vs. high pain. The pain becomes more detectable at higher levels of pain, as one can be expected.
Table 3 presents the result of 5-fold stratified cross-validation. The reported results in Table 4. represent that the BVP signal can distinguish between low and high pain with a ROC-AUC of \(95.09\pm 0.91\%\). According to the results, there is an average 88.5 % chance to distinguish low pain from medium pain. There is also an average 83.41% chance to separate high pain from medium pain. The classification matrix of each classification task is shown in Figure 4. The highest F1-score is 95.09 % between low pain and high pain.
**Multiclass Pain Intensity Classification and Regression**
We explored how the BVP signal identifies the pain intensity when we know that a patient is experiencing pain. We explored different machine learning models to classify pain into three classes: Low, Medium, and High pain. As shown in Table 5, we found that the XGBoost model gives the best results with an average F1-score of \(80.03\pm 1.28\%\), balanced accuracy of \(78.09\pm 1.39\%\), and ROC-AUC of \(92.25\pm 0.95\%\). Figure 5 presents the classification matrices and ROC-AUC for XGBoost for the 3-class pain level classification. Table 6 summarizes classification performance of XGBoost model. This result is considerably better than the naive benchmark rule in which all samples are classified as medium pain, which is the majority class in our dataset. As can be seen in the last row of Table 5, the balanced accuracy for the naive
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Metric** & **Low vs. Medium** & **Low vs. High** & **Medium vs. High** \\ \hline Accuracy & \(85.66\pm 0.97\) & \(89.97\pm 0.98\) & \(83.41\pm 1.49\) \\ \hline ROC-AUC & \(88.5\pm 1.07\) & \(95.09\pm 0.91\) & \(91.31\pm 1.02\) \\ \hline B-ACC & \(78.92\pm 1.64\) & \(85.96\pm 0.64\) & \(83.41\pm 1.49\) \\ \hline Precision & \(89.33\pm 0.92\) & \(92.75\pm 0.35\) & \(82.84\pm 2.28\) \\ \hline Recall & \(92.06\pm 1.21\) & \(93.97\pm 1.51\) & \(83.67\pm 3.04\) \\ \hline F1-score & \(90.66\pm 0.63\) & \(93.35\pm 0.71\) & \(83.2\pm 1.57\) \\ \hline \end{tabular}
\end{table}
Table 4: Performance of pain level binary classification (mean \(\pm\) std) of XGBoost model using BVP signal.
Figure 4: The classification matrix for classification of different levels of pain using XGBoost and BVP signal during the cold pressor test: (a) Low pain vs. Medium pain; (b) Low pain vs. High pain; (c) Medium pain vs. High pain.
benchmark rule is 33% for the 3-class pain level classification, and the random average ROC-AUC value for that is 50%.
Since the pain levels are ordinal, it is desirable to treat pain assessment as a regression task instead of a classification task. The cost of misclassifying high pain as a medium, or high pain as a low pain are not the same. A multiclass classification model treats both errors equally, while a regression model with continuous response values tends to minimize the distance between the predicted pain intensity and the actual pain intensity. For continuous pain estimation to predict pain intensity on a numerical rating scale between 1-10, we explored linear
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Pain level** & **Precision** & **Recall** & **F1-score** \\ \hline Low pain & 0.70 & 0.71 & 0.71 \\ \hline Medium pain & 0.79 & 0.80 & 0.80 \\ \hline High pain & 0.84 & 0.83 & 0.84 \\ \hline \end{tabular}
\end{table}
Table 6: Classification performance of XGBoost model for pain intensity classification using BVP during cold pressor test. The average accuracy is 80%.
Figure 5: Classification matrix for pain intensity classification using XGBoost and BVP signals among three different levels of pain: low pain (LP), medium pain (MP), and high pain (SP).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Model** & **F1-score (\%)** & **Balanced Accuracy** & **ROC-AUC (\%)** \\ & & **(\%)** & \\ \hline Logistic Regression & \(44.01\pm 1.45\) & \(42.01\pm 2.02\) & \(59.39\pm 1.82\) \\ \hline Support Vector Machines & \(53.33\pm 0.58\) & \(50.9\pm 1.56\) & \(68.33\pm 1.41\) \\ \hline Random Forest & \(77.95\pm 1.86\) & \(75.02\pm 1.66\) & \(91.1\pm 0.77\) \\ \hline AdaBoost & \(61.38\pm 2.05\) & \(59.62\pm 2.31\) & \(71.74\pm 1.18\) \\ \hline XGBoost & \(\mathbf{80.03\pm 1.28}\) & \(\mathbf{78.09\pm 1.19}\) & \(\mathbf{92.25\pm 0.95}\) \\ \hline Random benchmark & \(26.58\pm 0.1\) & \(33.33\pm 0.0\) & \(50.0\pm 0.0\) \\ \hline \end{tabular}
\end{table}
Table 5: Exploring model performance of different machine learning models for three-class pain intensity classification task using BVP signal during cold pressor test.
regression, support vector regressor, random forest, AdaBoost, and XGBoost. Performance evaluation of these models is presented in Table 7. XGboost gave the best pain estimation result with an average MAE of \(0.94\pm 0.04\) and an average RMSE of \(1.26\pm 0.06\) from 5-fold stratified cross-validation.
**Exploring Feature Importance for Pain Recognition**
To determine the contribution of each of the 44 features extracted directly from the raw BVP signals and indirectly from the signal IBIs, we calculated the impurity-based feature importance for each feature using 5-fold stratified cross-validation using ExtraTree classifier. Then we averaged the importance of features across the folds. Figure 6 presents the top features for pain classification with feature importance above the 0.025 threshold. See Table 1 and 2 for the definition of these features. The feature importance criteria allow us to evaluate the extent to which each feature contributes to the success of the automated pain assessment model and serves as a mechanism to generate a hypothesis of potential relationship that can be tested directly through experiments.
Our results showed that the Auto-Mutual Information (AMI) from the histogram of BVP signal was the most important feature for pain level classification. The auto-mutual information (AMI) function describes the amount of common information between the original time series \(x_{t}\) and the time-shifted time series \(x_{t+\tau}\). The AMI function is a nonlinear equivalent of the auto-correlation function based on the Shannon entropy. AMI can be calculated as:
\[AMI(\tau)=\frac{1}{d-1}\log_{2}\]
where \(P(x_{t})\) is the estimated probability functions, \(d\) is the control parameter, and \(\tau\) is the embedding delay. AMI of the signal shows how the signal can predict its time-shifted version. A signal with higher AMI values denotes better predictability. Figure 7 depicts the AMI feature from the histogram of the BVP signal at different pain states. The Dunn's test verified a significant difference in histogram AMI of BVP signal in the presence or absence of pain, in all pain levels from low to medium and high pain. Moreover, this AMI feature from the BVP signal is significantly different in high pain compared to both low pain and medium pain (\(p\)-value \(<\)0.05). It indicates the importance of the auto mutual information as a significant non-linear biomarker from the BVP signal for pain intensity estimation.
As can be seen in Figure 8, the standard deviation of the BVP signal is a good feature for detecting the presence or absence of cold pain, which shows a significant difference across pain and no pain states (\(p\)-_value_\(<\)0.05). This feature could not only significantly distinguish the pain and no pain states, but also it could significantly differentiate the high pain from medium pain and low pain (\(p\)-_value_\(<\)0.05 ). On the other hand, the standard deviation of RR intervals also shows a significant difference across pain and no pain states (\(p\)-_value_\(<\)0.05).
Figure 7: Auto Mutual Information (AMI) feature from the histogram of BVP signal at different pain states. There is a significant difference between no pain and each of the other pain levels (\(p\)-_value_\(<\)0.05). Moreover, the AMI feature of subjects experiencing high pain is significantly different from other pain levels (\(p\)-_value_\(<\)0.05).
However, this variable couldn't significantly differentiate pain intensity among low, medium, and high pain levels. This finding shows the value of analyzing the raw BVP signal, not just for the heart rate variability measures from corresponding interbeat intervals.
We also explored the features that significantly detected pain, although they couldn't differentiate among the pain levels. One of these features was IBI-sdell which returns the ellipse area fitted into the Poincare plot of the intervals between heartbeats and computed as SD1*SD2*\(\pi\). A Poincare plot, named after Henri Poincare, is a type of recurrence plot used to quantify self-similarity in processes, usually periodic functions. It is also known as a return map. Poincare plots can be used to distinguish chaos from randomness by embedding a data set in higher-dimensional state space. SD1 means the standard deviation of the Poincare plot perpendicular to the line-of-identity and reflects the level of short-term heart rate variability, and SD2 represents the standard deviation of the Poincare plot along the line-of-identity and indicates the level of long-term heart rate variability[34]. Figure 9 shows the changes in this feature during the cold pressor test. The Poincare plot has been explored in the literature as a heart rate variability measure in other domains such as stress evaluation[35] or detecting the Atrial Fibrillation[36]. However, to the best of our knowledge, our current work for the first time identifies the potential value of the Poincare plot of interbeat intervals for automated pain detection. The other exciting feature we found in this experiment was the IBI_Max, which was significantly different when the participants had no or low pain than when they experienced medium or high pain. Figure 10 represents the changes in the maximum interbeat interval during the cold pressor test.
Figure 8: Standard deviation BVP signal in presence or absence of pain. This feature has a significant difference (_p-value_\(<\)0.05) between pain vs. no pain conditions.
## Conclusion
In this study, we explored BVP signal for automated pain assessment using machine learning algorithms. BVP signals were collected from thirty-two healthy subjects during the cold pressor test. We explored a novel set of 24 features directly from BVP signal, and 20 heart rate variability features from BVP inter-beat intervals from time-domain, frequency-domain,
Figure 10: There is a significant difference (_p-value_\(<\)0.05) between Maximum IBIs of subjects when they are experiencing no or low pain vs. when they are experiencing medium or high pain.
Figure 9: The SD1*SD2*\(\pi\) value of the Poincaré plot of input Inter Beat Intervals is significantly different (_p-value_\(<\)0.05) between subjects experiencing no pain and each of other pain levels (low, medium, and high pain). Thus, IBI_Sdell is a valuable feature for detecting the presence of pain.
and nonlinear dynamics metrics. We explored different machine learning models: Logistic Regression, SVR, Random Forest, AdaBoost, and XGBoost. In general, among all the models, XGBoost gave the best model performance for both pain classification and pain intensity estimation tasks. The XGBoost pain detection model to detect low pain, medium pain and high pain with no pain as the baseline achieved an average F1-score of \(80.06\pm 5.14\%\), \(85.81\pm 5.06\%\), \(90.05\pm 2.67\%\) respectively. For continuous pain estimation on a numerical rating scale between 1-10, the XGBoost model achieved an average MAE of \(0.94\pm 0.04\) and an average RMSE of \(1.26\pm 0.06\). We also achieved an average F1-score of \(80.03\pm 1.28\) % for multiclass classification among low, medium, and high pain. To the best of our knowledge, this is the first study that investigated this novel set of features from BVP signal for pain detection and pain intensity estimation. Our results show that BVP is a promising non-invasive signal for automated pain assessment.
For future work, we plan to continue our efforts in investigating the performance of other non-invasive signals collected through the cold pressor test for automated pain assessment. We will also explore data fusion techniques to find the possible benefits of combing different modalities. Although we validated our model with healthy subjects and received promising results, it would be valuable if we could also validate with pain patients in a clinical context to investigate the effect of specific diseases on physiological responses to pain.
## Acknowledgments
This material is based upon work supported by the National Science Foundation's Division of Information and Intelligent Systems under Grant No. 1838796. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
## Author contributions
FP contributed to conceptualization, data curation, formal analysis, investigation, methodology, visualization, writing the original draft, review and editing. YL contributed to funding acquisition, resources, review and editing. SK contributed to funding acquisition, supervision, investigation, resources, validation, project administration, review and editing. All authors reviewed and approved the manuscript.
## Competing interests
The authors declare no competing interests.
## Data availability
All the data used in this study are managed and protected under IRB#: 191215 approved by the Northeastern University human subject research protection Office. We obtained consent from all the participants involved for participation in the experiments. All methods were carried out in accordance with relevant guidelines and regulations. |
2303.08048 | Active target TPC for study of photonuclear reactions at astrophysical
energies | A setup designed to study photonuclear reactions at astrophysical energies -
an active target Time Projection Chamber was developed and constructed at the
Faculty of Physics, University of Warsaw. The device was successfully employed
in two experiments at the Institute of Nuclear Physics Polish Academy of
Sciences in Cracow, in which {\gamma}- and neutron-induced reactions with CO2
gas target were measured. The reaction products were detected and their momenta
reconstructed. Preliminary results are shown. | M. Kuich, M. Ćwiok, W. Dominik, A. Fijałkowska, M. Fila, A. Giska, Z. Janas, A. Kalinowski, K. Kierzkowski, C. Mazzocchi, W. Okliński, M. Zaremba, D. Grządziel, J. Lekki, W. Królas, A. Kulińska, A. Kurowski, W. Janik, T. Pieprzyca, Z. Szklarz, M. Scholz, M. Turzański, U. Wiącek, U. Woźnicka, A. Caciolli, M. Campostrini, V. Rigato, M. Gai, H. O. U. Fynbo | 2023-03-13T14:00:06Z | http://arxiv.org/abs/2303.08048v1 | # Active target TPC for study of photonuclear reactions at astrophysical energies1
###### Abstract
A setup designed to study photonuclear reactions at astrophysical energies - an active target Time Projection Chamber was developed and constructed at the Faculty of Physics, University of Warsaw. The device was successfully employed in two experiments at the Institute of Nuclear Physics Polish Academy of Sciences in Cracow, in which \(\gamma\)- and neutron-induced reactions with CO\({}_{2}\) gas target were measured. The reaction products were detected and their momenta reconstructed. Preliminary results are shown.
## 1 Introduction
One of the most important open questions in nuclear astrophysics concerns the creation of carbon and oxygen in the stars. Synthesis of carbon and oxygen happens in helium-burning thermonuclear reactions in the star's core, the triple-alpha reaction and \({}^{12}\)C(\(\alpha\),\(\gamma\))\({}^{16}\)O, respectively. The basic observable to resolve the reaction rates is the cross-section, which has to be determined at the relevant energies (at the Gamow peak) and also is required as input information for star evolution models [1]. One way to determine
the reaction cross-section is to study its inverse reaction by using an active-target together with photon and neutron beams. In this paper, we present preliminary studies to validate this approach.
## 2 Experiments
Studying the \(\gamma\)- and n-induced reactions on \({}^{12}\)C and \({}^{16}\)O requires a detection system that allows for reconstructing the momenta of all the charged reaction products. An ideal tool is an active-target time-projection chamber (active-target TPC), filled with CO\({}_{2}\) gas. The Warsaw active-target TPC was developed for this purpose. It consists of a \(33\times 20\times 20\) cm\({}^{3}\) active volume, immersed in a vacuum vessel equipped with a control system to maintain a constant gas pressure inside. The active volume is surrounded by field-shaping electrodes and terminated with a cathode plate on one end and a stack of 3 Gas Electron Multiplier foils as amplification section followed by a planar, 3-coordinate (U, V, W), redundant readout plane at the other end. The arrays of U-, V-, and W-strips register the charge deposit in 2-dimensions, while time-distribution of the charge collected at the electrode, combined with the drift velocity of the electrons in the given gas mixture and drift field, allows for determining the third coordinate. The device allows for full and unambiguous kinematic reconstruction of multiple-particle events [2, 3]. The first commissioning measurements with the Warsaw active-target TPC were conducted in 2021 at the Institute of Nuclear Physics, Polish Academy of Sciences in Cracow (IFJ).
In the first experiment, a 1.03 MeV proton beam from the Van de Graaff accelerator with currents of about 10-20 \(\mu\)A was used to produce 13.1 MeV \(\gamma\)-rays in the \({}^{15}\)N(p,\(\gamma\))\({}^{16}\)O reaction. For this purpose, a \({}^{15}\)NCr target (about 1.3\(\times 10^{18}\) atoms/cm\({}^{2}\)), produced in reactive ion sputtering on a Ta backing at the National Laboratories of Legnaro, was used. The \(\gamma\) beam intensity was monitored by a NaI detector positioned at the side of the target. The TPC, filled with CO\({}_{2}\) at the absolute pressure of 250 mbar, was placed right behind the target station and the produced \(\gamma\)-rays interacted with the gas of the TPC, where they induced the photo-disintegration of \({}^{12}\)C and \({}^{16}\)O. Charged reaction products were detected. Examples of \({}^{16}\)O(\(\gamma\),p)\({}^{15}\)N and \({}^{16}\)O(\(\gamma\),\(\alpha\))\({}^{12}\)C event candidates are shown in Fig. 1. Raw charge distributions in strip (U-, V-, W-) coordinate and time are presented as 2D plots. A manual reconstruction method was used to determine the emitted particle position in the chamber and measure its range. Identification of the two-particle event as due to \({}^{16}\)O(\(\gamma\),p)\({}^{15}\)N or \({}^{16}\)O(\(\gamma\),\(\alpha\))\({}^{12}\)C as well as the energy reconstruction was done by comparing the track length of emitted particles with SRIM simulations [4]. Examples of event reconstruction is presented in the 1D plots in Fig. 1 as charge distribution along the track(s)
with fitted Bragg curve(s). A preliminary distribution of the reconstructed energy of the protons is depicted in Fig.2 (partial statistics was analyzed).
The mean energy of the protons, \(\langle E_{\rm P}\rangle=890\pm 24\) keV, is in 3\(\sigma\) agreement with expected 966 keV.
The second experiment took place at the Impulse Neutron Generator (IGN-14), where 14 MeV neutrons were produced in the d(t,n) reaction and interacted with the CO\({}_{2}\) gas, which was kept at 80 mbar, in the TPC positioned just behind the tritium target. The neutron flux was monitored with a \({}^{3}\)He counter and additionally estimated with activation of aluminium targets. The typical neutron flux amounted to about \(3\times 10^{4}\) n/s/cm\({}^{2}\). Two example events for \({}^{12}\)C(n,n')\({}^{12}\)C\({}^{*}\) and \({}^{16}\)O(n,\(\alpha\))\({}^{9}\)Be reactions are presented in Fig. 3. Partial statistics was manually analyzed and the emitted particles identified. The majority of the two-particle events were classified as \({}^{12}\)C(n,\(\alpha\))\({}^{9}\)Be, while three-particle events as carbon dissociation into 3 \(\alpha\) particles. Preliminary reconstruction of the particles momenta indicates that the \({}^{12}\)C(n,n')\({}^{12}\)C\({}^{*}\) event in the Fig. 3 corresponds to the decay of the Hoyle state at 7654 keV by 3\(\alpha\) emission.
Figure 1: Event reconstruction example for an \({}^{16}\)O(\(\gamma\),p)\({}^{15}\)N (_left_) and \({}^{16}\)O(\(\gamma\),\(\alpha\))\({}^{12}\)C (_right_). The 2D plots show the raw data (U, V, W strip position vs time bin) re-scaled to mm. The bottom-right plot shows the charge distribution along the track(s) and the charge profile(s) fit.
Figure 2: Reconstructed proton energy spectrum from \({}^{16}\)O(\(\gamma\),p)\({}^{15}\)N.
## 3 Summary
An active-target TPC dedicated for studying reactions of astrophysical interest at the relevant energies with \(\gamma\) or neutron beams was developed at the University of Warsaw. The first experiments employing it were conducted in 2021 at IFJ, where \({}^{16}\)O(\(\gamma\),\(\alpha\))\({}^{12}\)C, \({}^{16}\)O(\(\gamma\),p)\({}^{15}\)N, \({}^{12}\)C(\(\gamma\),3\(\alpha\)), \({}^{12}\)C(n,\(\alpha\))\({}^{9}\)Be and \({}^{12}\)C(n,n')\({}^{12}\)C\({}^{*}\) were observed. Preliminary analysis shows that the Warsaw active-target TPC is an adequate tool for measuring such reactions induced by non-charged particles.
We would like to thank H. Czyrkowski and R. Dabrowski for their support in the preparation of the equipment. Scientific work was supported by the National Science Centre, Poland, contract no. 2019/33/B/ST2/02176, by the University of Warsaw, Poland, through the Interdisciplinary Centre for Mathematical and Computational Modelling, comp. alloc. no. G89-1286.
|
2310.17050 | Exploring Question Decomposition for Zero-Shot VQA | Visual question answering (VQA) has traditionally been treated as a
single-step task where each question receives the same amount of effort, unlike
natural human question-answering strategies. We explore a question
decomposition strategy for VQA to overcome this limitation. We probe the
ability of recently developed large vision-language models to use human-written
decompositions and produce their own decompositions of visual questions,
finding they are capable of learning both tasks from demonstrations alone.
However, we show that naive application of model-written decompositions can
hurt performance. We introduce a model-driven selective decomposition approach
for second-guessing predictions and correcting errors, and validate its
effectiveness on eight VQA tasks across three domains, showing consistent
improvements in accuracy, including improvements of >20% on medical VQA
datasets and boosting the zero-shot performance of BLIP-2 above chance on a VQA
reformulation of the challenging Winoground task. Project Site:
https://zaidkhan.me/decomposition-0shot-vqa/ | Zaid Khan, Vijay Kumar BG, Samuel Schulter, Manmohan Chandraker, Yun Fu | 2023-10-25T23:23:57Z | http://arxiv.org/abs/2310.17050v1 | # Exploring Question Decomposition for Zero-Shot VQA
###### Abstract
Visual question answering (VQA) has traditionally been treated as a single-step task where each question receives the same amount of effort, unlike natural human question-answering strategies. We explore a question decomposition strategy for VQA to overcome this limitation. We probe the ability of recently developed large vision-language models to use human-written decompositions and produce their own decompositions of visual questions, finding they are capable of learning both tasks from demonstrations alone. However, we show that naive application of model-written decompositions can hurt performance. We introduce a model-driven _selective decomposition_ approach for second-guessing predictions and correcting errors, and validate its effectiveness on eight VQA tasks across three domains, showing consistent improvements in accuracy, including improvements of \(>20\%\) on medical VQA datasets and boosting the zero-shot performance of BLIP-2 above chance on a VQA reformulation of the challenging Winoground task. Project Site: [https://zaidkhan.me/decomposition-Oshot-vqa/](https://zaidkhan.me/decomposition-Oshot-vqa/)
## 1 Introduction
On a question-answering test, humans are able to answer some questions in a single step, while other questions require potential deliberation and second-guessing. Visual question answering (VQA) [1; 2; 3] has traditionally been treated as a single-step task. Models only get one chance for each question, and each question receives equal amounts of computation. This is incongruent to the natural human approach to such tasks, where simple perceptual questions are quickly answered, while harder reasoning questions are allocated more time and computation.
The emergence of task decomposition techniques for large language models (LLMs) [4] is a potential solution to this incongruency. Task decomposition techniques _prompt_ a LLM to break down an initial complex task into simpler subtasks that can each be solved independently. However, VQA has not benefited from advances in task decomposition techniques for two reasons. First, many task decomposition techniques [5; 6] have only been effective in the regime of very large unimodal LLMs with parameters in the 30B+ range, while the LLMs underlying vision-language models are typically much smaller, only recently reaching \(\approx\) 13b parameters for publicly available models[7; 8; 9]. Second, existing methods for prompting vision-language models (VLMs) during VQA tasks focus on other use cases, such as providing more examples of the input task [10] or more information about the image [11]. Given the recent emergence of multi-billion scale VLMs, our main research question is:
_Can multi-billion scale vision-language models benefit by approaching reasoning-heavy VQA as a two-step rather than a single-step problem using decomposition?_
To this end, we explore a form of task decomposition called _question decomposition_ as a strategy for zero-shot visual question answering with large VLMs. Although question decomposition has been explored for specific unimodal QA[12; 13; 14], it has not been explored as a strategy for multimodal
tasks such as VQA with emerging large VLMs [7; 8; 9; 15; 16], and little is known about the in-context learning ability of emerging large VLMs.
First, we probe the in-context learning ability [17; 18; 19] of both LMs and VLMs to exploit oracular question decompositions written by humans. We design experiments to understand whether models can learn to use decompositions without explicit training, and whether they are merely exploiting keywords and surface statistics when they use decompositions. Second, we conduct a series of experiments, again using in-context learning, to understand how well models can _produce_ decompositions that correct the errors of a fixed VQA model. Last, we propose and study an entirely model-driven closed-loop approach mimicking a simplified form of a classic human second-guessing strategy: second-guess answers based on how confident you are about them. We conduct experiments across three domains (art, natural images, medical), eight datasets, three model families, and model sizes ranging from 80M to 11B parameters. Our contributions can be listed as follows:
1. We experimentally demonstrate that large VLMs based on instruction-tuned LLMs can use decompositions to improve their predictions without any training, and are not merely exploiting changes in word statistics introduced by the decomposition. (Sec. 3)
2. We quantitatively show that generative, instruction-tuned language models are capable of writing effective decompositions zero-shot, without task-specific training. (Sec. 4)
3. We find that applying decomposition naively to every question instance harms performance rather than helps (Fig. 4), and propose _selective decomposition_ (Fig. 3), a modular, model-agnostic, training-free strategy that treats VQA as a two-step task. (Sec. 5)
4. We apply selective decomposition to a testbed of 8 datasets and show that it consistently improves performance (Tabs. 3 and 4), with gains of \(>20\%\) on medical VQA datasets[20; 21; 22], and boosts the performance of BLIP-2[7] above chance on the Winoground[23] benchmark when formulated as a VQA task. (Sec. 5).
## 2 Background
### Problem Setting
In zero-shot VQA, a model \(f:v,q\to a\) is given an image \(v\), a question \(q\), and outputs an answer, \(a\). Unlike traditional VQA, the model \(f(\cdot)\) has never seen \(v\), \(q\), \(a\) triplets. In practice, such a setting often occurs when \(f(\cdot)\) is a foundation model that contains several billion parameters and has undergone large scale pretraining. It is undesirable to retrain such an \(f(\cdot)\) on visual question answering pairs specifically, both for reasons of computational convenience and because finetuning can degrade robustness[24]. The most common case is that \(f(\cdot)\) is an autoregressive, generative language model that can optionally be conditioned on the visual modality. We restrict ourselves to such models, which approximate \(\Pi_{k=1}^{N}p(t_{k+1}|t_{1:k},v)\), where \(v\) is an image and \(t_{1:k}\) is a sequence of language tokens. In a zero-shot VQA setting, it is expected that \(f(\cdot)\) understands that it has been given a question \(q\) and should produce the correct answer \(a\) to the question \(q\) in the context of the image \(v\) by modeling it as \(p(a|v,q)\). This setting is common when evaluating very large frozen models, such as in [10; 11], with the exception that in our case, \(f(\cdot)\) is a vision-language model rather than a language-only model.
### Question Decomposition
Question decomposition is the task of decomposing a complex main question into one or more simpler subquestions that are logically related to the main question, answering those simpler subquestion(s), and then using the answered subquestion(s) to help in composing a final answer to the complex main question. This is a strategy often used by humans for problem solving. For example, consider a human being confronted by a wild animal they have never seen before. To answer the main question "_does this animal pose a threat to me?_" a human might decompose it into subquestions such as "_does the animal have sharp canine teeth?_" and "_does the animal have forward facing eyes typical of a predator?_" Knowing the answer to even one of these subquestions makes answering the main question much easier.
Adopting the terminology of Sec. 2.1, the task of question decomposition consists of _decomposing_ a main visual question \(v,q\) into one or more subquestions \((s_{1},s_{2},\ldots)\), answering those subquestions to
obtain the decomposition (\(((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{\prime}_{2})\ldots)\), and then using \(v,q\) together with the decomposition (\((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{\prime}_{2})\ldots)\) to obtain the final answer \(a\).
### What makes a good subquestion?
In Sec. 2.2, we gave a definition of decompositions that is dependent on notions of "simpler" and "logically related". It is challenging to make these notions precise, and difficult to operationalize them to measure whether a sequence of text really is a valid subquestion according to these notions. To sidestep these difficulties, we adopt a consequentialist view of whether a subquestion is "good", following a common consequentialist tradition in artificial intelligence as a whole [25]. We evaluate the "goodness" of a subquestion by measuring the effect of the subquestion. Concretely, let \(v,q,a\) be a visual question triplet where \(v\) is the image, \(q\) is the question, and \(a\) is the answer. Let \(p_{f}(a|v,q)\) be the probability of the ground-truth answer \(a\) as assessed by a visual question answering system \(f(\cdot)\). We regard a decomposition of \(v,q\) consisting of series of subquestions and their answers (\((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{\prime}_{2})\ldots)\) as "good" if \(p_{f}(a|v,q)<p_{f}(a|v,q,((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{ \prime}_{2})\ldots)\)), that is, if seeing the decomposition increases the probability of the ground-truth answer \(a\). In practice, we adopt a simpler criterion that takes the consequentialist definition to the limit. _We regard a decomposition as "good" if seeing the decomposition induces the model to produce the true ground-truth answer \(a\)._
### Scope & Limitations
We only consider in-context learning techniques for zero-shot VQA, and do not explore full model training in this work. The class of model we are interested in are instruction-following vision-language models based on large language models [7; 8; 9]. This excludes previous-generation vision-language models that are not based on multi-billion parameter instruction-tuned language models [26; 27; 28; 29; 30].
Figure 1: Model-produced decompositions and their error correcting effects. The decompositions and before/after answers shown above were produced by prompting BLIP-2 models based on FLAN-T5 to produce a subquestion, answering the subquestion with the model and feeding the question and answered subquestion back to the model: it is correcting itself. Before answers are wrong and After answers are correct.
Not all datasets are suitable for exploring question decomposition, as some primarily test low-level perception skills rather than high-level reasoning skills that would benefit from a decomposition. We thus limit our evaluation to datasets that explicitly test for high-level reasoning / knowledge-based ability. We are few-shot for the task of _visual question decomposition_ but zero-shot for the task of _visual question answering_.
## 3 How well can models use decompositions?
Our goal in this section is to understand the ability of vision-language models based on large language models to _consume_ decompositions. The hypothesis we test is: _When provided with gold-standard decompositions on a VQA task, a model's error rate should be lower than without the gold-standard decompositions._ Evaluating this hypothesis presents a number of challenges. First, how can we obtain a set of decompositions that are apriori "known to be good"? Second, how should the model be fed the decompositions?
To find a source of apriori "good" decompositions, we turn to the literature on internal consistency in visual question answering. To probe consistency in question answering systems, several datasets [31; 32; 33] have been proposed. A particularly relevant case of such a dataset is VQA-Introspect [32], which probes consistency along a reasoning-perception axis. Selvaraju et al. [32] annotate each question in the VQAv2[1] validation set as a high-level "reasoning" question or a low-level "perception" question. For each "reasoning" question, Selvaraju et al. [32] write 1-3 "perceptual" subquestions which are implied by the reasoning question. For example, given a high-level reasoning question such as "Can I eat this banana?" a model that says "yes" should also reply "yellow" to the low-level perception question "what is the color of the banana?" We propose to use the low-level perception questions and answers written for the high-level reasoning questions as an _oracular_ decomposition for the high-level reasoning question, on the basis that the low-level perception questions are simpler than the high-level reasoning question, entail the answer for the high-level reasoning question, and are written by humans.
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{4}{c}{Image + Text (3B)} & \multicolumn{4}{c}{Image+Text (13B)} \\ Decomposition & Overall & Boolean & Number & Other & Overall & Boolean & Number & Other \\ \hline None (Baseline) & 79 & 82.5 & 6.8 & 67.4 & 79.1 & 81.9 & 13.7 & 70.1 \\
**Oracle/Oracle** & 88.6 & 91.4 & 40.2 & 79.4 & 89.8 & 92.6 & 45.3 & 80.4 \\ \hline \(\lambda\) w.r.t Baseline & 9.6 & 8.8 & 33.3 & 12.1 & 10.8 & 10.7 & 31.6 & 10.3 \\
**Oracle/Self-Answer** & 84 & 87.3 & 21.4 & 72.8 & 83.9 & 87.1 & 26.5 & 73.2 \\ \(\lambda\) w.r.t Baseline & 5 & 4.8 & 14.5 & 5.4 & 4.8 & 5.2 & 12.8 & 3.1 \\
**Oracle/No Answer** & 83.3 & 85.9 & 27.4 & 74.9 & 84.1 & 86.9 & 27.4 & 75.2 \\ \(\lambda\) w.r.t Baseline & 4.4 & 3.4 & 20.5 & 7.6 & 5.1 & 5 & 13.7 & 5.1 \\
**Oracle/Oracle (Scrambled)** & 84.9 & 87.9 & 37.6 & 74.8 & 86 & 88.9 & 39.3 & 76.2 \\ \(\lambda\) w.r.t Baseline & 5.9 & 5.4 & 30.8 & 7.4 & 6.9 & 7 & 25.6 & 6 \\ \hline \multicolumn{1}{c}{Text (3B)} & \multicolumn{4}{c}{Text (13B)} & \multicolumn{4}{c}{Text (13B)} \\ Decomposition & Overall & Boolean & Number & Other & Overall & Boolean & Number & Other \\ \hline None (Baseline) & 57.4 & 64.4 & 6 & 32.2 & 63.8 & 71.9 & 6.8 & 34.3 \\
**Oracle/Oracle** & **72** & **75.8** & **37.6** & **58.4** & **81.5** & **85.1** & **45.3** & **69** \\ \hline \(\lambda\) w.r.t Baseline & 14.5 & 11.4 & 31.6 & 26.2 & 17.8 & 13.2 & 38.5 & 34.7 \\
**Oracle/Self-Answer** & 62.1 & 65.8 & 23.1 & 48.8 & 68 & 72.1 & 20.5 & 53.7 \\ \(\lambda\) w.r.t Baseline & 4.6 & 1.4 & 17.1 & 16.7 & 4.3 & 0.2 & 13.7 & 19.4 \\
**Oracle/No Answer** & 64.8 & 68.7 & 21.4 & 50.9 & 75.2 & 79 & 26.5 & 62.2 \\ \(\lambda\) w.r.t Baseline & 7.3 & 4.3 & 15.4 & 18.8 & 11.4 & 7.1 & 19.7 & 27.9 \\
**Oracle/Oracle (Scrambled)** & 60.5 & 62.6 & 28.2 & 53.3 & 78.9 & 83.1 & 40.2 & 63.5 \\ \(\lambda\) w.r.t Baseline & 3.1 & -1.8 & 22.2 & 21.1 & 15.1 & 11.3 & 33.3 & 29.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Models are capable of using decompositions written by humans to provide more accurate answers. The gray \(\overline{\text{rows}}\) are the baseline performance with no decomposition, and each \(\Delta\) is calculated w.r.t to this baseline. Oracle/Oracle rows denoting oracle subquestions/oracle answers, have the highest \(\Delta\). “Self-Answer” means the model answered oracular subquestions itself, and “No Answer” indicates the answer was left out entirely. Image+Text indicates a vision-language model (BLIP-2) was tested with multimodal inputs, while Text indicates the corresponding language model inside BLIP-2 (FLAN-T5) was tested with text only inputs. Validation split of VQA Introspect is the dataset (22k reasoning questions with their associated decompositions).
The second challenge lies in using a decomposition consisting of a series of subquestions and answers (\((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{\prime}_{2})\ldots\)) alongside a main visual question \((v,q)\). Recall that we cannot train the model \(f(\cdot)\) being used for the visual question answering task, and for any arbitrary model, it is unknown whether the model has ever seen the _exact_ task of decomposition-aided visual question answering. Thus, we rely on the in-context learning ability [18; 19] of large language models to learn to perform the tasks we require from a demonstration of the task. We handcraft a simple prompt to contain a main visual question \(v\), \(q\) from the VQAv2 validation set, along with one human-written oracular subquestion and human-written answer \(q^{\prime}\), \(a^{\prime}\) for the main question \(v,q\) extracted from VQA-Introspect. The prompt is simply
```
exemplar="Context:istheskyblue?no.aretherecloudsinthesky? yes.Question:whatweatherislikely?Shortanswer:rain" prompt=exemplar+"Context:{subquestion}?{subanswer}.Question:{question}?Shortanswer:"
```
**Experiments & Discussion** We use BLIP-2 [7]models based on the instruction-tuned FLAN-T5[34] in 3B and 13B sizes. Experiments are run on a combination of A6000s and TPUv3s, on the VQA-Introspect validation set containing 22K reasoning questions and their associated decompositions. The results are shown in Tab. 1. Compared to the baseline with no oracular decompositions, both the 3B/13B vision-language models and their corresponding language models show a clear ability to benefit from decompositions across a variety of question types, with numerical questions benefiting the most. Next, we seek to gain insight into the mechanism by which decompositions aid inference.
_Is the model merely exploiting changes in surface level statistics?_ If so, we would expect that perturbations that leave the statistics largely unchanged but significantly alter the meaning and logical structure of the oracle decomposition should not result in significantly different performance from the unaltered oracle decompositions. We remove the answers from the decomposition so that it only contains the subquestions, and test the effect of only using the subquestions. Compared to the oracle, there is a significant 50% relative decrease in improvement w.r.t to the baseline. Most of the subquestion answers are boolean, so removing them should not significantly change content words in the prompt, though it changes the meaning of the context significantly. Next, we allow the models to answer the subquestions themselves (Oracle/Self-Answer) rather than using the ground-truth questions. The accuracy of all models again decreases relative to the oracle answers, suggesting the answer and question together contribute to the result. Finally, we take the oracle subquestion+answer and scramble the words before providing them to the models. If the model is merely exploiting surface level statistics, the performance difference between the scrambled oracular decompositions and the original decompositions should be minimal, as the words are all the same. Again, we observe a significant drop compared to the original decompositions, suggesting that the models _are not merely exploiting changes in the surface level statistics_. Furthermore, human-written decompositions help in almost all cases over the no-decomposition baseline. **Note:**_See supplement for complete experimental details for all experiments_.
## 4 Can models produce effective decompositions?
In this section, we conduct experiments to answer the following research questions:
1. Can language models \(\leq\) 13B parameters learn to produce effective decompositions purely through demonstrations?
2. Is question decomposition mostly a linguistic ability, or is being able to see the image important?
Recall that a decomposition of a visual question \(v,q\) is a series of one or more _subquestions_ (\((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{\prime}_{2})\ldots\)) and their answers, with the constraint that the subquestions and answers should have the property that \(p_{f}(a|v,q)<p_{f}(a|v,q,((q^{\prime}_{1},a^{\prime}_{1}),(q^{\prime}_{2},a^{ \prime}_{2})\ldots))\) where \(p_{f}(\cdot)\) represents probability assessed by a given vision-language model \(f(\cdot)\) of the ground-truth answer \(a\). We simplify this task to the task of producing a _single_ subquestion \(q^{\prime}\) given a main visual question \(v,q\), and denote the process of decomposition with an arbitrary autoregressive language model \(g(\cdot)\) as \(d_{g}(v,q)\to q^{\prime}\). We hereafter refer to the model \(g(\cdot)\) that generates the decomposition as the _decomposer_. The subquestion is then answered by the vision-language model \(f(v,q^{\prime})=a^{\prime}\) to produce the subquestion-answer pair \((q^{\prime},a^{\prime})\). We call the question answering model the _recomposer_.
We then measure the effectiveness of the decomposition by measuring the _error correction rate_:
\[\mathrm{E}_{CR}=\frac{\sum_{i=1}^{N}\mathbb{1}[f(v_{i},q_{i})\neq a_{i}\wedge f(v_ {i},q_{i},(q^{\prime}_{i},a^{\prime}_{i}))=a_{i}]}{\sum_{i=1}^{N}\mathbb{1}[f(v _{i},q_{i})\neq a_{i}]} \tag{1}\]
where \((v_{i},q_{i},a_{i})\) represent the \(i\)-th image, question, and ground-truth answer respectively, and \(q^{\prime}_{i},a^{\prime}_{i}\) represent a subquestion generated by the decomposer model and the answer predicted for the subquestion by the recomposer (VQA) model, and \(\mathbb{1}[\mathit{cond}]\) is an indicator function that is equal to 1 when \(\mathit{cond}\) is true and 0 otherwise. Simply put, \(\mathrm{E}_{CR}\) measures the number of instances on which \(f(\cdot)\) initially predicted a wrong answer, but switched to the correct answer after seeing the decomposition generated by \(g(\cdot)\). Alternatively, this can be understood as the effectiveness of a decomposer model at correcting the errors of the recomposer model. The error induction rate \(\mathrm{E}_{IC}\) is the opposite:
\[\mathrm{E}_{IC}=\frac{\sum_{i=1}^{N}\mathbb{1}[f(v_{i},q_{i})=a_{i}\wedge f(v_ {i},q_{i},(q^{\prime}_{i},a^{\prime}_{i}))\neq a_{i}]}{\sum_{i=1}^{N}\mathbb{1 }[f(v_{i},q_{i})=a_{i}]} \tag{2}\]
and measures how often the produced decompositions flipped an answer that was initially correct to an incorrect answer. The decomposer can be the same as the recomposer if the model can do both tasks by following different prompts, as in the case of instruction-tuned models [38].
**Experiments & Discussion** We use BLIP-2 [7] based on the FLAN-T5[34] as the question answering model (recomposer). For the decomposers, we use FLAN-T5[34] models ranging in size from 80M parameters to 11B parameters, as well as the BLIP-2 models themselves. We use four VQA
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{A-OKVQA[35]} & \multicolumn{3}{c}{ArtVQA[36]} & \multicolumn{3}{c}{OK-VQA[37]} & \multicolumn{3}{c}{SL-AKE[20]} & \\ \cline{3-11} VQA Model & Decompeer & \(\mathrm{E}_{CR}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{CR}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}\) & \(\mathrm{E}_{CR}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) & \(\mathrm{E}_{IC}\) \\ \hline \multirow{6}{*}{BLIP2 (3B)} & Text & 12.5 & 28.21 & 20.31 & 7.1 & 42.06 & 83.15 & **97.6** & 31.38 & 63.56 & 14.12 & 35.41 & 66.73 & 80.0M \\ & Text & 10.42 & 5.08 & -9.56 & **59.81** & -9.4 & 52.47 & -1.25 & 49.29 & - & 25.0M \\ & Text & 9.2 & 30.76 & - & **12.22** & **41.12** & - & 8.64 & 29.58 & - & 15.25 & 36.83 & - & 780.0M \\ & & 7.99 & 15.11 & - & 6.06 & **21.03** & - & 7.98 & 15.01 & - & 16.38 & 37.68 & - & 3.0B \\ & & **ImageText** & **7.81** & **10.9** & **-** & **4.36** & **13.08** & - & **7.24** & **12.29** & - & **15.06** & **28.9** & - & 3.0B \\ \hline \hline \multirow{6}{*}{BLIP2 (11B)} & Text & 9.9 & 24.43 & - & 8.05 & 30.37 & - & 9.73 & 22.46 & - & **17.09** & 39.94 & - & 11.0B \\ \cline{2-11} & Text & 11.52 & 33.44 & 6.99 & 9.42 & 52.83 & 11.33 & 34.85 & 60.31 & 19.12 & 30.42 & 77.38 & 80.0M \\ \cline{1-1} & Text & 8.92 & 60.63 & - & 9.94 & 49.07 & - & 9.6 & 58.16 & - & 18.15 & 84.75 & - & 250.0M \\ \cline{1-1} & Text & 10.22 & 36.57 & - & **12.12** & **40.65** & - & 11.07 & 35.5 & - & 15.35 & 30.83 & - & 780.0M \\ \cline{1-1} & Text & 10.78 & **20.59** & - & **8.33** & **19.63** & - & 9.73 & **14.43** & - & 19.85 & 35.42 & - & 3.0B \\ \cline{1-1} & **ImageText** & **14.13** & **26.36** & - & 10.61 & **21.03** & - & **13.54** & **25.06** & - & **20.71** & **30.42** & - & 11.0B \\ \cline{1-1} & Text & 12.45 & 30.64 & - & 8.05 & 28.97 & - & **12.42** & 27.11 & - & 18.51 & 32.5 & - & 11.0B \\ \hline \hline \end{tabular}
\end{table}
Table 2: Models of drastically different sizes and multimodal capability can produce effective subquestions, as measured by \(\mathrm{E}_{CR}\) in Eq. (1), their ability to correct errors in a VQA Model. However, subquestions produced by larger models are less likely mislead the consuming VQA model, as measured by \(\mathrm{E}_{IC}\) in Eq. (2). “Text” indicates a language-only decomposer, while “Image+Text” indicates a vision-language decomposer. “Params” refers to the parameters of the decomposer. A pink highlight indicates when the decomposer and vqa model are the same (the model is talking to itself).
Figure 2: The procedure we use to generate a decomposition and use it as additional guidance during zero-shot VQA. The recomposer can be any question answering model, and the decomposer can be any generative language model, and some models can perform both roles, leading to self-talk. In experiments, we test if various decomposer candidates can learn to write effective subquestions purely from seeing a demonstration of the task.
datasets from three domains: ArtVQA[36] (art), SLAKE[20] (medical), and A-OKVQA[35] and OKVQA [37] (external knowledge VQA on natural images). We then carry out the procedure illustrated in Fig. 2 for each combination of decomposer, recomposer, and dataset. We handcraft three demonstrations of writing a subquestion for a question, in the form "_Reasoning Question: <question>? Perception Question: <subquestion>?_" For each \(v\), \(q\) pair in a dataset, we prompt the decomposer with the demonstration, followed by the question \(q\) as in Fig. 2, and measure E\({}_{CR}\) as in Eq. (1) and E\({}_{IC}\) as in Eq. (2) for each dataset. We show the results in Tab. 2. We find that _yes, language models \(\leq 13\)_B parameters can learn to produce effective decompositions just by viewing examples_. Decomposer size correlates positively with E\({}_{CR}\) (\(R^{2}=0.344\)), and negatively with E\({}_{IC}\) (\(R^{2}=.273\)) and the correlations are significant at \(\alpha=0.05\) across a larger collection of eight datasets used in Sec. 5. A human examination of the "subquestions" produced by smaller models shows that many of them are gibberish and not properly formed questions at all. Despite this, they surprisingly manage to maintain an \(E_{CR}\) that is sometimes higher than larger models. Finally, **the ability to decompose questions in the evaluated datasets _may be a primarily linguistic ability_** in that it is possible to ask effective subquestions about an image without being able to see the image, and the difference in effectiveness between the Image+Text BLIP-2 models and the text-only FLAN-T5 models of a similar size is on average \(\approx 10\%\) of the base error rate (but this may not be true of other VQA datasets).
## 5 Selective Decomposition Works Better Than Naive Decomposition
One problem shows up in Sec. 4, which is that applying decompostions to _every_ question can hurt performance, by flipping answers that were initially correct to be incorrect. If we were able to decompose only wrong answers, we would always see a net gain in performance due to the error correction of decompositions. However, in a realistic setting, we do not know apriori that our answers are wrong, and thus run the risk of flipping an answer that was initially correct to be incorrect by applying a decomposition that is misleading.
We call this the _second-guessing_ problem. To deal with the second-guessing problem, we propose following an intuitive human strategy: stick with your initial answer on questions you are confident about, and only second-guess (apply a decomposition) for questions you are not confident about. Language models can be surprisingly well calibrated[39], meaning that the probability they assess to an output sequence they produce is often well-correlated with the probability that the produced output sequence is the "correct" one for a given task. We make use of this property to treat visual question answering as a selective prediction [40] task, using the language models's confidence as a decision score to determine whether we should apply a decomposition to a instance or stick with the original answer.
We describe the algorithm in pseudocode in Fig. 3. The _selective decomposition_ procedure transforms VQA from a single-step task to a two-step task given a decomposer model, a recomposer model, a confidence threshold \(\tau\), and a visual question pair \(v\), \(q\). An initial answer \(\hat{a}\) and confidence \(p(\hat{a})\) is solicited from the recomposer model. If \(p(\hat{a})<\tau\), the decomposition procedure is invoked, and a subquestion and answer pair \((q^{\prime},a^{\prime})\) are generated by the decomposer and recomposer working together. The recomposer model is then allowed to "second-guess" the inital answer \(\hat{a}\) with the decomposition \((q^{\prime},a^{\prime})\) as additional context. The decomposer and recomposer can be the same model or different models. We experiment with both scenarios. This introduces an extra hyperparameter \(\tau\) into the inference procedure.
**Experiments & Discussion** In Fig. 4, we show the effect of different values of \(\tau\) on the accuracy of selective decomposition with several decomposers. Across all datasets and all models, there is a wide range of \(\tau\) (expressed as percentiles) for which selective decomposition improves predictive accuracy. At the same time, we clearly demonstrate the _second guessing_ problem in Fig. 4. Decomposing _every_ question often eventually leads to lower accuracy than decomposing no questions at all, because hallucinations and misleading decompositions can flip an initially correct answer to an incorrect answer.
Figure 3: Pseudocode for selective decomposition.
In Tabs. 3 and 4, we show the highest possible net gain achieved by selective question decomposition on three domains by different decomposers. Selective decomposition consistently improves predictive accuracy regardless of the decomposer and domain. _Net gains are larger on datasets (passes -test with \(\alpha=0.05\)) containing non-naturalistic images and specialized domains (e.g. medical) than they are on domains containing natural images._ The mean optimal surprisal \(I(\tau)\) for second-guessing answers is lower for non-natural domains (\(\mu_{I(\tau)}=13.2\) for the natural image datasets vs \(\mu_{I(\tau)}=9.0\) for the medical and art datasets, confirmed by t-test at \(\alpha=0.5\)). We further visualize this in Fig. 5. _This matches our expectations: you should second guess yourself on domains you understand poorly more than on domains you understand well._ A linear regression fit shows that larger decomposers correlate with larger net gains (\(R^{2}\)=0.365, \(R^{2}\)=0.342 for natural image domains and medical / art domains respectively, t-test with \(\alpha\)=0.05).
We reformulate Winoground as a VQA task by turning each caption into a question with a boolean yes / no answer (does "<caption>" describe the image?) on which chance accuracy is 50%. As visible in Tab. 3, all BLIP-2 models perform below random chance, in agreement with previous results on Winoground showing that it is extremely difficult for vision-language models. Surprisingly, after decompositions produced by the relatively FLAN-T5-small/base models (80M/200M parameters), the performance of BLIP-2 (13B) rises to significantly above chance (+18%). Upon inspection, many of the decompositions produced by the model appear to be gibberish, yet remarkably, induce the much larger 13B BLIP-2 model to correct over 30% of its initially wrong answers.
## 6 Literature Review
Task decomposition [5; 41; 42; 6] improves the performance of large language models on zero-shot reasoning tasks. The only work so far to apply similar techniques for VQA is MM-CoT [43], but it does not explore task decomposition with large vision-language models, choosing to finetune a smaller model instead. The ability to use zero-shot task decompositions may be a property of model scale, emerging at 60-200B parameters [44], or may be a property of large-scale pretraining on code [45]. Such large vision-language models have only been developed recently due to advances in vision-language alignment. The prevailing paradigm in vision-language pretraining was to build vision-language models atop (relatively) small language models [26; 27; 28; 46; 29] below 1B parameters. Meanwhile, language models were being scaled from 3B-175B parameters [47; 48; 49; 50; 34], with each model family having at least one representative with \(>10\)B parameters. Because vision-language
Figure 4: Selective decomposition mitigates the problem of misleading decompositions. We decompose questions based on model confidence in the initial answer, and show how accuracy initial rises past the baseline as the model mostly second guesses wrong answers, and then drops below the baseline with no decompositions (horizontal line) if too many questions initially answered correctly are second-guessed.
Figure 5: Decomposition is more effective on non-natural image domains, and models are also less confident in these domains. Size of circles is proportional to parameter count.
pretraining typically requires full model training, aligning these multi-billion parameter models to the visual modality was prohibitively computationally expensive. However, recent discoveries [51, 52] motivated by earlier work with frozen models [53] have shown that the representation spaces of vision models and large-language models are surprisingly close, and rough alignment can be achieved with adapters [15] or linear mapping layers while keeping the language model frozen, and more advanced techniques have given rise to vision-LLMs [7, 8, 9]. Our work is closely related to the visual question generation paradigm of [11, 54, 55]. However, we direct our question generation to focus on decompositions rather than general questions.
## 7 Conclusion
We show that question decomposition is already a viable strategy that can be used to implement a more natural approach to VQA. Without any training, instruction-tuned VLMs can learn to produce and use decompositions from a few demonstrations of the task. This approach has many possible future directions. For example, we only consider two-step approaches for visual question answering, where we "hardcode" the depth of the decomposition. A natural next step would be to extend the two-step approach to a multi-step approach, which remains unexplored for large vision-language models in an open-world visual question answering setting. Second, in-context learning has limitations. Would models benefit from being trained to produce and consume decompositions?
|
2305.13080 | Mitigating Catastrophic Forgetting for Few-Shot Spoken Word
Classification Through Meta-Learning | We consider the problem of few-shot spoken word classification in a setting
where a model is incrementally introduced to new word classes. This would occur
in a user-defined keyword system where new words can be added as the system is
used. In such a continual learning scenario, a model might start to misclassify
earlier words as newer classes are added, i.e. catastrophic forgetting. To
address this, we propose an extension to model-agnostic meta-learning (MAML):
each inner learning loop, where a model "learns how to learn'' new classes,
ends with a single gradient update using stored templates from all the classes
that the model has already seen (one template per class). We compare this
method to OML (another extension of MAML) in few-shot isolated-word
classification experiments on Google Commands and FACC. Our method consistently
outperforms OML in experiments where the number of shots and the final number
of classes are varied. | Ruan van der Merwe, Herman Kamper | 2023-05-22T14:51:15Z | http://arxiv.org/abs/2305.13080v1 | # Mitigating Catastrophic Forgetting for Few-Shot Spoken Word Classification Through Meta-Learning
###### Abstract
We consider the problem of few-shot spoken word classification in a setting where a model is incrementally introduced to new word classes. This would occur in a user-defined keyword system where new words can be added as the system is used. In such a continual learning scenario, a model might start to misclassify earlier words as newer classes are added, i.e. catastrophic forgetting. To address this, we propose an extension to model-agnostic meta-learning (MAML). In our new approach, each inner learning loop--where a model "learns how to learn" new classes--ends with a single gradient update using stored templates from all the classes that the model has already seen (one template per class). We compare this method to OML (another extension of MAML) in few-shot isolated-word classification experiments on Google Commands and FACC. Our method consistently outperforms OML in experiments where the number of shots and the final number of classes are varied.
Ruan van der Merwe\({}^{1}\) and Herman Kamper\({}^{2}\)\({}^{1}\)ByteFuse
\({}^{2}\)E&E Engineering, Stellenbosch University, South Africa
[email protected], [email protected]
**Index Terms**: continual learning, few-shot learning, spoken word classification, meta-learning.
## 1 Introduction
Imagine a speech system that a user can teach new commands by providing it with just a few examples per word class. To start out with, the user might provide the system with examples of the words "sing", "open" and "close", and with just a handful of support examples, the system should be able to correctly classify new test inputs. (This should work irrespective of the language of the user.) In contrast to conventional speech recognition systems that are trained on thousands of hours of examples, such a system would be _few-shot_. Inspired by the observation that humans can learn new words from very few examples, a number of studies in machine learning have started to look at this problem of few-shot word classification [1, 2, 3].
But now imagine that, as the user is using the system, they want to add more words to the system, e.g. "turn" and "give". As more and more words are added, the system might start to misclassify words that it learned earlier--the problem of catastrophic forgetting [4, 5]. The combination of dynamic environments, limited support examples used for training, and continual learning make this task a major challenge. While other studies have look at the few-shot problem [1, 6], the proposed methods do not deal with the continual learning problem. In this paper we propose a new approach for few-shot continual learning and evaluate it specifically for isolated word classification.
Outside of speech processing, there has been several studies on continual learning, e.g. [7]. Many of these studies try to explicitly address the problem of catastrophic forgetting [8, 9]. Within speech research, there has been some limited attempts to address the continual learning problem, specifically in automatic speech recognition (ASR) [10] and keyword spotting applications [11]. However, these studies do not consider the few-shot learning setting, but rather on adding new vocabulary words to supervised models trained on substantial amounts of labelled data. Within the signal processing community, there has been some studies looking at both few-shot learning and continual updating [12], but this was for general audio and not spoken word classifacation.
In this paper we specifically look at addressing few-shot continual learning by utilising meta-learning techniques, where algorithms learn automatically how to solve the continual learning task [13, 14, 15]. We specifically extend model-agnostic meta-learning (MAML) [16], which is a meta-learning technique that optimises an initial set of model weights such that they can be quickly updated to a new task. MAML has been used before within speech research for speaker adaptive training [17], and data-efficient ASR [18, 19], but not for few-shot continual word learning.
We propose a new approach: MAML for **c**ontinual learning (MAMLCon). This extension over MAML is very simple, but it leads to consistent improvements in few-shot word classification. MAMLCon specifically extends MAML by explicitly doing meta-learning of an increasing number of new classes in the inner loop of the algorithm. At the end of the inner loop, MAMLCon also performs a single update using templates stored for all the classes seen up to that point. Since MAMLCon has learned how to learn continually, it is able to do so efficiently at test time on classes that are completely unseen during meta-learning.
We compare MAMLCon to another continual learning extension of MAML called OML [13]. We perform experiments where we vary the number of shots, the number of steps where classes are added, and the final number of word classes. In all cases the simple MAMLCon extension outperforms OML in isolated word few-shot classification.
## 2 MAMLCon
### Background on MAML
Model-agnostic meta-learning (MAML) [16] is an algorithm that aims to learn an initial set of weights that can be rapidly adapted to new tasks using just a few examples from the target task. Consider the example of one-shot speech classification. We want a model that can learn to classify new words based on a single training example per word class. E.g. we give the model a _support set_1 with a single example for "sing", "open", "close" and then want the model to accurately classify test inputs from
one of these classes. A naive approach would be to start with a randomly initialised model and then simply update its weights through gradient descent directly on this support set. The idea behind MAML is to instead learn good initial weights which can then subsequently be fine-tuned. MAML does this by using a large labelled dataset and then simulating many few-shot classification tasks. Continuing with our example, let's say we have a very large training set of isolated words with their labels (no examples from our few-shot classes). From this training dataset we can sample a meta-support set and a meta-test set, e.g. "hello", "drop", "greetings". In the so-called _inner loop_ of the MAML algorithm, we then update the model weights using a few gradient descent steps on the support set. Instead of storing the resulting weights from these inner-loop updates, MAML optimises the initial weights \(\theta\) on top of which the inner-loop updates are performed. I.e., the _outer loop_ of MAML tries to find a good initialisation for doing a few gradient steps on a handful of examples. The result is weights \(\theta^{*}\) that are optimised so that they work well when a few gradient steps are applied on top of them using a small set of support examples.
More formally, in the inner loop, the model's current weights at step \(j\), \(\theta^{j}_{0}\), are optimised for a given task \(\mathcal{T}_{i}\), resulting in updated weights \(\theta^{j}_{T}\), where \(T\) is the total number of inner-loop update steps. In the outer loop, the performance of the fine-tuned model \(\theta^{j}_{T}\) is evaluated on a meta-test set, and the initial weights \(\theta^{j}_{0}\) are then updated through:
\[\theta^{j+1}_{0}\leftarrow\theta^{j}_{0}-\beta\nabla_{\theta^{j}_{0}}\sum_{ \mathcal{T}_{i}}\mathcal{L}_{\mathcal{T}_{i}}\left(X^{\text{TEST}}_{i},Y^{ \text{TEST}}_{i},\theta^{j}_{T}\right) \tag{1}\]
Here, \(X_{i}\) and \(Y_{i}\) are data points from task \(\mathcal{T}_{i}\), and \(\beta\) is the outer learning rate, with the inner-loop update steps having an inner learning rate \(\alpha\). Updating \(\theta^{j}_{0}\) in this manner leads to optimised weights \(\theta^{*}\) which can be fine-tuned to new tasks in only a few steps. When the inner loop is constrained to only a few examples per class, the algorithm can learn to accomplish the task with a limited number of examples, thus resulting in a few-shot classification model.
To test a model after training it using MAML, we can sample multiple groups of words from our few-shot classes and construct multiple scenarios where you train on a support set and measure on a held-out test set. The optimised model \(\theta^{*}\) is copied to each distinct scenario for training. An example of how these meta-training and -testing scenarios are constructed is shown in Figure 1, where we show just one task in both the training and testing stages. For further reading on meta-learning and MAML, please refer to [20].
### MAMLCon: Learning to Continually Learn
Consider the following example for word classification in a continual learning setting. Let's say at test time a model has received a support set for the words "sing", "open" and "close". We used MAML and updated the model on this support set and it achieves reasonable performance. But now we want the model to additionally be able to classify the words "turn" and "give". We give the model a few more support examples for these new words and update its weights through further fine-tuning. Later on, we want to add even more words by just giving a few examples. The problem is that as we add more and more words, the model would start to fail on words that it learned earlier. This is called catastrophic forgetting.
To address this, we propose a new extension of MAML: **m**odel-**a**gnostic **m**eta-**learning for **c**on**tinual learning (MAMLCon). MAMLCon extends MAML in two ways. First, it formulates the continual learning problem itself as a meta-learning task. Secondly, it utilises a single update step on previously acquired knowledge. The motivation for this step is to optimise the model such that one can use use the smallest possible dataset (one example per class) to maintain performance on previously learned words.
The training process of MAMLCon is shown in Figure 2. As an example, let's say that during training we sample a meta-support set consisting of five examples each for "hello", "drop", "greetings". In MAML we would just fine-tune on all the examples together. Instead, in the inner-loop training phase of MAMLCon, the model is first trained for \(T\) steps on the "hello" examples, followed by \(T\) steps of training on "drop" and then \(T\) steps on "greetings". Once the model has been trained on all examples in the meta-support set, a single batched weight update step is performed using a single stored example for each of the "hello", "drop", "greetings" classes. In the outer loop, the meta-test set, which contains samples for all words in the meta-support set, is used to evaluate the performance of the model, and the original weights are updated to obtain an optimal set of weights \(\theta^{*}\). Because with MAMLCon the model has seen incremental learning during training, these weights are optimised to facilitate few-shot continual learning. This means we can the update the model further on "turn" and "give" and the model would still perform well on "hello", "drop" and "greetings".
To state this formally, in the inner loop, the model's weights, \(\theta^{j}_{0}\), are updated through sequential training on new classes in the meta-support set. The inner-loop optimisation is performed through the calculation of gradients with respect to \(\theta^{j}_{i}\) based on the loss computed on a per-class (or per-group of classes) basis from the meta-support set, leading to the updated weights \(\theta^{j}_{i+1}\). At the end of the inner loop, a single weight update is performed on a previously seen template from each class, enabling the model to leverage its prior knowledge. In Figure 2,
Figure 1: _During training, MAML samples meta-support and -test sets from labelled data. At test time, it is then presented with a support set containing classes never seen during training, and asked to classify test items from these classes._
Figure 2: _The MAMLCon training process. We construct the continual learning setup directly as a meta-task, where the algorithm is tasked to learn how to perform well in continual learning setup while being allowed to observe one already seen example from previously learned word groups and update its weights with one update step._
this set of templates is denoted with a dash, \(\{X^{i}_{1:3},Y^{i}_{1:3}\}\). The outer loop computes the loss on the meta-test set and applies the meta-update step to the original weights to obtain \(\theta^{j+1}_{0}\). The update is performed based on the gradient of the test loss with respect to \(\theta^{j}_{0}\), as in Equation 1.
At test time, MAMLCon is used by just following the inner loop. Every time that classes are added, \(T\) update steps are followed with one update step on a set of templates for all classes learned up to that point. This means that in a real-life use case, we will just have to store a single example per class to act as templates in future updates.
Our method is most similar to online aware meta-learning (OML) [13]. The OML classifier consists of a feature extractor with weights \(\theta_{\text{RE}}\) that feeds into a prediction network with weights \(\theta_{\text{PS}}\). In OML's the inner loop, they sample \(N\) classes to train on sequentially but only update \(\theta_{\text{PN}}\), leading to \(\theta^{*}_{\text{PN}}\). After training these \(N\) classes, they sample a random batch of data and measure the meta-test loss on this batch. They then back-propogate through this entire process to update \(\theta_{\text{FE}}\) and \(\theta_{\text{PN}}\). Our method differs from OML in several ways. Firstly, in the inner loop, we update the entire network and not just the prediction network. Secondly, we allow the model to access a single example of a previously seen class during the inner-loop training phase. Finally, in contrast to OML, we do not perform the meta-test on a random sample of classes, but instead on all classes seen up to that point.
## 3 Experimental Setup
**Data.** We perform word classification experiments using the Flickr 8k Audio Caption Corpus (FACC) [21] and the Google Commands v2 dataset [22]. For the experiments on FACC, utterances are segmented into isolated words using forced alignments, and words with the same stem are grouped into a single class. Both the FACC and Google Commands datasets are split so that words with the same stem will not appear in the training and test sets. For FACC, this results in approximately 100 unique stems that can be sampled for continual learning, while there are 10 unique stems for Google Commands. We divide these stems randomly into our test and train splits. Between epochs in meta-learning, the same word class will be assigned a different integer label so that the model is not able memorise a particular word in the meta-learned weights.
**Models.** All words are parameterised as mel-frequency cepstral coefficients (MFCCs) with delta and delta-delta features. Input items are zero-padded to a consistent length. A simple 3-layer 2D convolutional neural network is applied to extract features from the MFCCs, which are then fed into a single fully connected layer that is trained to classify the given words. We use the same architecture for OML. The Adam optimiser [23] is used for both inner and outer loop updates, with a learning rate of \(0.001\) for the inner loop and \(0.0001\) for the outer loop.
In all the experiments below we start with a set of initial words, and then incrementally add more word classes. For the initial set of words being learned by the model, we perform \(T=30\) weight updates to ensure saturation of the model to simulate the scenario in the real world of having a well-trained model and subsequently updating it. After this, for each new group of classes added to the model, \(T=5\) update steps are performed. In the quick adaptation step on the templates at the end of the inner loop, a single example per class is sampled from the support set and a single update is performed. We use the first-order MAML algorithm [16], which ignores the meta-learning process's second-order derivatives; this doesn't affect performance while speeding up computation and reducing memory requirements [16, 24]. We adapt the Learn2Learn software package [25] for training both OML and MAMLCon.2
Footnote 2: Source code: [https://github.com/ByteFuse/MAMLCon](https://github.com/ByteFuse/MAMLCon)
**Evaluation.** We consider different continual learning scenarios. All start with an initial set of few-shot learned word classes: this number of initial classes are denoted as **CS**. We then incrementally introduce a number of additional word types (**CA**) at every update step. The final number of word types is denoted as \(N\). An experiment can then be summarised using a succinct notation: e.g. \(N50\)**:CS5:CA5** would represent a a scenario in which the model ends with a total of 50 word classes, with each iteration incorporating five new words after initially training on five words.
## 4 Experiments
We compare MAMLCon to OML for few-shot word classification in a range of continual learning experiments. We do not evaluate MAML in isolation, as it has been surpassed in performance by OML and other recent advancements [13, 14].
### Frequent vs Infrequent Updates
A good continual learning algorithm should perform well in scenarios where we add many words at every update step (therefore requiring fewer updates to reach the final number of types \(N\)) as well as scenarios where a small number of words are added at every update (requiring more frequent updates to reach \(N\)). We compare MAMLCon to OML in both these scenarios, referred to, respectively, as infrequent and frequent updates. For infrequent updates we consider these setups: \(N5\)**:CS1**:CA3**, \(N10\)**:CS2**:CA5** and \(N50\)**:CS5**:CA2**. For frequent updates we consider \(N5\)**:CS1**:CA1**, \(N10\)**:CS2**:CA1** and \(N50\)**:CS5**:CA5**. All setups here use \(K=5\) shots (we vary this in the section below).
The results are shown in Table 1, where \(N\) is used to identify the particular learning scenario. By looking at the infrequent update scenario, we observe that MAMLCon achieves high accuracies in both smaller (\(N=\{5,10\}\)) and larger class scenarios (\(N=50\)). In contrast, OML struggles particularly when more classes need to be learned: this can be seen when looking at the sharp drop in accuracy between the results for the FACC dataset in the \(N=10\) and \(N=50\) cases, and on the Google Commands dataset when going from \(N=5\)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Google Commands} & \multicolumn{2}{c}{FACC} \\ \cline{2-5} \(N\) final classes & 5 & 10 & 10 & 50 \\ \hline _Infrequent updates:_ & & & & \\ \hline OML & 62.1 & 49.9 & 72.8 & 32.3 \\ MAMLCon & **85.2** & **73.6** & **86.7** & **74.5** \\ _Frequent updates:_ & & & & \\ \hline OML & 61.8 & 36.5 & 75.1 & 51.8 \\ MAMLCon & **82.7** & **72.9** & **76.8** & **71.7** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Few-shot classification accuracy (%) over all \(N\) classes for continual learning settings where a small number of classes are added frequently, or a large number of classes are added infrequently. \(N\) is the final number of classes after continual learning.
to \(N=10\). A similar pattern emerges in the frequent update scenario, where we see that OML shows large drops in accuracy when learning more classes: a particularly large drop is observed on Google Commands when going from \(N=5\) to \(N=10\). Overall, the results demonstrate the superior performance of MAMLCon over OML in both frequent and infrequent update scenarios.
### Few-shot Capabilities
The number of support examples a model can use for learning a new word would depend on the specific practical setting: in some cases we would have only one example per class, while in other cases we could get substantially more. Here we assess the performance of MAMLCon as the number \(K\) of support examples (the number of "shots") are varied. We investigate how well MAMLCon operates under these different conditions to gain a better understanding of its capabilities.
Concretely, we present the performance for continual learning setups of \(N50\):**CS5**:**CA5 when evaluating on the FACC dataset and \(N10\):**CS2**:**CA1 when evaluating on the Google Commands dataset over different values of \(K\). These setups were chosen as they represent the most challenging scenarios, requiring multiple weight updates between the initial and final classes.
As seen in Figure 3, when focusing solely on the results for the FACC dataset, as \(K\) increases from \(1\) to \(20\), the overall performance improves as expected, with only a small increase in performance between \(K=5\) and \(K=20\). However, as \(K\) continues to increase, performance decreases. This pattern is also evident in the Google Commands results.
It is encouraging that MAMLCon still performs well with a small number of shots, but it is also somewhat surprising and concerning that as \(K\) increases, performance starts to deteriorate. This relationship between accuracy and the number of training examples in Figure 3 can be explained by the trade-off between sample complexity and catastrophic forgetting. We speculate that a moderate value of \(K\), in the range of 20, is sufficient to acquire a robust representation of the task at hand, which is to learn a new word. However, as \(K\) increases beyond this point, the weight updates for the new classes may become excessive, resulting in the model forgetting previously learned information.
### Retention of Knowledge
In the preceding sections we looked at performance across all words after a few-shot system has been trained in a continual learning setting. But how does performance differ between words that are learned earlier relative to words added later in the continual learning cycle? To answer this, we look at the performance of individual words. This allows us to determine how well the model performs on previous classes and how well it retains the knowledge about those words after being trained on new words.
Table 2 shows the results of MAMLCon, OML and a model which was not pre-trainedon the FACC dataset. We use a \(N50\):**CS5**:**CA5 setup, with \(K=20\). This means that there will be ten update steps, with five word classes being added each time. The performance for the words learned in the very first group are given in the row with the 1-5 label, while the words learned in the very last update are given in the 46-50 row. The accuracy after initial training (S) and the final training (E) for each label group is displayed, along with the difference (\(\Delta\)) between these two accuracy scores.
MAMLCon again outperforms OML in terms of overall accuracy, achieving 77.0% accuracy versus the 64.5% of OML. Looking at individual words, MAMLCon is effective in retaining its knowledge of early label groups (1-30) while struggling more to maintain its accuracy over the later label groups. Conversely, OML performs better in retaining knowledge over later label groups, but shows low accuracy for the early groups.
## 5 Conclusion
We proposed a novel few-shot continual learning algorithm: model-agnostic meta-learning for continual learning (MAMLCon). It is an extension of MAML that formulates the few-shot continual learning task as a meta-task, allowing the weights to be updated only once by a previously seen word example upon completion of training on new words. We compared MAMLCon to OML, a previous meta-learning algorithm for continual learning. The findings show that MAMLCon outperforms OML in overall accuracy across two datasets and label distribution sizes under both infrequent and frequent update scenarios. Furthermore, our results indicate that MAMLCon effectively maintains knowledge of early label groups while showing more difficulty retaining knowledge of later groups. Nonetheless, it achieves a higher overall accuracy.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{MAMLCon} & \multicolumn{2}{c}{OML} & \multicolumn{2}{c}{No Pre-Training} \\ \cline{2-7} Labels & S/E & \(\Delta\) & S/E & \(\Delta\) & S/E & \(\Delta\) \\ \hline
1-5 & 95/95 & 0 & 100/35 & -65 & 90/20 & -70 \\
6-10 & 100/95 & -5 & 85/5 & -80 & 85/50 & -35 \\
11-15 & 90/85 & -5 & 100/70 & -30 & 90/25 & -65 \\
16-20 & 95/70 & -25 & 90/75 & -25 & 100/40 & -60 \\
21-25 & 95/80 & -15 & 75/65 & -10 & 60/10 & -50 \\
26-30 & 95/70 & -25 & 100/95 & -5 & 100/40 & -60 \\
31-35 & 95/50 & -45 & 80/55 & -25 & 85/20 & -65 \\
36-40 & 80/70 & -10 & 75/80 & 5 & 90/0 & -90 \\
41-45 & 85/60 & -25 & 90/75 & -15 & 75/60 & -15 \\
46-50 & -95 & - & -90 & - & -65 & - \\ \hline Accuracy & 77.0 & & 64.5 & & 33.0 & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of knowledge retention capabilities in continual learning models on the FACC dataset. We measure the accuracy for each label group as it was trained, as well as at the end of training after all words have been learned by the model. We then show the the difference (\(\Delta\)) between the start (S) and end (E) accuracies. The final accuracy when taking all labels into account is also shown.
Figure 3: Few-shot classification accuracy (%) of MAMLCon as the number of shots \(K\) per class is varied. |
2301.01173 | Message Passing-Based 9-D Cooperative Localization and Navigation with
Embedded Particle Flow | Cooperative localization (CL) is an important technology for innovative
services such as location-aware communication networks, modern convenience, and
public safety. We consider wireless networks with mobile agents that aim to
localize themselves by performing pairwise measurements amongst agents and
exchanging their location information. Belief propagation (BP) is a
state-of-the-art Bayesian method for CL. In CL, particle-based implementations
of BP often are employed that can cope with non-linear measurement models and
state dynamics. However, particle-based BP algorithms are known to suffer from
particle degeneracy in large and dense networks of mobile agents with
high-dimensional states.
This paper derives the messages of BP for CL by means of particle flow,
leading to the development of a distributed particle-based message-passing
algorithm which avoids particle degeneracy. Our combined particle flow-based BP
approach allows the calculation of highly accurate proposal distributions for
agent states with a minimal number of particles. It outperforms conventional
particle-based BP algorithms in terms of accuracy and runtime. Furthermore, we
compare the proposed method to a centralized particle flow-based
implementation, known as the exact Daum-Huang filter, and to sigma point BP in
terms of position accuracy, runtime, and memory requirement versus the network
size. We further contrast all methods to the theoretical performance limit
provided by the posterior Cram\'er-Rao lower bound (PCRLB). Based on three
different scenarios, we demonstrate the superiority of the proposed method. | Lukas Wielandner, Erik Leitinger, Florian Meyer, Klaus Witrisal | 2023-01-03T16:01:43Z | http://arxiv.org/abs/2301.01173v1 | # Message Passing-Based 9-D Cooperative Localization and Navigation with Embedded Particle Flow
###### Abstract
Cooperative localization (CL) is an important technology for innovative services such as location-aware communication networks, modern convenience, and public safety. We consider wireless networks with mobile agents that aim to localize themselves by performing pairwise measurements amongst agents and exchanging their location information. Belief propagation (BP) is a state-of-the-art Bayesian method for CL. In CL, particle-based implementations of BP often are employed that can cope with non-linear measurement models and state dynamics. However, particle-based BP algorithms are known to suffer from particle degeneracy in large and dense networks of mobile agents with high-dimensional states.
This paper derives the messages of BP for CL by means of particle flow, leading to the development of a distributed particle-based message-passing algorithm which avoids particle degeneracy. Our combined particle flow-based BP approach allows the calculation of highly accurate proposal distributions for agent states with a minimal number of particles. It outperforms conventional particle-based BP algorithms in terms of accuracy and runtime. Furthermore, we compare the proposed method to a centralized particle flow-based implementation, known as the exact Daum-Huang filter, and to sigma point BP in terms of position accuracy, runtime, and memory requirement versus the network size. We further contrast all methods to the theoretical performance limit provided by the posterior Cramer-Rao lower bound (PCRLB). Based on three different scenarios, we demonstrate the superiority of the proposed method.
## I Introduction
Location awareness is crucial for various applications, such as Internet-of-Things, autonomous navigation, or public safety [1, 2, 3, 4]. Cooperative localization (CL) methods aim to estimate the locations of agents in a wireless sensor network, where agents can communicate among their neighbors and exchange information about their position [5, 6, 7, 8, 9]. This leads to an improvement of the positioning accuracy as well as an increasing localizability [10] while preventing the use of high-density anchor deployment as needed for non-CL [11, 12, 13, 14, 5, 6]. In fact, the anchor infrastructure can be fully avoided when using multipath channel information contained in radio-signals [9]. Due to the increased localizability, CL is more robust than non-CL since more information in the network can be used. This increased robustness is especially useful for scenarios with very uninformative measurement models such as RSS based localization [15, 16, 13, 17]. CL algorithms are scalable and can be implemented in a distributed manner, which makes them particularly useful for large-scale networks [18, 19, 20]. A further crucial aspect of CL is to track high-dimensional agent states accurately. This paper proposes a new method for this purpose where different state-of-the-art algorithms fail as described in the following.
### _State-of-the-Art_
Promising methods for CL are based on the framework of factor graphs (FGs) and message-passing (MP) calculations, which can be categorized into mean-field message-passing-based methods [21, 22] and belief propagation (BP)-based methods [16, 18, 20, 23, 24, 25, 26]. In particular, BP-based methods are known to provide accurate solutions to high-dimensional Bayesian estimation problems efficiently by executing message-passing on a cyclic FG. The sum-product rule is used to compute approximations ("beliefs") of the marginal posterior probability density functions (PDFs) of agent positions [18, 20]. BP-based methods are very flexible and have been successfully applied to many diverse applications as for example radio signal-based simultaneous localization and mapping (SLAM) [27, 28, 29], multiobject tracking [30, 31, 32], and cooperative multiobject tracking [33]. Their excellent scalability and distributed nature make BP-based algorithms a
Fig. 1: Visualization of the particle flow (dash-dotted green lines) of two cooperating agents in the vicinity of three anchors. Each agent has only connections to two anchors (grey circles) indicated by the multimodal PDF of the agent positions (color map).
powerful tool for CL on large-scale networks [18, 19, 20]. BP-based methods are categorized into parametric BP algorithms [25] and non-parametric BP algorithms [23]. Since the measurement models are usually non-linear and the calculations of the messages and beliefs cannot be evaluated in closed form, it is common to use non-parametric BP algorithms, resorting to conventional bootstrap particle-based implementations [16, 20]. A common drawback of such methods is the curse of dimensionality, a known problem of sample-based estimation in high dimensions, and the presence of informative measurements. The curse of dimensionality can lead to particle collapse, also known as particle degeneracy [34]. It can often only be avoided by using an infeasible number of particles to represent the state accurately. Since the required memory and the computational demand are proportional to the number of particles, new strategies need to be developed for online estimation. A common approach to avoid particle degeneracy is to design an accurate proposal distribution or to make use of regularization [35, 20, 36]. For the former, we have to address the problem of how to design accurate proposal distributions. Furthermore, regularization has to be treated very carefully since it can introduce biases if not correctly chosen.
Recently, particle flow (PF) [37, 38, 39, 40, 41, 42] was suggested for estimation in nonlinear systems with high-dimensional states and highly informative likelihood models. It is shown that the resulting PF particle filter is asymptotically optimal for nonlinear estimation problems and avoids particle degeneracy even for a relatively small number of particles. PF particle filters are successfully applied to multi-sensor localization [43] and BP-based multi-target tracking [44] with the benefit that a significantly smaller number of particles are needed compared to bootstrap particle-based implementations. The main disadvantage of those methods is that they perform estimation based on the joint state. This increases the computational complexity excessively. Furthermore, some particle flow-based algorithms have an inherently large complexity, which provides an additional scaling by the number of used particles, for example, the localized EDH (LEDH) filter given in [43] or the stochastic flow described in [40]. This makes it unattractive for large networks and does not allow for a distributed implementation.
### _Contributions and Organization of the Paper_
This paper introduces a hybrid particle-based PF-BP message-passing algorithm for CL of mobile agents with 9-D states (three-dimensional position, velocity, and acceleration state vectors) and very informative measurement models. In this scenario, bootstrap particle-based BP methods that draw samples from predicted agent beliefs fail since an infeasible large number of particles is needed to represent the belief of agents accurately. Our approach avoids particle degeneracy using invertible PF [43] to compute BP messages. Invertible PF enables the migration of particles towards regions of high probability, leading to an accurate approximation of BP messages with a relatively small number of particles. Therefore, the proposed algorithm combines the computational efficiency and scalability of BP methods with the benefits of the PF method. The proposed algorithm exploits the factorization structure of the cooperative localization problem. This leads to an inherent reduction of the number of dimensions per calculation, which also counteracts the particle degeneration problem and allows for a distributed implementation. As an example, Figure 1 shows the particle flow of two cooperating agents, which are in the vicinity of three anchors. Since each agent has only connections to two anchors, the PDFs of the agent positions are multimodal. After considering the cooperative measurement, the particles flow to the "correct mode" of the posterior PDF, representing the "true" distribution of the agent positions.
Numerical simulations demonstrate that the proposed PF-BP algorithm can significantly outperform a conventional bootstrap particle-based BP algorithm using sampling-importance-resampling (abbreviated with SIR-BP) [20], a sigma point BP (SP-BP) algorithm [25], and a particle-based exact Daum-Huang (EDH) filter (with a stacked state vector containing all agent state vectors) [43] in terms of position accuracy. The results show that the proposed algorithm is Bayes-optimal in that it reaches the posterior Cramer-Rao lower bound (PCRLB) [45, 5], which can also be expressed in the framework of the equivalent Fisher information matrix [6, 7, 8]. The proposed algorithm has much lower memory requirements than the SIR-BP algorithm since it needs a significantly smaller number of particles for the same level of position accuracy. The particle-based EDH filter calculates the matrix inversions and multiplications for the stacked state vector containing all agent states. Therefore, the memory requirements are also in favor of the proposed algorithm for the same number of particles. This is due to the fact that using PF-BP, the matrix inversions and multiplications reduce to the dimensions of a subset of the joint agent state. The key contributions of this paper can be summarized as follows.
* We develop a distributed particle-based message-passing method for the CL of dynamic agents that computes BP messages using PF.
* We compare the proposed PF-BP method to state-of-the-art CL methods and demonstrate its superiority in terms of accuracy, runtime, and communication overhead.
* We demonstrate numerically that the proposed PF-BP method for CL can reach the PCRLB if the agents are localizable.
* We comprehensively analyze the investigated methods and highlight their benefits depending on different scenarios and applications.
In this work, we do not consider uncertainties beyond Gaussian noise, like missed detections, clutter/false alarm measurements, and data association uncertainty of measurements [31, 33, 46, 47]. This paper focuses on dynamic networks. The behavior of static networks can be analyzed by considering a single time step of the statistical model. This paper advances over the preliminary account of our method provided in the conference publication [48] by (i) also considering the uncertainties of cooperating neighbor agents in the PF-BP belief update equations, (ii) a detailed description of the proposed algorithm, (iii) an extension to higher state dimensions, (iv) a comprehensive comparison to established state-of-the-art algorithms and to the theoretical performance limit in terms
of the PCRLB. The remainder of this paper is organized as follows. Section II introduces the system and measurement model. We state the problem formulation in Section III. In Section IV, we provide a review of PF. In Section V, we describe the message-passing framework and explain the proposed method. The results of numerical experiments are reported in Section VI. Section VII concludes the paper.
_Notation:_ Column vectors are denoted by boldface lower-case letters and matrices in boldface uppercase letters. Random variables are indicated with sans serif, upright fonts and their realizations in serif, italic fonts as, for example, \(\mathbf{x}\) and \(\mathbf{x}\) and its respective realization as \(\mathbf{x}\) and \(x\). We define the PDF of a continuous random variable as \(f(\mathbf{x})\). For a vector \(\mathbf{x}\), we indicate its transpose by \(\mathbf{x}^{\mathsf{T}}\) and the Euclidean norm by \(\|\mathbf{x}\|\). The mean value of a vector is denoted as \(\overline{\mathbf{x}}\). We will also use this notation to indicate the sample-based mean value and the minimum mean-square error (MMSE) estimate. The cardinality of a set \(\mathcal{C}\) is defined as \(|\mathcal{C}|\). Furthermore, we use the notation \(\mathcal{C}\backslash\{i\}\) to indicate the exclusion of member \(\{i\}\) from the set \(\mathcal{C}\). The notation \(\mathbf{A}\otimes\mathbf{B}\) denotes the Kronecker product between matrix \(\mathbf{A}\) and \(\mathbf{B}\), whereas \(\odot\) indicates the Hadamard product. diag(\(\cdot\)) stands for a diagonal matrix or a block diagonal matrix with elements on the main diagonal given by the elements or matrices in brackets, respectively. \(\mathbf{I}_{m}\) is an identity matrix of dimensions \(m\). \([\mathbf{X}]_{k:l,m:n}\) denotes a submatrix of \(\mathbf{X}\) containing \(k\) to \(l\) rows and \(m\) to \(n\) columns. The notation \([\mathbf{x}]_{k:l}\) denotes a subvector of \(\mathbf{x}\) containing \(k\) to \(l\) elements. The time step \(k\) is indicated by a superscript \({}^{(k)}\) whereas the \(u\)th message passing iteration with \({}^{[u]}\). \(\nabla_{\mathbf{x}}\) indicates the Nabla operator with respect to \(\mathbf{x}^{(k)}\).
## II System Model
We consider a set of agents \(\mathcal{C}\) and a set of anchors \(\mathcal{A}\). The state of the agents is unknown, whereas the state of the anchors is exactly known. The number of agents and anchors is indicated by the cardinality of \(\mathcal{C}\) and \(\mathcal{A}\), respectively. We define two types of measurements: (i) measurements between agents and anchors \(\mathbf{z}_{i,a}^{(k)}\) at time step \(k\) with \(i\in\mathcal{C}\) and \(a\in\mathcal{A}_{i}^{(k)}\) where \(\mathcal{A}_{i}^{(k)}\subseteq\mathcal{A}\) is the set of anchors that perform measurements to agent \(i\) at time \(k\) and (ii) measurements in-between agents \(\mathbf{z}_{i,j}^{(k)}\) with \(i\in\mathcal{C}\) and \(j\in\mathcal{D}_{i}^{(k)}\) where \(\mathcal{D}_{i}^{(k)}\subseteq\mathcal{C}\backslash\{i\}\) is the set of agents that cooperate with agent \(i\) at time \(k\). The stacked vector of all measurements for all time steps is written as \(\mathbf{z}=[\mathbf{z}_{i,l}^{(1:K)}]_{i\in\mathcal{C},l\in\mathcal{A}_{i}^{(1:K) }\cup\mathcal{D}_{i}^{(1:K)}}\) with \(K\) being the total number of time steps. Each anchor has a fixed position which does not vary with time. The state of the \(i\)-th agent at time step \(k\) is denoted as \(\mathbf{x}_{i}^{(k)}=[\mathbf{p}_{i}^{(k)\mathsf{T}}\ \mathbf{v}_{i}^{(k)\mathsf{T}}\ \mathbf{a}_{i}^{(k) \mathsf{T}}]^{\mathsf{T}}\in\mathbb{R}^{9\times 1}\), where \(\mathbf{p}_{i}^{(k)}\in\mathbb{R}^{3\times 1}\), \(\mathbf{v}_{i}^{(k)}\in\mathbb{R}^{3\times 1}\), \(\mathbf{a}_{i}^{(k)}\in\mathbb{R}^{3\times 1}\) are, respectively, the position, velocity, and acceleration vectors. Thus, the number of dimensions per agent state is \(N_{\mathrm{D}}=9\). We define the joint state of agent \(i\) for all time steps as \(\mathbf{x}_{i}^{(1:K)}=[\mathbf{x}_{i}^{(1)\mathsf{T}}\ \ldots\ \mathbf{x}_{i}^{(K)\mathsf{T}}]^{ \mathsf{T}}\). The states of the anchors are time-independent and assumed to be known. We write the state of the \(a\)-th anchor as \(\mathbf{\mathrm{x}}_{a}=[\mathbf{p}_{x_{a}}\ \mathbf{\mathrm{p}_{y_{a}}}\ \mathbf{ \mathrm{p}_{z_{a}}}]^{\mathsf{T}}\in\mathbb{R}^{3\times 1}\). The vector \(\mathbf{\mathrm{x}}\) denotes the stacked vector of all agent and anchor states for all time steps. It is defined as \(\mathbf{\mathrm{x}}=[\mathbf{\mathrm{x}}_{1}^{(1:K)\mathsf{T}}\ \ldots\ \mathbf{\mathrm{x}}_{ \mathcal{C}}^{(1:K)\mathsf{T}},\mathbf{\mathrm{x}}_{\mathcal{C}|+1}^{(K)}\ \ldots\ \mathbf{\mathrm{x}}_{ \mathcal{C}|+|\mathcal{A}|}^{\mathsf{T}}]^{\mathsf{T}}\). The \(i\)-th agent state \(\mathbf{x}_{i}^{(k)}\) is assumed to evolve according to a constant acceleration model given by
\[\mathbf{x}_{i}^{(k)}=\mathbf{F}\mathbf{x}_{i}^{(k-1)}+\mathbf{G}\mathbf{u}^{(k-1)} \tag{1}\]
with the state transition matrix \(\mathbf{F}\in\mathbb{R}^{9\times 9}\) and the matrix \(\mathbf{G}\in\mathbb{R}^{9\times 3}\) relating the state noise to the state variables. The state noise vector \(\mathbf{u}^{(k)}\in\mathbb{R}^{3\times 1}\) is an independent and identically distributed (iid) sequence of 3-D Gaussian random vectors with standard deviation \(\sigma_{a}\). The matrices are given as
\[\mathbf{F}=\left[\begin{array}{ccc}1&\Delta T&\frac{(\Delta T)^{2}}{2}\\ 0&1&\Delta T\\ 0&0&1\end{array}\right]\otimes\mathbf{I}_{3} \tag{2}\]
and
\[\mathbf{G}=\left[\begin{array}{c}\frac{(\Delta T)^{2}}{2}\\ \vec{\Delta T}\\ 1\end{array}\right]\otimes\mathbf{I}_{3}\,. \tag{3}\]
Given the motion model, we can define the state transition probability and define the joint prior PDF for all agent states up to time \(K\) using common statistical independence assumptions [18, 20] as
\[f(\mathbf{x}^{(1:K)})=\prod_{k=1}^{K}\prod_{i\in\mathcal{C}}f(\mathbf{x}_{i}^{(0)})f( \mathbf{x}_{i}^{(k)}|\mathbf{x}_{i}^{(k-1)})\,. \tag{4}\]
The joint posterior PDF up to time \(K\) is given as
\[f(\mathbf{x}^{(1:K)}|\mathbf{x}^{(1:K)})\propto f(\mathbf{z}^{(1:K)}|\mathbf{x}^{(1:K)})f(\mathbf{x}^{(1 :K)}). \tag{5}\]
By assuming that measurements between nodes and time steps are independent of each other [18, 20], we can factorize the joint likelihood function as
\[f(\mathbf{z}^{(1:K)}|\mathbf{x}^{(1:K)})= \prod_{k=1}^{K}\prod_{i\in\mathcal{C}}\prod_{a\in\mathcal{A}_{i}^ {(k)}}f(z_{i,a}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{a})\] \[\times\prod_{j\in\mathcal{D}_{i}^{(k)}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!f(z _{i,j}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{j}^{(k)}). \tag{6}\]
The joint posterior PDF can now be written in terms of its factorization by plugging (4) and (6) into (5), which results in
\[f(\mathbf{x}^{(1:K)}|\mathbf{z}^{(1:K)})\] \[\propto\ \prod_{k=1}^{K}\prod_{i\in\mathcal{C}}\!\!f(\mathbf{x}_{i}^{(0)})f( \mathbf{x}_{i}^{(k)}|\mathbf{x}_{i}^{(k-1)})\] \[\times\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
## III Problem Formulation
We aim to estimate mobile agent states \(\mathbf{x}_{i}^{(k)}\) cooperatively. Our Bayesian approach determines the marginal posterior PDF \(f(\mathbf{x}_{i}^{(k)}|\mathbf{z}^{(1:k)})\) based on all measurements \(\mathbf{z}^{(1:k)}\) up to time \(k\). Estimates of the agent state \(\mathbf{x}_{i}^{(k)}\) are obtained by the minimum mean-square error (MMSE) estimator [49, Ch. 4] given by
\[\mathbf{\overline{x}}_{i}^{(k)}=\int\mathbf{x}_{i}^{(k)}f(\mathbf{x}_{i}^{(k)}|\mathbf{z}^{(1 :k)})\text{d}\mathbf{x}_{i}^{(k)}. \tag{9}\]
Since direct marginalization of the joint posterior in (7) typically cannot be evaluated in closed form, usually bootstrap particle-based BP [50, 51] implementations are chosen to approximate the marginal PDFs. This conventional particle-based implementation suffers from particle degeneracy [34] when agent states are high-dimensional, or measurements are very informative. Particle degeneracy leads to a "wrong" representation of agent beliefs that deteriorates the convergence behavior and performance of the particle-based BP algorithms. To overcome this issue, we propose a hybrid PF-BP algorithm. Before the proposed algorithm is introduced, a short review of the PF method is presented.
## IV Review of Particle Flow
In the case of a nonlinear measurement model as in (8), the posterior PDF \(f(\mathbf{x}|\mathbf{z})\propto f(\mathbf{z}|\mathbf{x})f(\mathbf{z})\) is often approximated by a set of weighted samples \(\{w^{m},\mathbf{x}^{m}\}_{m=1}^{M}\) with \(\sum_{m=1}^{M}w^{m}\!=\!1\) and the number of samples \(M\). They are calculated based on the importance sampling principle [36] as
\[w^{m}\propto\frac{f(\mathbf{z}|\mathbf{x}^{m})f(\mathbf{x}^{m})}{q(\mathbf{x}^{m}|\mathbf{z})} \tag{10}\]
with the proposal PDF \(q(\mathbf{x}|\mathbf{z})\), from which the set of particles \(\{\mathbf{x}^{m}\}_{m=1}^{M}\) is drawn. The only restriction to the proposal PDF is that it has to have the same support as the posterior PDF and heavier-tails [52], i.e., it is less informative. Otherwise, it can be arbitrary. Importance sampling can provide an arbitrarily good approximation of the posterior PDF by choosing \(M\) sufficiently large. Even though importance sampling is asymptotically optimal, if \(q(\mathbf{x}|\mathbf{z})\) is correctly chosen, it is often infeasible to implement due to the large number of particles required for correct state estimation in high-dimensions.
### _Derivation of the PF Equation_
Particle flow is an approach that migrates particles from the prior PDF to the posterior PDF by solving a partial differential equation [37, 38, 40, 43, 53]. The particle flow is described by making use of the homotopy property and the Fokker-Planck equation (FPE) [54]. The FPE is used to find a flow of particles that is equivalent to the flow of the probability density according to the log-homotopy function for the joint state \(\mathbf{x}^{(k)}\) at time \(k\). The log-homotopy function is given by [37, 43]
\[\text{log}f(\mathbf{x}^{(k)};\lambda) =\text{log}f(\mathbf{x}^{(k)}|\mathbf{x}^{(k-1)})\] \[\quad+\lambda\text{log}f(\mathbf{z}^{(k)}|\mathbf{x}^{(k)})-\text{log}Z(\lambda) \tag{11}\]
where \(\lambda\in[0,1]\) is the pseudo time of the flow process, \(f(\mathbf{x}^{(k)};\lambda)\) is the pseudo posterior during the flow process at time \(\lambda\), and \(Z(\lambda)\) is the evidence. We want to mention that \(Z(\lambda=0)=1\). The log-homotopy function describes a continuous and smooth deformation of the distribution starting from the prior PDF \(f(\mathbf{x}^{(k)}|\mathbf{x}^{(k-1)})\), i.e., \(\text{log}f(\mathbf{x}^{(k)};0)=\text{log}f(\mathbf{x}^{(k)}|\mathbf{x}^{(k-1)})\) to finally result in the posterior PDF \(\text{log}f(\mathbf{x}^{(k)};1)\propto\text{log}f(\mathbf{x}^{(k)}|\mathbf{x}^{(k-1)})+ \text{log}f(\mathbf{z}^{(k)}|\mathbf{x}^{(k)})\).
It is assumed that the flow follows a stochastic differential equation of the form of [37, 38]
\[d\mathbf{x}^{(k)}=\mathbf{\zeta}(\mathbf{x}^{(k)},\lambda)d\lambda+dw. \tag{12}\]
A detailed derivation of the flow equations can be found in Appendix A.
### _Exact Daum-Huang (EDH) Filter_
This filter estimates the joint agent state \(\mathbf{x}^{(k)}\) for each time step \(k\). We review it since it will be a reference method and a fundamental cornerstone of our proposed approach.
An analytic solution for \(\mathbf{\zeta}(\mathbf{x}^{(k)},\lambda)\) in (39), given in Appendix A, can be found for Gaussian distributions [37], resulting in the EDH filter [37, 43]. To satisfy these conditions, we approximate the prior PDF as Gaussian distributed where \(\mathbf{R}^{(k)}\) and \(\mathbf{P}^{(k|k-1)}\) are the measurement noise covariance matrix and the predicted covariance matrix of the joint state at time \(k\), respectively. The solution for \(\mathbf{\zeta}(\mathbf{x}^{(k)},\lambda)\), according to the EDH filter, is given by [53]
\[\mathbf{\zeta}(\mathbf{x}^{(k)},\lambda)=\mathbf{A}_{\lambda}^{(k)}\mathbf{x}^{(k)}+\mathbf{c}_{ \lambda}^{(k)}. \tag{13}\]
A detailed description of the EHD filter and its implementation can be found in Appendix B, providing also the solution for (12). We would like to point out that the EDH in this form can only be implemented in a centralized manner.
## V Message Passing Algorithms and Proposed Method
In a Bayesian framework, we estimate the position of each agent based on the marginal posterior PDFs. Since a direct marginalization of the joint posterior (7) is often infeasible, we perform message passing (MP) by means of the sum-product-algorithm rules on the factor graph that represents our statistical model. This so-called "belief propagation (BP)" yields approximations ("beliefs") of the marginal posterior PDFs in an efficient way [50, 51]. It gives the exact marginal PDFs for a tree-like graph but provides only an approximate marginalization if the underlying factor graph has cycles [50]. In this case, the BP message passing becomes iterative, and there exist different orders in which the messages can be calculated. We have chosen that in each iteration, the beliefs of all agents \(i\in\mathcal{C}\) are updated in parallel. In the following section, we derive the MP scheme based on the factor graph in Figure 2. In Section V-B, we shortly present the standard particle-based implementation of BP, whereas in Section V-C, we state the proposed method based on the same MP scheme.
### _BP Message Passing_
Based on the factor graph in Figure 2, we define the MP scheme to approximate the marginal posterior PDFs. For a better readability, we use the following shorthand
notation: In a _distributed_ implementation of BP, the factor \(f_{ij}\triangleq f(z_{ij}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{j}^{(k)})\) represents the likelihood function with respect to the involved agents \(i\) and \(j\) at time \(k\) since only measurement \(z_{i,j}^{(k)}\) is available at node \(\mathbf{x}_{i}^{(k)}\). Therefore \(f_{ij}\neq f_{ji}\). In a _centralized_ implementation, both measurements between agent \(i\) and \(j\) at time \(k\) are available. Therefore the factor is given as the product of the likelihood of both measurements as \(f_{ij}\triangleq f(z_{i,j}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{j}^{(k)})f(z_{j,i}^{ (k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{j}^{(k)})\), which results in \(f_{ij}=f_{ji}\). The factor \(f_{i}^{(k)}\triangleq f(\mathbf{x}_{i}^{(k)}|\mathbf{x}_{i}^{(k-1)})\) corresponds to the state transition PDF. At time \(k=0\) it corresponds to the prior PDF \(f(\mathbf{x}_{i}^{(0)})\). The factor \(f_{ai}\triangleq f(z_{i,a}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{a})\) represents information from an anchor measurement. Since the factor graph has loops, we use an iterative MP scheme to approximate the marginal PDF (belief) of agent state \(i\) at time step \(k\). We define the belief at MP iteration \(u\in\{1,\ldots,U\}\) as the product of all incoming messages as
\[b^{[u]}(\mathbf{x}_{i}^{(k)})=\eta(\mathbf{x}_{i}^{(k)})\prod_{a\in\mathcal{A}_{i}^{( k)}}\varphi_{a}(\mathbf{x}_{i}^{(k)})\prod_{j\in\mathcal{D}_{i}^{(k)}}\nu_{j}^{[u-1 ]}(\mathbf{x}_{i}^{(k)}). \tag{14}\]
The messages are defined in the following manner: The message representing the state transition of agent \(i\) is given as
\[\eta(\mathbf{x}_{i}^{(k)})=\int f(\mathbf{x}_{i}^{(k)}|\mathbf{x}_{i}^{(k-1)})b^{[U]}(\mathbf{ x}_{i}^{(k-1)})d\mathbf{x}_{i}^{(k-1)} \tag{15}\]
whereas the message from anchor \(a\) to agent \(i\) is
\[\varphi_{a}(\mathbf{x}_{i}^{(k)}) =\int f(z_{i,a}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{a})\delta(\mathbf{x}_{a}-\bm {x}_{\text{{true}},a})d\mathbf{x}_{a}\] \[=f(z_{i,a}|\mathbf{x}_{i}^{(k)};\mathbf{x}_{\text{{true}},a}) \tag{16}\]
where \(\mathbf{x}_{\text{{true}},a}\) corresponds to the true position of anchor \(a\). Using the extrinsic information \(\psi_{i}^{[u-1]}(\mathbf{x}_{j}^{(k)})\) from the cooperative agent \(j\), the messages of the cooperative part can be written in the form of
\[\nu_{j}^{[u-1]}(\mathbf{x}_{i}^{(k)})=\int f(z_{i,j}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x} _{j}^{(k)})\psi_{i}^{[u-1]}(\mathbf{x}_{j}^{(k)})d\mathbf{x}_{j}^{(k)} \tag{17}\]
for a _distributed_ implementation since only measurement \(z_{i,j}^{(k)}\) is available at node \(\mathbf{x}_{i}^{(k)}\). In a _centralized_ manner, it is given as
\[\nu_{j}^{[u-1]}(\mathbf{x}_{i}^{(k)}) =\int f(z_{i,j}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{j}^{(k)})\] \[\times f(z_{j,i}^{(k)}|\mathbf{x}_{i}^{(k)},\mathbf{x}_{j}^{(k)})\psi_{i}^ {[u-1]}(\mathbf{x}_{j}^{(k)})d\mathbf{x}_{j}^{(k)} \tag{18}\]
since both measurements between agent \(i\) and \(j\) at time \(k\) are available. The extrinsic information is given as
\[\psi_{i}^{[u]}(\mathbf{x}_{j}^{(k)})=\eta(\mathbf{x}_{j}^{(k)})\prod_{a\in\mathcal{A}_{ j}^{(k)}}\varphi_{a}(\mathbf{x}_{j}^{(k)})\prod_{l\in\mathcal{D}_{j}^{(k)}\backslash\{i \}}\nu_{l}^{[u-1]}(\mathbf{x}_{j}^{(k)}) \tag{19}\]
where the notation \(\mathcal{D}_{j}^{(k)}\backslash\{i\}\) indicates that \(i\) is excluded from the set \(\mathcal{D}_{j}^{(k)}\). It is very common to approximate the extrinsic information by the corresponding belief, resulting in \(\psi_{i}^{[u]}(\mathbf{x}_{j}^{(k)})\approx b^{[u]}(\mathbf{x}_{j}^{(k)})\)[18, 20, 24]. This reduces the computational complexity significantly since it avoids calculating the extrinsic information, which is different for each cooperating agent pair. An additional benefit is that it also reduces the communication between the agents since exchanging extrinsic information requires point-to-point communication, whereas the belief can be broadcast [18, 20, 24]. Throughout the paper, we use the approximation of extrinsic information.
The agent marginal PDF \(f(\mathbf{x}_{i}^{(0:k)}|\mathbf{x}^{(1:k)})\) is approximated up to a normalization constant by the belief \(b^{[u]}(\mathbf{x}_{i}^{(k)})\). We estimate the state of the \(i\)-th agent at the end of the MP iterations according to the MMSE estimator [49] as
\[\bar{\mathbf{x}}_{i}^{(k)}=\int\mathbf{x}_{i}b^{[U]}(\mathbf{x}_{i}^{(k)})\text{d}\mathbf{x}_{ i}^{(k)}. \tag{20}\]
### _SIR-BP Algorithm_
We represent the belief at MP iteration \(u\) with a weighted set of particles \(\{w_{i}^{(k)[u],m},\mathbf{x}_{i}^{(k)[u],m}\}_{m=1}^{M}\). For further insights, please refer to [20]. After each iteration \(u\), we use systematic resampling [36] to approximate the belief of the \(i\)th agent state by a set of equally weighted particles as \(\{1/M,\mathbf{x}_{i}^{(k)[u],m}\}_{m=1}^{M}\), where \(M\) is the number of particles. To avoid particle degeneracy after resampling, we can use regularization to convolve the resampled set of particles with a kernel that could be estimated or predefined [55]. I.e., the \(m\)-th particle \(\hat{\mathbf{x}}_{i}^{(k)[u],m}\) is drawn from a Gaussian distribution with a mean value of \(\mathbf{x}_{i}^{(k)[u],m}\) and a covariance of \(\Sigma_{r}\).
### _PF-BP Algorithm_
This approach uses the same BP MP to approximate the marginal PDF of the state as mentioned in Section V-A. The only difference is that instead of a point-wise multiplication of the incoming messages at a variable node, we use particle flow to determine the product of the messages. We represent the agent state \(i\) at time \(k\) by a set of equally weighted particles \(\{1/M,\mathbf{x}_{i}^{(k)},\}_{m=1}^{M}\). In the following, we present the particle-based implementation of PF-BP.
Comparing to Section IV-B and Appendix B, the flow of the \(m\)-th particle, representing the approximate marginal posterior
PDF of agent \(i\) at time step \(k\), pseudo-time step \(\lambda_{l}\) and message passing iteration \(u\) is given as
\[\mathbf{x}_{\lambda_{l},i}^{(k)[u],m}=\mathbf{x}_{\lambda_{l-1},i}^{(k)[0],m}+\tilde{ \mathbf{\zeta}}(\mathbf{x}_{\lambda_{l-1},i}^{(k)[0],m},\mathbf{x}_{\lambda i}^{(k)[u-1],m}, \lambda_{l})\Delta_{l}. \tag{21}\]
This recursive equation represents the particle-based multiplication of the incoming messages \(\varphi_{a}(\mathbf{x}_{i}^{(k)})\) and \(\nu_{j}^{[u-1]}(\mathbf{x}_{i}^{(k)})\) for \(a\in\mathcal{A}_{i}^{(k)}\) and \(j\in\mathcal{D}_{i}^{(k)}\). The message \(\eta(\mathbf{x}_{i}^{(k)})\) is obtained by propagating the particle representation through the motion model. Therefore, we define the \(m\)-th particle, drawn from the proposal PDF as \(\mathbf{x}_{\lambda_{l}=0,i}^{(k)[u=0],m}=\mathbf{x}_{i}^{(k|k-1),m}\), being equal to the predicted particle by the motion model.
The variable \(\mathbf{x}_{\rightarrow i}^{(k)[u]}\) can be seen as a joint state representing the beliefs of agents that perform measurements to agent \(i\) at time \(k\), evaluated at MP iteration \(u\). \(\mathbf{x}_{\rightarrow i}^{(k)[u],m}\) indicates the \(m\)-th particle of the stacked representation of this joint state. It will be explained in what follows. The particles represented in (21) at \(\lambda=1\) do not exactly match the particles drawn from the corresponding proposal density. Therefore, we have to use the invertible flow, as mentioned in [43] and recalculate the weights of the particles. This is done based on the particle representation at the end (\(\lambda=1\)) and the beginning (\(\lambda=0\)) of the flow as
\[w_{i}^{(k)[u],m}\propto \frac{f(\mathbf{x}_{\rightarrow i,1}^{(k)[u],m}|\mathbf{x}_{i}^{(k-1),m}) }{f(\mathbf{x}_{\lambda_{0}=0,i}^{(k)[u],m}|\mathbf{x}_{i}^{(k-1),m})}\] \[\times\prod_{j\in\mathcal{A}_{i}^{(k)}\cup\mathcal{D}_{i}^{(k)}} f(z_{i,j}|\mathbf{x}_{\lambda=1,i}^{(k)[u],m},\mathbf{x}_{j}^{(k)[u-1],m}). \tag{22}\]
The belief of agent state \(i\) at time \(k\) and MP iteration \(u\), given in (14), is represented by the weighted set of particles \(\{w_{i}^{(k)[u],m},\mathbf{x}_{\lambda=1,i}^{(k)[u],m}\}_{m=1}^{M}\). Using the weighted particle representation, we perform systematic resampling to approximate \(b^{[u]}(\mathbf{x}_{i}^{(k)})\) by a set of particles with uniform weights \(\{1/M,\mathbf{x}_{i}^{(k)[u],m}\}_{m=1}^{M}\) where we again drop the index \(\lambda\) to indicate the resampled particles. At this point, we want to mention that the final approximation of the marginal posterior PDF at MP iteration \(U\) is indicated by \(\{1/M,\mathbf{x}_{i}^{(k),m}\}_{m=1}^{M}\), neglecting the MP index.
We introduce a new variable \(\mathbf{\chi}_{i}^{(k)[u]}\) that corresponds to the resampled set of particles. The covariance matrix of the belief of agent \(i\) is indicated as \(\mathbf{P}_{i}^{(k)[u]}\). Even though it is possible, we do not determine \(\mathbf{P}_{i}^{(k)[u]}\) using the particle representation but based on the UKF update step as described in what follows. We chose this approach since it was observed that the particle representation could collapse after resampling.
For each MP iteration \(u\), we let the particles of the agent state \(i\) flow for all \(\lambda\)-steps. In addition, we define \(\mathbf{x}_{\rightarrow i}^{(k)[u-1]}=[\mathbf{\chi}_{j}^{(k)[u-1]}]_{j\in\mathcal{D} _{i}^{(k)}}\), which indicate the states of agents that perform a measurement to agent \(i\) at time \(k\), and the sample-based mean value of it as \(\mathbf{\overline{x}}_{\rightarrow i}^{(k)[u-1]}\). The states of the cooperating agents are represented by their beliefs at the previous iteration \([u-1]\). Furthermore we define the stacked representation of the joint state of agent \(i\) at pseudo time step \(\lambda_{l-1}\) and its cooperative partners at MP iteration \(u\) as \(\mathbf{\beta}_{\lambda_{l-1},i}^{(k)[u]=[\mathbf{x}_{\lambda_{l-1},i}^{(k)[0],m},\mathbf{ x}_{\rightarrow i}^{(k)[u-1]\text{T}}]}^{\text{T}}\) and its sample-based mean value as \(\mathbf{\overline{\beta}}_{\lambda_{l-1},i}^{(k)[u]}=[\mathbf{\overline{x}}_{\lambda_{l -1},i}^{(k)[0]\text{T}},\mathbf{\overline{x}}_{\rightarrow i}^{(k)[u-1]\text{T}}]^ {\text{T}}\). With that, we can write the drift of each particle \(m\) as
\[\mathbf{\zeta}(\mathbf{x}_{\lambda_{l-1},i}^{(k)[0],m},\mathbf{x}_{\rightarrow i}^{(k)[u -1],m},\lambda_{l})=\mathbf{A}_{i}\mathbf{\beta}_{\lambda_{l-1},i}^{(k)[u],m}+\mathbf{c}_ {i} \tag{23}\]
with \(\mathbf{A}_{i}\triangleq\mathbf{A}(\mathbf{x}_{\lambda_{l-1},i}^{(k)[0]},\mathbf{x}_{ \rightarrow i}^{(k)[u-1]},\lambda_{l})\) and \(\mathbf{c}_{i}\triangleq\mathbf{c}(\mathbf{x}_{\lambda_{l-1},i}^{(k)[0]},\mathbf{x}_{ \rightarrow i}^{(k)[u-1]},\lambda_{l})\). For the flow update in (21), \(\tilde{\mathbf{\zeta}}(\cdot)\) consists of the first \(N_{\text{D}}\) elements of \(\mathbf{\zeta}(\cdot)\) in (23). This corresponds to the drift of the marginal distribution of agent state \(i\), since the dimension of \(\mathbf{x}_{i}^{(k)}\) is \(N_{\text{D}}\). The flow of the mean value of the agent state is similar to (21) where we replace the particle representation of the agent state with the mean values as in (43).
With that in mind, we can define \(\mathbf{A}_{i}\) and \(\mathbf{c}_{i}\) as
\[\mathbf{A}_{i}= -\frac{1}{2}\tilde{\mathbf{P}}_{i}\mathbf{H}_{i}^{(k)\text{T}}(\lambda_{l} \mathbf{H}_{i}^{(k)}\tilde{\mathbf{P}}_{i}\mathbf{H}_{i}^{(k)\text{T}}+\mathbf{R}_{i}^{(k)})^{-1 }\mathbf{H}_{i}^{(k)} \tag{24}\] \[\mathbf{c}_{i}= (\mathbf{I}_{N_{\text{D}}([\mathcal{D}_{i}^{(k)}]+1)}\!+\!2\lambda_{l }\mathbf{A}_{i})\left[(\mathbf{I}_{N_{\text{D}}([\mathcal{D}_{i}^{(k)}]+1]}\!+\!\lambda_{l }\mathbf{A}_{i})\right.\] \[\times\left.\tilde{\mathbf{P}}_{i}\mathbf{H}_{i}^{(k)\text{T}}(\mathbf{R}_{i}^ {(k)})^{-1}(\mathbf{z}_{i}\!-\!\mathbf{\nu}_{i})+\mathbf{A}_{i}\mathbf{\overline{\beta}}_{ \lambda_{0},i}^{(k)[u]}\right] \tag{25}\]
with
\[\mathbf{\nu}_{i}=[h(\mathbf{\overline{x}}_{\lambda_{l-1},i}^{(k)[0]},\mathbf{\overline{\vartheta }}_{q}^{(k)})]_{\mathcal{G}_{i}^{(k)}\cup\mathcal{D}_{i}^{(k)}}-\mathbf{H}_{i}^{(k)} \mathbf{\overline{\beta}}_{\lambda_{l-1},i}^{(k)[u]} \tag{26}\]
where \(\mathbf{\nu}_{i}\) corresponds to the model mismatch due to the linearization and \(\mathbf{\overline{\vartheta}}^{(k)}=[\mathbf{x}_{\text{twe}^{\text{T}},\mathcal{A}_{1} }(1),\dots,\mathbf{x}_{\text{twe}^{\text{T}},\mathcal{A}_{i}(|\mathcal{A}_{i}|), \mathbf{\overline{x}}_{\rightarrow i}^{(k)[u-1]\text{T}}]}^{\text{T}}\), \(\mathbf{z}_{i}=[z_{i,j}]_{j\in\mathcal{A}_{i}^{(k)}\cup\mathcal{D}_{i}^{(k)}}\), and \(\mathbf{\overline{x}}_{\lambda_{l},i}^{(k)[u]}=(1/M)\sum_{m=1}^{M}\mathbf{x}_{\lambda_{l },i}^{(k)[u],m}\). In what follows, we define all other involved vectors and matrices.
The observation matrix \(\mathbf{H}_{i}^{(k)}\) has the dimensions \((|\mathcal{A}_{i}^{(k)}|+|\mathcal{D}_{i}^{(k)}|)\times N_{\text{D}}(1+| \mathcal{D}_{i}^{(k)}|)\), which is equivalent to the number of measurements of agent \(i\) times the sum of the dimensions of all involved states. \(\mathbf{H}_{i}^{(k)}\) consists of the \(N_{\text{D}}\)-dimensional elements
\[[\mathbf{H}_{i}^{(k)}]_{\tilde{o},N_{\text{D}}\tilde{p}-N_{\text{D}}+1:N_{\text{D}} \tilde{p}
where \(\mathbf{P}_{i}^{(k|k-1)}\) is the predicted covariance matrix of agent state \(i\) and \(\mathbf{P}_{\text{m}}^{(k)[u-1]}\) are the covariance matrices of the states of all other connected agents \(m\in\mathcal{D}_{i}^{(k)}\) determined at flow time \(\lambda=1\) of the previous MP iteration \(u-1\). Similarly to [43], these covariance matrices are calculated, respectively, using a UKF covariance matrix prediction and update, i.e.,
\[\mathbf{P}_{i}^{(k|k-1)} =\mathbf{F}\mathbf{P}_{i}^{(k-1)[U]}\mathbf{F}^{\text{T}}+\mathbf{Q} \tag{29}\] \[\mathbf{P}_{i}^{(k)[u]} =\mathbf{P}_{i}^{(k|k-1)}-\tilde{\mathbf{K}}^{[u]}\tilde{\mathbf{P}}_{zz} \tilde{\mathbf{K}}^{[u]\text{T}} \tag{30}\]
where \(\tilde{\mathbf{K}}^{[u]}\) again represents the Kalman gain at MP iteration \(u\) since it depends on the beliefs of the involved agent states, and \(\tilde{\mathbf{P}}_{zz}\) is the measurement covariance matrix. As discussed above, we perform systematic resampling at the end of each MP iteration resulting in \(\{1/M,\mathbf{x}_{i}^{(k)[u],m}\}_{m=1}^{M}\). Note that the covariance matrices \(\mathbf{P}_{i}^{(k)[u]}\) are calculated at sample-based mean value \(\overline{\mathbf{x}}_{i}^{(k)[u]}\). In addition to the particles, we represent the marginal posterior PDF of agent \(i\) at time \(k\) and MP iteration \(u\), with a mean value and a covariance matrix. At MP iteration \(U\), we determine the MMSE estimate of each agent state according to the sample-based mean value of each agent state. We use an exponentially spaced \(\lambda\) as suggested in [38], which results in a more accurate position estimate in our simulations compared to a linear spacing with the same number of steps. A summary of the particle-based implementation of PF-BP is provided in Algorithm 1.
```
1:for\(i=1:|\mathcal{C}|\)do
2: initialize Gaussian prior distribution with mean value \(\overline{\mathbf{x}}_{i}^{(0)}\) and covariance matrix \(\mathbf{P}_{i}^{(0)}\).
3: draw particles \(\{1/M,\mathbf{x}_{i}^{(0),m}\}_{m=1}^{M}\) from prior distribution
4:endfor
5:for k =1:K do
6:for\(i=1:|\mathcal{C}|\)do
7: predict particles and covariance matrix according to (1) and (29).
8: determine sample-based mean value \(\overline{\mathbf{x}}_{\lambda=0,i}^{(k)[0]}\)
9:endfor
10:for\(u=1:U\)do
11:for\(i=1:|\mathcal{C}|\)do
12: calculate flow according to (21) (using (23)-(28)) for all \(\lambda\)-steps
13: resample particles according to (22) to get \(\{1/M,\mathbf{x}_{i}^{(k)[u],m}\}_{m=1}^{M}\)
14: determine sample-based mean value \(\overline{\mathbf{x}}_{i}^{(k)[u]}\)
15: calculate \(\mathbf{P}_{i}^{(k)[u]}\) according to (30) at \(\overline{\mathbf{x}}_{i}^{(k)[u]}\)
16: optional: regularization of resampled particles and \(\mathbf{P}_{i}^{(k)[u]}\) according to (33)
17:endfor
18:endfor
19:for\(i=1:|\mathcal{C}|\)do
20: determine MMSE estimate according to sample-based mean value \(\overline{\mathbf{x}}_{i}^{(k)[U]}\)
21:endfor
22:endfor
```
**Algorithm 1** Proposed PF-BP Algorithm
## VI Evaluation of Algorithms
In this section, we evaluate the proposed algorithm based on dynamic networks for various network sizes and connectivities. We use a constant acceleration motion model in 9D (three-dimensional position, velocity, and acceleration state vectors) given in (1). We compare the performance to a bootstrap particle-based BP algorithm (termed SIR-BP) described in Section V-B, a SP-BP algorithm [25], and to a fully joint particle-based EDH filter [43]. Furthermore, we show the theoretical performance limit w.r.t. the PCRLB [5, 45]. We determine the performance in terms of the root-mean-square error (RMSE) of the MMSE estimates of position (RMSE\({}_{\text{p}}\)), velocity (RMSE\({}_{\text{r}}\)) and acceleration (RMSE\({}_{\text{a}}\)), the cumulative frequency (CF) of the position error, and the runtime per time step. In addition, we show the probability of outage of the position error versus a position error threshold. The outage is defined as position errors above the position error threshold. The uncertainty of the measurement model is \(\sigma=0.1\) m. In the following simulations, we use 9 anchors and two different numbers of agents defined as \(N_{\text{agent}}\in\{5,20\}\). The true agent positions are uniformly drawn for each realization in a volume of 20 m \(\times\) 20 m \(\times\) 20 m. The true velocity of each agent is initialized with a unit vector in the direction of the center of the scenario, while the true acceleration is initialized with zero. The agent trajectories are generated in 3D based on a constant acceleration model given in (1) with \(\Delta T=0.1\) s and the standard deviation of \(\mathbf{u}^{(k)}\) is \(\sigma_{a}\) = 0.15 m/s\({}^{2}\). The prior distribution for position (except for the
Fig. 3: A realization of the trajectories for 20 agents. Anchors are given in black. The initial positions of the agents are marked with red diamonds, and the trajectory is given in red. The colored scatter points indicate how many connections an agent has to anchors along its trajectory. The communication range is \(r_{\text{max}}=18\) m. Agents have at least one connection to an anchor at every time step.
SIR-BP algorithm), velocity and acceleration of each agent state \(\textbf{x}_{i}\) is initialized with a Gaussian distribution with a mean value of \(\overline{\textbf{x}}_{i}^{(0)}=[\overline{\textbf{p}}_{i}^{(0)\text{T}} \overline{\textbf{v}}_{i}^{(0)\text{T}}\overline{\textbf{a}}_{i}^{(0)\text{T}}] ^{\text{T}}\), which will be defined later on, and a covariance matrix according to
\[\boldsymbol{P}_{i}^{(0)}=\text{diag}([(\boldsymbol{\sigma}_{p}^{2})^{\text{T}},\Delta T^{2}(\boldsymbol{\sigma}_{a_{\text{min}}}^{(2)\text{T}},(\boldsymbol {\sigma}_{a_{\text{min}}}^{2})]) \tag{31}\]
where \(\boldsymbol{\sigma}_{p}^{2}=[\sigma_{px}^{2},\sigma_{py}^{2},\sigma_{px}^{2}] ^{\text{T}}\). We define the prior standard deviation of the position to be identical in all dimensions and set it to 20 m. For \(\boldsymbol{\sigma}_{a_{\text{min}}}^{2}\), we also define it to be identical in all dimensions. It is given as \(\boldsymbol{\sigma}_{a_{\text{min}}}^{2}=[(10\sigma_{a})^{2},(10\sigma_{a})^{2 },(10\sigma_{a})^{2}]^{\text{T}}\). The mean values \(\overline{\textbf{v}}_{i}^{(0)}\) and \(\overline{\textbf{a}}_{i}^{(0)}\), corresponding to velocity and acceleration respectively, are drawn from the zero-mean Gaussian distribution defined by the covariance matrix in (31). The mean value \(\overline{\textbf{p}}_{i}^{(0)}\), corresponding to the position, is drawn uniformly in the support volume. For the SIR-BP algorithm, the particles representing the position are drawn uniformly in the support volume. In contrast, for the EDH filter and the PF-BP algorithm, the particles are drawn from the Gaussian prior distribution. One realization of the dynamic scenario with 20 agents and a communication range of \(r_{\text{max}}=18\) m is given in Figure 3. This figure also shows the anchors' placement at the corners of the support volume and the placement of a single anchor in the center. In addition, we indicate in color how many anchor measurements an agent has at each point of its trajectory. The setup is chosen such that each agent lies within the communication range of at least one anchor at each time step. For an agent to be fully localizable based on anchor measurements, one needs measurements from four different anchors where the positions of the anchors do not lay on a plane. As we see in Figure 3, agents would not be localizable without cooperative measurements for most of the trajectories.
We simulate 200 trajectories of the agents for \(K=40\) time-steps. We use 20 \(\lambda\)-steps and 200 particles for the PF-based algorithms and 100 000 particles for the SIR-BP algorithm. As an additional benchmark, we use 1 000 000 particles for the SIR-BP algorithm indicated as SIR-BP\({}_{\text{Mil}}\). We fix the number of MP-iterations to 2. More iterations would be more time-consuming, and the benefit regarding the convergence behavior of the BP-based algorithms would be negligible. Further insights regarding this topic is provided later on in this section. Since it is common to use regularization to avoid particle degeneracy [55], we investigate the impact of regularization on all presented methods. For that purpose, we regularize velocity and acceleration with \(\sigma_{r_{\text{vol}}}=0.15\) m/s and \(\sigma_{r_{\text{max}}}=0.15\) m/s\({}^{2}\) for all investigate algorithms. This is done as follows: We define a Gaussian kernel with a covariance matrix
\[\boldsymbol{\Sigma}_{r}=\text{diag}([0,0,0,\sigma_{r\text{vol}}^{2},\sigma_{r \text{vol}}^{2},\sigma_{r\text{vol}}^{2},\sigma_{r\text{vol}}^{2},\sigma_{r \text{max}}^{2},\sigma_{r\text{max}}^{2}]). \tag{32}\]
For the UKF update and SP-BP, we add this covariance to the estimated covariance of each marginal state. Using for example (30), it would result in
\[\boldsymbol{P}_{i}^{(k)[u]}=\boldsymbol{P}_{i}^{(k|k-1)}-\bar{\boldsymbol{K} }^{[u]}\bar{\boldsymbol{P}}_{zz}\bar{\boldsymbol{K}}^{[u]\text{T}}+\boldsymbol {\Sigma}_{r}. \tag{33}\]
For the particle-based methods, we draw for each particle after resampling \(\boldsymbol{x}_{i}^{(k),m}\) a new particle \(\boldsymbol{\dot{x}}_{i}^{(k),m}\), which is distributed according to a Gaussian distribution with mean value \(\boldsymbol{x}_{i}^{(k),m}\) and covariance \(\boldsymbol{\Sigma}_{r}\). Results with regularization are indicated with dashed or dotted lines in the following figures and with "reg" in the legends.
#### Iv-B1 Scenario I
We evaluate a scenario with 5 agents for different communication ranges \(r_{\text{max}}\). For \(r_{\text{max}}=18\) m, agents have at least one connection to an anchor, which is a similar scenario as given in Figure 3. The results for that setting are given in Figure 4a-4d where we show the CF of the overall trajectory and the RMSE of position, velocity, and acceleration for each time step. We see clearly that the EDH filter and the proposed PF-BP algorithm outperform the SP-BP algorithm and the SIR-BP algorithm significantly in terms of accuracy without regularization. Table I shows the runtime per time-step for each algorithm with respect to a joint and a distributed processing. For a distributed processing, the runtime is given per time-step and agent. For a small number of agents and the chosen numbers of particles, the SP-BP algorithm outperforms all other methods in terms of runtime.
At the first few time-steps, some of the marginal posterior PDFs of the agent states are still multimodal, which can be well represented by the particles of the SIR-BP algorithm. Hence, the SIR-BP algorithm converges much faster to the "correct mode" of the posterior PDF leading to a much lower position error at the beginning of the agent trajectories (see Figure 4b). However, after a few steps, we can observe that the SIR-BP algorithm diverges in almost every simulation run since the chosen number of particles (100 000) is still too small to sufficiently represent the 9-D agent state vectors. With regularization, the SIR-BP algorithm achieves a much better performance. However, we can still observe a significant bias in the RMSE, indicating that the chosen number of particles is still too low. With 1 000 000 particles and regularization, the SIR-BP algorithm reaches almost PCRLB level; however, with the cost of a significant increase of runtime (see Table I) making it not applicable for real-time applications and systems with memory restrictions. The small bias, that occurs, can be avoided using even more particles (not shown). The SP-BP algorithm also benefits from the regularization since it leads to faster convergence of the MMSE estimate over time towards the PCRLB. However, the achievable accuracy is still very low compared to the PCRLB. Furthermore, it was observed that the posterior covariance matrices provided by SP-BP are significantly overconfident (not shown). For both PF-based methods, regularization has only a slight impact.
For a fully connected agent network (highly informative measurement models), we see clearly in Figure 4e-4h the superiority of both PF-based methods. The proposed PF-BP algorithm reaches the theoretical performance limit much faster compared to the other methods. The EDH filter reaches
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} & \(r_{\text{max}}\) & SP-BP & EDH & PF-BP & SIR-BP & SIR-BP\({}_{\text{Mil}}\) \\ \hline \hline joint & 18 & 3 ms & 10 ms & 50 ms & 0.44 s & 4.4 s \\ \cline{2-7} & \(\infty\) & 4 ms & 20 ms & 60 ms & 0.51 s & 5.1 s \\ \hline distr. & 18 & 0.6 ms & - & 10 ms & 0.09 s & 0.9 s \\ \cline{2-7} & \(\infty\) & 0.8 ms & - & 12 ms & 0.10 s & 1.2 s \\ \hline \end{tabular}
\end{table} TABLE I: Runtime per time step for the results with 5 agents with respect to a joint and a distributed (distr.) processing. For the distributed processing, the results are given in runtime per agent.
the PCRLBs after a few time-steps. The SP-BP algorithm needs significantly more time-steps until converging towards the PCRLBs. Using 100 000 particles, the SIR-BP algorithm obviously diverges with and without regularization in every simulation run. Even with 1 000 000 particles, the SIR-BP algorithm only converges if regularization is activated. Figure 4f shows that in this case, the SIR-BP algorithm also reaches the position PCRLB; however, due to the regularization, the velocity and acceleration RMSEs are biased. As a consequence of the large runtime and huge memory requirements, we do not present results with even more particles.
Both PF-based methods reach the PCRLBs without the need for regularization. Figure 4g shows that regularizing the PF-based methods only induces error biases to all states and is counterproductive for highly informative measurement models. Figure 4g also indicates that the SP-BP and SIR-BP algorithms benefit from the regularization since their estimates of velocity and acceleration need more time-steps to converge or even diverge without regularization. We conclude that regularization should be treated cautiously, as it has a sensitive effect on error biases.
The runtimes of the investigated algorithms for both agent network are reported in Table I. They were determined based on a centralized and a distributed processing. The results indicate that even though PF-BP has a higher computation time compared to the EDH filter if processed centralized, the per-agent computations for a distributed processing are lower or of similar computation time.
In addition, we investigated the convergence behaviour of our proposed method with respect to \(r_{\text{max}}=18\) m. Figure 5 depicts the convergence over time-steps of the trajectory towards the PCRLB with regard to different MP iterations and different numbers of \(\lambda\)-steps. It can be observed that a larger number of \(\lambda\)-steps is always more beneficial than more MP iterations. Therefore, we fixed the number of MP iterations to 2 and the number of \(\lambda\)-steps to 20 for all simulations as mentioned in the beginning of this section. The result with this set of parameters is indicated in green.
Furthermore, we show in Figure 6 the probability of outage
Fig. 4: Influence of the communication range \(r_{\text{max}}\) on the performance in terms of accuracy for 5 agents and \(\sigma=0.1\) m for 200 simulation runs. We show the CF of the position error over the whole trajectory as well as the RMSE of the agent states at each time step, where we look separately at the position, velocity, and acceleration. The theoretical performance limit is given in terms of the PCRLB. Regularization is indicated by reg.
Fig. 5: Convergence behaviour of PF-BP with respect to message passing iterations and pseudo-time-steps. The results are averaged over 200 simulation runs. The setting corresponding to the green line is used for all other simulations.
\(P_{\text{out}}(\epsilon>\tau)\) of the position error \(\epsilon\), where \(\tau\) is the position threshold in meters. We evaluate it at three time-steps \(k\in\{1,20,40\}\). At \(k=1\), we can see the benefits of the different algorithms. Figure 5(a) shows for \(r_{\text{max}}=18\) m at \(k=1\), that the SIR-BP algorithm with 1 000 000 provides the most accurate results, followed by SIR-BP with 100 000. This is because not every agent is localizable in the first step, and as mentioned above, SIR-BP can represent any PDF if enough particles are available. In Figure 5(d), there are no multimodalities in the position state due to the fully connected scenario. Therefore the unimodal approximation of the PF-BP algorithm is sufficient to represent the agent state correctly. Hence, it achieves higher accuracy than the SIR-BP with 1 000 000. For \(k=20\), all particle-based methods have a similar performance except the SIR-BP algorithm without regularization. The estimates of SP-BP are still biased in Figure 5(b), whereas they are close to the optimum result in Figure 5(e). At the last step, we see that if converged, all algorithms perform approximately the same, which is equivalent to the results in Figure 4(f) where all investigated methods reach the PCRLB at the last time step.
#### Vi-A2 Scenario II
In Figure 7, we show the results for 20 agents and a communication range of \(r_{\text{max}}\) = 18 m. The results look similar to those given in Figure 4 but with two major differences. At first, we can observe that none of the investigated methods reach the PCRLB with the defined parametrization. However, PF-BP has the smallest bias. Furthermore, we see that the estimates of the PF-based methods at \(k=1\) differ significantly. Since the joint state now has 180 dimensions compared to the 45 dimensions of the scenario with five agents, the EDH filter has many more problems representing the state correctly. The PF-BP algorithm determines the marginal posterior PDFs of the agents and calculates the flow only based on a subset of the joint state, i.e., the state of agent \(i\) and all other agents connected to it. Therefore the state dimension is much smaller, which also reduces the effect of particle degeneracy. This leads, with the same parameter setting, to a similar result to the one with five agents in Figure 5(b). The discrepancy to the SIR-BP algorithm at \(k=1\) shows that the PF-BP algorithm can not resolve multimodalities. We can observe that all investigated methods benefit from the regularization for this scenario and the specific parameter setting. The RMSE of the PF-BP algorithm has a constant bias without regularization in Figure 6(b). This could be resolved with more particles, which increases the runtime. The same is true for the EDH filter. We can also see that the PF-based methods are the only ones that can reach the PCRLB within the time of the trajectory with a reasonable calculation time. The runtimes per time step are summarized in Table II for a joint and a distributed processing. We see that the SIR-BP algorithm has a long runtime and is, therefore, unsuitable for real-time applications. The PF-BP algorithm also has a larger runtime than the EDH filter but only if processed jointly, hence making it suitable for real-time applications. SP-BP outperforms all other methods in terms of runtime but does not converge at all to the theoretical limit of the estimation
\begin{table}
\begin{tabular}{c|c|c|c|c|c} & SP-BP & EDH & PF-BP & SIR-BP & SIR-BP\({}_{\text{Mi}}\) \\ \hline \hline joint & 0.07 s & 0.25 s & 0.9 s & 3.6 s & 40 s \\ \hline dist. & 0.004 s & - & 0.05 s & 0.18 s & 2 s \\ \end{tabular}
\end{table} TABLE II: Runtime per time step for the results with 20 agents with respect to a joint and a distributed (distr). processing. For the distributed processing, the results are given in runtime per agent.
Fig. 6: Probability of outage of the position error for the investigated algorithms for the scenario with five agents. The first row shows the probability of an outage for a communication range of \(r_{\text{max}}\)=18 m, whereas the second row presents the probability of an outage for the fully connected case. It is evaluated at certain time-steps \(k\). Regularization is indicated by reg.
performance.
Note that for highly informative prior distributions of the agent states at time \(k=1\), the PF-based methods would still have higher accuracy than the SIR-BP and SP-BP algorithms. However, specifically for the SP-BP algorithm, the difference is significantly smaller.
In what follows, we summarize the advantages and disadvantages of the comparison methods and the proposed algorithm.
* The SIR-BP algorithm requires many particles to represent the posterior PDFs of the 9-D agent states correctly. Therefore, the algorithm has a long runtime and requires significant memory. However, the SIR-BP algorithm has the potential to correctly represent the posterior PDFs of the agent states asymptotically in the number of particles. It can therefore capture multimodalities in the posterior PDFs.
* The SP-BP algorithm has low computational demand and, therefore, a low run time. However, it shows slow convergence toward smaller RMSEs for high dimensional agent states over time.
* The particle-based EDH filter is suitable for small agent networks since it provides PCRLB-level position accuracy and has a low runtime. However, for larger networks, the convergence of the MMSE estimates over time is relatively slow, i.e., it needs many time-steps to reach PCRLB-level. Due to the joint state representation, it also does not scale well in the number of agents.
* The proposed PF-BP algorithm provides high position accuracy at the PCRLB level and exhibits low running time per time step for distributed processing. It also converges quickly over time and scales well in the number of agents due to the possibility of a distributed implementation.
Regarding the communication overhead, we can draw the following conclusions: SP-BP and PF-BP use a Gaussian approximation, which means that Gaussian distributions represent the agent states. Therefore, each agent has to transmit only the mean value and the covariance corresponding to its belief instead of all particles, as is the case for SIR-BP. For PF-BP, each agent has to sample locally from that Gaussian distribution to perform the particle flow process in the measurement update step. The EDH cannot be implemented in a distributed manner, leading to the case where a central computation unit has to collect all measurements and perform the computation.
To make the advantages of the proposed method even clearer, the runtimes of the investigated algorithms were determined for centralized and distributed processing. The results indicate that even though PF-BP has a higher computation time compared to the EDH filter if processed centralized, the per-agent computations for a distributed processing are lower or of similar computation time.
## VII Conclusion
We have proposed a Bayesian method based on belief propagation (BP) and particle flow for cooperative localization and navigation. Our method is particularly suitable for scenarios with high-dimensional agent states and informative nonlinear measurement models. To avoid particle degeneracy in such scenarios, invertible PF is used to compute BP messages. As a result, the proposed PF-BP algorithm can reach position
Fig. 7: Visualization of the performance of the investigated algorithms in terms of accuracy for the scenario with 20 agents and \(r_{\text{max}}=18\) m. The first row shows the CF of the position error over the whole trajectory as well as the RMSE of the agent states at each time step, where we look separately at the position, velocity, and acceleration. The second row depicts the probability of outage of the position error at certain time-steps \(k\). The results are given for \(\sigma=0.1\) m for 200 simulation runs. Regularization is indicated by reg.
accuracy at PCRLB level in a cooperative localization scenario with 9-D agent states and range measurements. Our numerical results demonstrate a reduced computational demand and memory requirement compared to the conventional SIR-BP algorithm and a particle-based EDH filter applied to cooperative localization. In addition, the communication overhead is reduced significantly with respect to SIR-BP and is comparable to SP-BP, which relies on a similar Gaussian representation. We performed simulations with different numbers of agents and communication ranges, demonstrating the superior estimation performance of the proposed PF-BP approach compared to state-of-the-art reference methods. We highlight the benefits and disadvantages of each investigated method in various scenarios.
Possible future work is to extent the measurement model beyond Gaussian noise, like missed detections, clutter/false alarm measurements, and data association uncertainty of measurements [31, 33, 46, 47], or to cooperative radio signal-based SLAM algorithm with highly informative measurement models [28, 29, 56].
## Appendix A Derivation of the PF Equation
The drift term \(\boldsymbol{\zeta}(\boldsymbol{x}^{(k)},\lambda)\) can be determined using the FPE, which is given as
\[\frac{\partial f(\boldsymbol{x}^{(k)};\lambda)}{\partial\lambda}= -\nabla_{\boldsymbol{x}}^{\mathrm{T}}(f(\boldsymbol{x}^{(k)}; \lambda)\boldsymbol{\zeta}(\boldsymbol{x}^{(k)},\lambda))\] \[+\frac{1}{2}\nabla_{\boldsymbol{x}}^{\mathrm{T}}(f(\boldsymbol{ x}^{(k)};\lambda)\boldsymbol{Q}(\boldsymbol{x}^{(k)},\lambda))\nabla_{\boldsymbol{x}} \tag{34}\]
where \(\boldsymbol{Q}(\boldsymbol{x}^{(k)},\lambda)\) corresponds to the diffusion term. The solutions of (34) for \(\boldsymbol{\zeta}(\boldsymbol{x}^{(k)},\lambda)\) can be categorized into zero-diffusion, i.e., \(\boldsymbol{Q}(\boldsymbol{x}^{(k)},\lambda)=0\)[37, 43] and nonzero-diffusion [38, 40]. The following two useful relations are used in the further derivation of the method:
1. Using the chain rule of the divergence, the fist term in (34) can be rewritten as \[\nabla_{\boldsymbol{x}}^{\mathrm{T}}(f(\boldsymbol{x}^{(k)}; \lambda)\boldsymbol{\zeta}(\boldsymbol{x}^{(k)},\lambda))\] \[= f(\boldsymbol{x}^{(k)};\lambda)\nabla_{\boldsymbol{x}}^{ \mathrm{T}}\boldsymbol{\zeta}(\boldsymbol{x}^{(k)},\lambda)+(\nabla_{ \boldsymbol{x}}^{\mathrm{T}}f(\boldsymbol{x}^{(k)};\lambda))\boldsymbol{ \zeta}(\boldsymbol{x}^{(k)},\lambda).\] (35)
2. Using (11), the left side of the FPE, namely the partial derivative with respect to \(\lambda\), can be rewritten as \[\frac{\partial f(\boldsymbol{x}^{(k)};\lambda)}{\partial\lambda}\] \[=f(\boldsymbol{x}^{(k)}|\boldsymbol{x}^{(k-1)})\left[\frac{ \partial f(\boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)})^{\lambda}}{\partial \lambda}\right]Z(\lambda)^{-1}\] \[\quad+f(\boldsymbol{x}^{(k)}|\boldsymbol{x}^{(k-1)})\ f( \boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)})^{\lambda}\left[\frac{\partial Z( \lambda)^{-1}}{\partial\lambda}\right]\] \[=f(\boldsymbol{x}^{(k)}|\boldsymbol{x}^{(k-1)})\ f( \boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)})^{\lambda}\] \[\quad\times\text{log}f(\boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)} )\ Z(\lambda)^{-1}-f(\boldsymbol{x}^{(k)}|\boldsymbol{x}^{(k-1)})\] \[\quad\times f(\boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)})^{\lambda }Z(\lambda)^{-2}\left[\frac{\partial Z(\lambda)}{\partial\lambda}\right]\] \[=f(\boldsymbol{x}^{(k)};\lambda)\left[\text{log}f(\boldsymbol{ z}^{(k)}|\boldsymbol{x}^{(k)})-Z(\lambda)^{-1}\frac{\partial Z(\lambda)}{\partial \lambda}\right]\] \[=f(\boldsymbol{x}^{(k)};\lambda)\left[\text{log}f(\boldsymbol{ z}^{(k)}|\boldsymbol{x}^{(k)})-\frac{\partial\text{log}Z(\lambda)}{\partial \lambda}\right].\] (36)
By assuming zero-diffusion, (34) simplifies to
\[\frac{\partial f(\boldsymbol{x}^{(k)};\lambda)}{\partial\lambda}=-\nabla_{ \boldsymbol{x}}^{\mathrm{T}}(f(\boldsymbol{x}^{(k)};\lambda)\boldsymbol{\zeta }(\boldsymbol{x}^{(k)},\lambda)). \tag{37}\]
Neglecting the derivative of the evidence \(Z(\lambda)\) with respect to \(\lambda\)[37], and substituting (36) and (35) into (37), we get
\[\text{log}f(\boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)})= -[f(\boldsymbol{x}^{(k)};\lambda)^{-1}\nabla_{\boldsymbol{x}}^{ \mathrm{T}}f(\boldsymbol{x}^{(k)};\lambda)]\boldsymbol{\zeta}(\boldsymbol{x}^ {(k)},\lambda)\] \[-\nabla_{\boldsymbol{x}}^{\mathrm{T}}\boldsymbol{\zeta}( \boldsymbol{x}^{(k)},\lambda) \tag{38}\]
resulting in
\[\nabla_{\boldsymbol{x}}^{\mathrm{T}}\boldsymbol{\zeta}(\boldsymbol {x}^{(k)},\lambda)= -\text{log}f(\boldsymbol{z}^{(k)}|\boldsymbol{x}^{(k)})\] \[-(\nabla_{\boldsymbol{x}}\text{log}f(\boldsymbol{x}^{(k)}; \lambda))^{\mathrm{T}}\boldsymbol{\zeta}(\boldsymbol{x}^{(k)},\lambda). \tag{39}\]
## Appendix B Implementation of the EDH filter
Given (12) and (13), we will describe here the state representation, matrices and vectors for the implementation of the EDH. Regarding (13), \(\boldsymbol{A}_{\lambda}^{(k)}\) and \(\boldsymbol{c}_{\lambda}^{(k)}\) are given as
\[\boldsymbol{A}_{\lambda}^{(k)}= -\frac{1}{2}\boldsymbol{P}^{(k|k-1)}\boldsymbol{H}^{(k)\mathrm{T}}\] \[\times(\lambda\boldsymbol{H}^{(k)}\boldsymbol{P}^{(k|k-1)} \boldsymbol{H}^{(k)\mathrm{T}}+\boldsymbol{R}^{(k)})^{-1}\boldsymbol{H}^{(k)} \tag{40}\] \[\boldsymbol{c}_{\lambda}^{(k)}= (\boldsymbol{I}_{N_{[0]}|\boldsymbol{c}|}+2\lambda\boldsymbol{A})\] \[\times([\boldsymbol{I}_{N_{[0]}|\boldsymbol{c}|}+\lambda \boldsymbol{A})\boldsymbol{P}^{(k|k-1)}\boldsymbol{H}^{(k)\mathrm{T}}( \boldsymbol{R}^{(k)})^{-1}\] \[\times(\boldsymbol{z}^{(k)}+\boldsymbol{\nu}^{(k)})+\boldsymbol{A} \boldsymbol{\overline{x}}_{\lambda=0}^{(k)}] \tag{41}\]
where \(\boldsymbol{\nu}^{(k)}=h(\boldsymbol{\overline{x}}_{\lambda}^{(k)})-\boldsymbol{ H}^{(k)}\boldsymbol{\overline{x}}_{\lambda}^{(k)}\) and \(\boldsymbol{H}^{(k)}=\frac{\partial h(\boldsymbol{x})}{\partial\boldsymbol{x }}\Big{|}_{\boldsymbol{x}=\boldsymbol{\overline{x}}_{\lambda}^{(k)}}\), \(h(\boldsymbol{x})\) represents a shorthand notation to indicate all measurement hypotheses for all connected agents and anchors, and, \(\boldsymbol{\overline{x}}_{\lambda}^{(k)}\) represents the mean value of the state at pseudo time \(\lambda\) and time step \(k\)[43]. For \(\lambda=0\), \(\boldsymbol{\overline{x}}_{\lambda=0}^{(k)}\) corresponds to the mean value of the proposal PDF. Due to the Gaussian assumption, the proposal PDF is fully described by the mean value \(\boldsymbol{\overline{x}}_{\lambda=0}^{(k)}\triangleq\boldsymbol{\overline{x}} ^{(k|k-1)}\) and the covariance matrix \(\boldsymbol{P}^{(k|k-1)}\) of the predicted agent state \(\boldsymbol{x}^{(k|k-1)}\). The predicted mean and the predicted covariance matrix can either be determined by the set of particles, i.e., \(\boldsymbol{\overline{x}}_{\lambda=0}^{(k)}=(1/M)\sum_{m=1}^{M}\boldsymbol{x} _{\lambda=0}^{(k),m}\) and \(\boldsymbol{P}^{(k|k-1)}=(1/M)\sum_{m=1}^{M}(\boldsymbol{x}_{\lambda=0}^{(k),m }-\boldsymbol{\overline{x}}_{\lambda=0}^{(k)})(\boldsymbol{x}_{\lambda=0}^{(k),m }-\boldsymbol{\overline{x}}_{\lambda=0}^{(k)})^{\mathrm{T}}\) or by means of the Kalman-filter prediction equation as it will be described later on in this section.
The particle representation \(\{1/M,\boldsymbol{x}_{\lambda_{\lambda}}^{(k),m}\}_{m=1}^{M}\) of the joint state at pseudo-time-step \(\lambda_{l}\) with \(l\in\{1,\ldots,N_{\lambda}\}\), where \(N_{\lambda}\) is the maximum number of pseudo-time-steps, as well as the mean value of the particle representation can now be determined as
\[\boldsymbol{x}_{\lambda_{l}}^{(k),m}=\boldsymbol{x}_{\lambda_{l-1}}^{(k),m}+ \boldsymbol{\zeta}(\boldsymbol{x}_{\lambda_{l-1}}^{(k),m},\lambda_{l})\Delta_{l} \tag{42}\]
\[\boldsymbol{\overline{x}}_{\lambda_{l}}^{(k)}=\boldsymbol{\overline{x}}_{ \lambda_{l-1}}^{(k)}+\boldsymbol{\zeta}(\boldsymbol{\overline{x}}_{\lambda_{l -1}}^{(k)},\lambda_{l})\Delta_{l} \tag{43}\]
with \(\Delta_{l}=\lambda_{l}-\lambda_{l-1}\) being the step size of the flow process between two consecutive pseudo time steps. This corresponds to the solution of (12).
To evaluate the proposal distribution corresponding to the particles (42) at the end of the flow (\(\lambda=1\)), we make use of
the invertible flow principle introduced in [43]. Following that principle, the weights of the particles are recalculated based on the particle representation at the end (\(\lambda=1\)) and the beginning (\(\lambda=0\)) of the flow, i.e.,
\[w^{(k),m}\propto\frac{f(\mathbf{x}_{\lambda=1}^{(k),m}|\mathbf{x}_{\lambda=0}^{(k),m}) \ f(\mathbf{z}^{(k)}|\mathbf{x}_{\lambda=1}^{(k),m})}{f(\mathbf{x}_{\lambda=0}^{(k),m})}. \tag{44}\]
Here, \(\mathbf{x}_{\lambda=0}^{(k),m}\) is a particle sampled from the proposal PDF, represented by a Gaussian distribution. The posterior PDF of the joint agent state \(\mathbf{\mathbf{x}}^{(k)}\) is then represented by the set of weighted particles \(\{w^{(k),m},\mathbf{x}_{\lambda=1}^{(k),m}\}_{m=1}^{M}\). As final operation, we perform systematic resampling of the joint state resulting in the posterior PDF of the joint agent state at time \(k\) given by \(\{1/M,\mathbf{x}^{(k),m}\}_{m=1}^{M}\)[36] where we drop the index \(\lambda\).
Similar to [43] we calculate the posterior covariance matrix \(\mathbf{P}^{(k)}\) based on an unscented-Kalman-filter (UKF) update step [25, 57] at the sample-based mean value of the particle representation of the posterior PDF \(\mathbf{\overline{x}}_{\lambda=1}^{(k)}=(1/M)\sum_{m=1}^{M}\mathbf{x}_{\lambda=1}^{( k),m}\) and the predicted covariance \(\mathbf{P}^{(k|k-1)}\). The predicted covariance matrix is given by
\[\mathbf{P}^{(k|k-1)}=\tilde{\mathbf{F}}\mathbf{P}^{(k-1)}\tilde{\mathbf{F}}^{\mathrm{T}}+\mathbf{W} \tag{45}\]
where
\[\tilde{\mathbf{F}} =\mathbf{I}_{|\mathcal{C}|}\otimes\mathbf{F} \tag{46}\] \[\mathbf{W} =\mathbf{I}_{|\mathcal{C}|}\otimes\mathbf{Q}\] (47) \[\mathbf{Q} =\mathbf{G}(\mathbf{I}_{3}\odot\sigma_{a}^{2})\mathbf{G}^{\mathrm{T}} \tag{48}\]
The update step is given as
\[\mathbf{P}^{(k)}=\mathbf{P}^{(k|k-1)}-\mathbf{K}\mathbf{P}_{zz}\mathbf{K}^{\mathrm{T}} \tag{49}\]
with \(\mathbf{K}\) being the Kalman gain defined in [25, 57] and the measurement covariance matrix \(\mathbf{P}_{zz}\). More details on the UKF filter can be found in [25, 57].
It is possible to reduce the computation time of the EDH filter by comparing the rank of \(\mathbf{R}^{(k)}\) and \(\mathbf{P}^{(k|k-1)}\). If the rank of \(\mathbf{R}^{(k)}\) is larger than the rank of \(\mathbf{P}^{(k|k-1)}\), (40) is reformulated using the Woodbury matrix identity.
|
2303.00414 | Singularity Models for High Codimension Mean Curvature Flow in
Riemannian Manifolds | We study the mean curvature flow of smooth $n$-dimensional compact
submanifolds with quadratic pinching in a Riemannian manifold
$\mathcal{N}^{n+m}$. Our main focus is on the case of high codimension, $m\geq
2$. We establish a codimension estimate that shows in regions of high
curvature, the submanifold becomes approximately codimension one in a
quantifiable way. This estimate enables us to prove at a singular time of the
flow, there exists a rescaling that converges to a smooth codimension-one
limiting flow in Euclidean space. Under a cylindrical type pinching, this
limiting flow is weakly convex and moves by translation. Our approach relies on
the preservation of the quadratic pinching condition along the flow and a
gradient estimate that controls the mean curvature in regions of high
curvature. These estimates allow us to analyse the behaviour of the flow near
singularities and establish the existence of the limiting flow. | Artemis A. Vogiatzi, Huy T. Nguyen | 2023-03-01T11:09:07Z | http://arxiv.org/abs/2303.00414v1 | # Singularity Models for High Codimension Mean Curvature Flow in Riemannian Manifolds
###### Abstract
We study the mean curvature flow of smooth \(n\)-dimensional compact submanifolds with quadratic pinching in a Riemannian manifold \(\mathcal{N}^{n+m}\). Our main focus is on the case of high codimension, \(m\geq 2\). We establish a codimension estimate that shows in regions of high curvature, the submanifold becomes approximately codimension one in a quantifiable way. This estimate enables us to prove at a singular time of the flow, there exists a rescaling that converges to a smooth codimension-one limiting flow in Euclidean space. Under a cylindrical type pinching, this limiting flow is weakly convex and moves by translation. Our approach relies on the preservation of the quadratic pinching condition along the flow and a gradient estimate that controls the mean curvature in regions of high curvature. These estimates allow us to analyse the behaviour of the flow near singularities and establish the existence of the limiting flow.
## 1 Introduction
Let \(F_{0}\colon\mathcal{M}^{n}\to\mathcal{N}^{n+m}\) be a smooth immersion of a compact manifold \(\mathcal{M}^{n}\). The mean curvature flow starting from \(F_{0}\) is the following family of submanifolds
\[F\colon\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\]
such that
\[\left\{\begin{array}{rl}\partial_{t}F(p,t)&=H(p,t),\ \ \text{for}\ \ p\in \mathcal{M},t\in[0,T)\\ F(p,0)&=F_{0}(p)\end{array}\right. \tag{1.1}\]
where \(H(p,t)\) denotes the mean curvature vector of \(\mathcal{M}_{t}=F_{t}(p)=F(p,t)\) at \(p\). It is well known this is a system of quasilinear weakly parabolic partial differential equations for \(F\). Geometrically, the mean curvature flow is the steepest descent flow for the area functional of a submanifold and hence it is a natural curvature flow.
In the case of codimension one, a crucial step in the study of singularity formation in the mean convex mean curvature flow is the convexity estimate. This states that in regions of large mean curvature, the second fundamental form is almost positive definite.
In the paper Huisken [9] proved that closed convex hypersurfaces under the mean curvature flow evolve into spherical singularities, using Stampacchia iteration, the Michael-Simons-Sobolev inequality together with recursion formulae for symmetric polynomials. In [10], Huisken then generalises this theorem to Riemannian background curvature spaces with strict convexity depending on the background curvature.
In contrast, White [33, 34] uses compactness theorems of geometric measure theory together with the rigidity of strong maximum principle for the second fundamental form. Haslhofer-Kleiner [8] developed an alternative approach to White's results based on Andrews's non-collapsing [2] result for the mean curvature flow.
The case of mean curvature flow of mean convex hypersurfaces in Euclidean space has been investigated by White [33] and Huisken-Sinestrari [13], who have developed a deep and far reaching analysis of the formation of singularities. Recently there has been a number of works generalising these results to high codimension mean curvature flow [23],[19], [20]. The purpose of this paper is to obtain a suitable generalisation of these results for high codimension mean curvature flow in Riemannian submanifolds.
Most of the work done on mean curvature flow in higher codimension uses assumptions on the image of the Gauss map. They have either considered graphical submanifolds, [6],[17], [29],[31], submanifolds with additional symplectic or Lagrangian structure [26],[7],[28],[25], [22] or using convex subsets of the Grassmannian are preserved by the mean curvature flow, [27],[30],[32]. Therefore, we will focus on conditions on the norm of the second fundamental form. In high codimension, the mean curvature flow is more complex than in the hypersurface case, where there is only one normal direction. In the hypersurface setting, the second fundamental form is a symmetric real-valued two-tensor, and the mean curvature is a real-valued function, which simplifies the analysis of the flow. However, the presence of normal curvature complicates reaction terms in the evolution equations for the second fundamental form, making the analysis of high codimension mean curvature flow more challenging.
An alternative condition was introduced by Andrews-Baker in [3]. On a compact submanifold, if \(|H|>0\), there exists a \(c>0\), such that
\[|A|^{2}\leq c|H|^{2}, \tag{1.2}\]
which is preserved by codimension one mean curvature flow. Also, this condition makes sense for all codimensions. In fact, Andrews-Baker showed that for \(c\leq\frac{4}{3n}(<\frac{1}{n-1}\) for \(n<4\)), then (1.2) is preserved along the mean curvature flow. For \(c=\min\{\frac{4}{3n},\frac{1}{n-1}\}\), remarkably they were able to prove convergence to a round sphere. We note the condition \(|A|^{2}-\frac{1}{n-1}|H|^{2}<0,H>0\) implies convexity in codimension one. This lead Andrews-Baker to consider the pinching condition:
\[|A|^{2}-c_{n}|H|^{2}+d_{n}\leq 0, \tag{1.3}\]
which, is preserved by mean curvature flow, for \(c_{n}\leq\frac{4}{3n}\) and \(d_{n}>0\). In the paper [23], a surgery construction was developed allowing high codimension mean curvature flow with
cylindrical pinching to pass through singularities. This generalised the codimension one result of [13] (see also [8]) to high codimension. A key aspect of this surgery procedure is the codimension estimate presented in [20], which shows that near regions of high curvature, singularities become approximately codimension one. Another crucial component is the cylindrical estimate, which shows that nears regions of high curvature, the submanifold becomes approximately cylindrical of the form \(\mathbb{S}^{n-1}\times\mathbb{R}\). These estimates are essential for the surgery to work and allow us to control the geometry of the submanifold in regions of high curvature.
In this paper, we study singularity formation in high codimension mean curvature flow in Riemannian manifolds and will consider the following curvature pinching condition of the length of the second fundamental form
\[|A|^{2}-c_{n}|H|^{2}\leq-d_{n}(K_{1},K_{2},L)\]
for some positive constant \(d_{n}\) depending on the background curvature, where
\[-K_{1}\leq K_{\mathcal{N}}\leq K_{2},\quad|\bar{\nabla}\bar{R}|\leq L,\quad \mathrm{inj}(\mathcal{N})\geq\imath_{\mathcal{N}}.\]
and
\[c_{n}:=\min\left\{\frac{4}{3n},\frac{1}{n-2}\right\},\quad\text{ if }n\geq 5.\]
This was shown to be preserved in the paper of [18] and represents a natural generalisation of Huisken's condition in [10] to high codimension background Riemannian manifolds. We will show in regions of high curvature where the mean curvature is large, the submanifold becomes approximately codimension one in a quantifiable sense. In particular, we will prove a theorem that extends of the main theorem of [20] to Riemannian background spaces.
**Theorem 5.1**.: Let \(F:\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\) be a smooth solution to mean curvature flow (1.1) so that \(F_{0}(p)=F(p,0)\) is compact and quadratically pinched. Then \(\forall\varepsilon>0,\exists H_{0}>0\), such that if \(f\geq H_{0}\), then
\[\left|A^{-}\right|^{2}\leq\varepsilon f+C_{\varepsilon}\]
\(\forall t\in[0,T)\) where \(C_{\varepsilon}=C_{\varepsilon}(n,m)\).
Assuming the quadratic pinching condition, we prove singularity models for the pinched flow must always have codimension one, regardless of the original flow's codimension.
The outline of the paper is as follows. In section 2, we give all the technical tools needed for our work and set up our notation. In section 3, we give the proof for the preservation of the quadratic pinching condition along the mean curvature flow. In section 4, we prove the gradient estimate. The importance of the gradient estimate is that it allows us to control the mean curvature and hence the full second fundamental form on a neighbourhood of fixed size. In section 5, we prove the codimension estimate, which is the main theorem
of this paper. This means that in regions of high curvature, the submanifold becomes codimension one quantitatively. In section 6, we show how the codimension estimate in Riemannian manifolds actually falls into the Euclidean case. Finally, in section 7, we prove the codimension estimate in the case of constant negative curvature.
**Acknowledgements.** The first author would like to acknowledge the support of the EPSRC through the grant EP/S012907/1.
## 2 Preliminaries
This chapter presents the necessary preliminary results and establishes our notation. We derive evolution equations for the length and squared length of the second fundamental form, as well as for the mean curvature vectors, in an arbitrary Riemannian background space of any codimension. Additionally, we provide a proof for a Kato-type inequality we will utilise throughout this paper. Let \(F\colon\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\) be an \(n\)-dimensional smooth, closed and connected submanifold in an \((n+m)\)-dimensional smooth complete Riemannian manifold. We adopt the following convention for indices:
\[1\leq i,j,k,\cdots\leq n,\ 1\leq a,b,c,\cdots\leq n+m\quad\text{and}\ \ 1\leq\alpha,\beta,\gamma,\cdots\leq m.\]
We denote by \(A\) to be the normal vector valued second fundamental form tensor and denote by \(H\) the mean curvature vector which is the trace of the second fundamental form given by \(H^{\alpha}=\sum_{i}A^{\alpha}_{ii}\). The tracefree second fundamental form \(\mathring{A}\) is defined by \(\mathring{\mathring{A}}=A-\frac{1}{n}Hg\), whose components are given by \(\mathring{A}^{\alpha}_{ij}=A^{\alpha}_{ij}-\frac{1}{n}H^{\alpha}g_{ij}\). Obviously, we have \(\sum_{i}\mathring{A}^{\alpha}_{ii}=0\).
We define the principal normal direction to be given by \(\nu_{1}=\frac{H}{|H|}\). This is well defined since in our setting \(|H|\neq 0\). We denote by \(A^{-}\) the second fundamental form tensor orthogonal to the principal direction and \(h\) to be the tensor valued second fundamental form in the principal direction, that is \(h_{ij}=\frac{\langle A_{ij},H\rangle}{|H|}\). Therefore, we have \(A=A^{-}+h\nu_{1}\). Also, \(A^{+}_{ij}=\langle A_{ij},\nu_{1}\rangle\nu_{1}\). From the definition of \(A^{-}\), it is natural to define the connection \(\mathring{\nabla}^{\perp}\) acting on \(A^{-}\), by
\[\mathring{\nabla}^{\perp}_{i}A^{-}_{jk}:=\nabla^{\perp}_{i}A^{-}_{jk}- \langle\nabla^{\perp}_{i}A^{-}_{jk},\nu_{1}\rangle\nu_{1}.\]
We denote \(\mathring{h}\) to be the traceless part of the second fundamental form in the principal direction. For the choice of \(\nu_{1}\), we have for \(\alpha\geq 2\) and \(H^{\alpha}=\operatorname{tr}A^{\alpha}=0\). The traceless second fundamental form can be rewritten as \(\mathring{\mathring{A}}=\sum_{\alpha}\mathring{A}^{\alpha}\nu_{\alpha}\), where
\[\mathring{h}=h-\frac{|H|}{n}Id,\quad\text{for}\ \ \alpha=1\]
and
\[\mathring{A}^{-}=A^{\alpha},\quad\text{for}\ \ \alpha\geq 2.\]
We set
\[|A|^{2}=|h|^{2}+|A^{-}|^{2}\quad\text{and}\ \ |\hat{A}|^{2}=|\hat{h}|^{2}+|\hat{A}^{-}|^{2}.\]
Let
\[R_{ijkl}=g\big{(}R(e_{i},e_{j})e_{k},e_{l}\big{)},\ \ \bar{R}_{ abcd}=\langle\bar{R}(e_{a},e_{b})e_{c},e_{d}\rangle\ \ \text{and}\ \ R_{ij\alpha\beta}^{\perp}=\langle R^{\perp}(e_{i},e_{j})e_{\alpha},e_{ \beta}\rangle.\]
**Proposition 2.1** ([3], Section 3).: _With the summation convention, the evolution equations of \(A_{ij}\) and \(H\) are_
\[\Big{(}\partial_{t}-\Delta\Big{)}A_{ij} =\sum_{p,q}\langle A_{ij},A_{pq}\rangle A_{pq}+\sum_{p,q}\langle A _{iq},A_{pq}\rangle A_{pj}+\sum_{p,q}\langle A_{jq},A_{pq}\rangle A_{pi}-2\sum _{p,q}\langle A_{ip},A_{jq}\rangle A_{pq}\] \[+2\sum_{p,q}\bar{R}_{ipq}A_{pq}-\sum_{k,p}\bar{R}_{kjkp}A_{pi}- \sum_{k,p}\bar{R}_{kikp}A_{pj}+\sum_{k,\alpha,\beta}A_{ij}^{\alpha}\bar{R}_{ k\alpha k\beta}\nu_{\beta}\] \[-2\sum_{p,\alpha,\beta}A_{jp}^{\alpha}\bar{R}_{ip\alpha\beta} \nu_{\beta}-2\sum_{p,\alpha,\beta}A_{ip}^{\alpha}\bar{R}_{jp\alpha\beta}\nu_ {\beta}\] \[+\sum_{k,\beta}\bar{\nabla}_{k}\bar{R}_{kij\beta}\nu_{\beta}- \sum_{k,\beta}\bar{\nabla}_{i}\bar{R}_{jk\beta}\nu_{\beta}, \tag{2.1}\]
\[\Big{(}\partial_{t}-\Delta\Big{)}H=\sum_{p,q}\langle H,A_{pq}\rangle A_{pq}+ \sum_{k,\alpha,\beta}H^{\alpha}\bar{R}_{k\alpha k\beta}\nu_{\beta}. \tag{2.2}\]
**Lemma 2.2** ([4], Section 5.1).: _Let us consider a family of immersions \(F\colon\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\) moving by mean curvature flow. Then, we have the following evolution equations_
\[\partial_{t}d\mu_{t}=-|H|^{2}d\mu_{t}, \tag{2.3}\]
\[\partial_{t}|A|^{2} =\Delta|A|^{2}-2|\nabla A|^{2}+2\sum_{\alpha,\beta}\big{(}\sum_{ i,j}A_{ij}^{\alpha}A_{ij}^{\beta}\big{)}^{2}+2\sum_{i,j,\alpha,\beta}\Big{(} \sum_{p}\big{(}A_{ip}^{\alpha}A_{jp}^{\beta}-A_{jp}^{\alpha}A_{ip}^{\beta}\big{)} \Big{)}^{2}\] \[+4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{\alpha}A_{pq}^{\alpha} A_{ij}^{\alpha}\big{)}-4\sum_{j,k,p}\bar{R}_{kjkp}\big{(}\sum_{i,\alpha}A_{pi}^{ \alpha}A_{ij}^{\alpha}\big{)}+2\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta} \big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij}^{\beta}\big{)}\] \[-8\sum_{j,p,\alpha,\beta}\bar{R}_{jp\alpha\beta}\big{(}\sum_{i}A_ {ip}^{\alpha}A_{ij}^{\beta}\big{)}+2\sum_{i,j,k,\beta}\bar{\nabla}_{k}\bar{R}_ {kij\beta}A_{ij}^{\beta}-2\sum_{i,j,k,\beta}\bar{\nabla}_{i}\bar{R}_{jkk \beta}A_{ij}^{\beta}, \tag{2.4}\]
\[\partial_{t}|H|^{2}=\Delta|H|^{2}-2|\nabla H|^{2}+2\sum_{i,j}\big{(}\sum_{ \alpha}H^{\alpha}A_{ij}^{\alpha}\big{)}^{2}+2\sum_{k,\alpha,\beta}\bar{R}_{ k\alpha k\beta}H^{\alpha}H^{\beta}. \tag{2.5}\]
By Berger's inequality,
\[|\bar{R}_{abc}| \leq\frac{1}{2}(K_{1}+K_{2}),\quad\text{for $a\neq b$} \tag{2.6}\] \[|\bar{R}_{abcd}| \leq\frac{2}{3}(K_{1}+K_{2}),\ \ \text{for all distinct indices $a,b,c,d$}.\]
**Lemma 2.3** ([18], Lemma 3.1).: _For any \(\eta>0\) we have the following inequalities_
\[|\nabla^{\perp}A|^{2}\geq\bigg{(}\frac{3}{n+2}-\eta\bigg{)}|\nabla^{\perp}H|^{2} -\frac{2}{n+2}\left(\frac{2}{n+2}\eta^{-1}-\frac{n}{n-1}\right)|w|^{2} \tag{2.7}\]
_and_
\[|\nabla^{\perp}A|^{2}-\frac{1}{n}|\nabla^{\perp}H|^{2} \geq\frac{n-1}{2n+1}|\nabla^{\perp}A|^{2}-\frac{2n}{(n-1)(2n+1)}|w |^{2}\] \[\geq\frac{n-1}{2n+1}|\nabla^{\perp}A|^{2}-C(n,d)\left(K_{1}+K_{2} \right)^{2}. \tag{2.8}\]
_Here \(w=\sum_{i,j,\alpha}\bar{R}_{\alpha jij}e_{i}\otimes\omega_{\alpha}\) and \(C(n,d)=\frac{n^{4}d}{2(n-1)(2n+1)}\)._
Proof.: Inequality (2.8) follows from (2.7) with \(\eta=\frac{n-1}{n(n+2)}\). To prove (2.7), we set
\[E_{ijk} =\frac{1}{n+2}\left(\nabla_{i}^{\perp}Hg_{jk}+\nabla_{j}^{\perp }Hg_{ik}+\nabla_{k}^{\perp}Hg_{ij}\right)\] \[-\frac{2}{(n+2)(n-1)}w_{i}g_{jk}+\frac{n}{(n+2)(n-1)}\left(w_{j} g_{ik}+w_{k}g_{ij}\right). \tag{2.9}\]
Let \(F_{ijk}=\nabla_{i}^{\perp}h_{jk}-E_{ijk}\). By the Codazzi equation we have \(\langle E_{ijk},F_{ijk}\rangle=0\). Hence, \(|\nabla^{\perp}A|^{2}\geq|E|^{2}\). By a direct computation, we have
\[|E|^{2}=\frac{3}{n+2}|\nabla^{\perp}H|^{2}+\frac{2n}{(n+2)(n-1)}|w|^{2}+\frac {4}{n+2}\langle\nabla^{\perp}H,w\rangle.\]
But from Cauchy-Schwarz inequality and Young's inequality for products, we have
\[\frac{4}{n+2}\langle\nabla^{\perp}H,w\rangle\geq-\eta|\nabla^{\perp}H|^{2}- \frac{4}{(n+2)^{2}}\eta^{-1}|w|^{2}.\]
Plugging the above inequality into (2.9), we get (2.7).
**Proposition 2.4** ([20], Proposition 2.2).: \[|\nabla^{\perp}A|^{2} =\sum_{i,j,k}|\hat{\nabla}_{i}^{\perp}A_{jk}^{-}+h_{jk}\nabla_{i} ^{\perp}\nu_{1}|^{2}+\sum_{i,j,k}|\langle\nabla_{i}^{\perp}A_{jk}^{-},\nu_{1} \rangle+\nabla_{i}h_{jk}|^{2}.\] (2.10) \[|\nabla^{\perp}H|^{2} =|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}+|\nabla|H||^{2}.\] \[|\nabla^{\perp}A^{-}|^{2} =|\hat{\nabla}^{\perp}A^{-}|^{2}+|\langle\nabla^{\perp}A^{-},\nu_ {1}\rangle|^{2}.\]
We will use these identities in Sections 5 and 8. It is very useful to consider the implications of the Codazzi equation for the decomposition of \(\nabla_{i}^{\perp}A_{jk}\) above. Projecting the Codazzi equation onto \(E_{1}\) and \(\hat{E}\) implies the both of the tensors
\[\nabla_{i}h_{jk}+\langle\nabla_{i}^{\perp}A_{jk}^{-},\nu_{1}\rangle\quad\text { and }\quad\hat{\nabla}_{i}^{\perp}A_{jk}^{-}+h_{jk}\nabla_{i}^{\perp}\nu_{1}\]
are symmetric in \(i,j,k\). Consequently, it is equivalent to trace over \(j,k\) or trace over \(i,k\), and this implies
\[\sum_{k=1}^{n}(\nabla_{k}h_{ik}+\langle\nabla_{k}^{\perp}A_{ik}^{-},\nu_{1}\rangle)=\nabla_{i}|H|, \tag{2.11}\]
\[\sum_{k=1}^{n}(\hat{\nabla}_{k}^{\perp}A_{ik}^{-}+h_{ik}\nabla_{ k}^{\perp}\nu_{1})=|H|\nabla_{i}^{\perp}\nu_{1}. \tag{2.12}\]
## 3 Preservation of the Quadratic Pinching
This section demonstrates the quadratic pinching condition (3.2) is preserved throughout the mean curvature flow, for a suitable positive constant \(d_{n}\) that depends on the background curvature. The proof, presented in [18], generalises Huisken's pinching condition [10] to high codimension. As we require a slight refinement of this pinching, we provide the proof for completeness.
**Theorem 3.1** ([18], Section 3).: _Let \(F\colon\mathcal{M}^{n}\to\mathcal{N}^{n+m}\) be an \(n\)-dimensional, smooth, closed and connected submanifold in an \(n+m\)-dimensional smooth complete Riemannian manifold, such that_
\[-K_{1}\leq K_{\mathcal{N}}\leq K_{2},\quad|\bar{\nabla}\bar{R}| \leq L,\quad\mathrm{inj}(\mathcal{N})\geq\imath_{\mathcal{N}}. \tag{3.1}\]
_Then, there is a constant \(d_{n}=d_{n}(K_{1},K_{2},L)\) depending only on the dimension \(n\), the bounds for the sectional curvature \(K_{1},K_{2}\) and the bound for the derivative of the curvature \(L\), such that for \(c_{n}\leq\frac{4}{3n}\)_
\[|A|^{2}\leq c_{n}|H|^{2}-d_{n} \tag{3.2}\]
_is preserved by the mean curvature flow._
Proof.: Set \(g=|A|^{2}-c_{n}|H|^{2}+d_{n}\), where \(c_{n}\leq\frac{4}{3n},d_{n}>d\) where \(d\) is a positive constant to be determined. We compute the evolution equation for \(g\) along the mean curvature flow and show if \(g=0\) at a point in the space-time, then \((\partial_{t}-\Delta)g\) is negative at this point. By the maximum principle, the theorem follows. More precisely, by Lemma 2.2, we have
\[\left(\partial_{t}-\Delta\right)g=-2(|\nabla A|^{2}-c_{n}|\nabla H |^{2})+2R_{1}-2c_{n}R_{2}+P_{\alpha}, \tag{3.3}\]
where
\[R_{1}=\sum_{\alpha,\beta}\big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij}^ {\beta}\big{)}^{2}+\sum_{i,j,\alpha,\beta}\Big{(}\sum_{p}\big{(}A_{ip}^{\alpha }A_{jp}^{\beta}-A_{jp}^{\alpha}A_{ip}^{\beta}\big{)}\Big{)}^{2},\]
\[R_{2}=\sum_{i,j}\big{(}\sum_{\alpha}H^{\alpha}A^{\alpha}_{ij}\big{)}^{2}\]
and
\[P_{\alpha}=I+II+III+IV, \tag{3.4}\]
with
\[I=4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{\alpha}A^{\alpha}_{pq}A^{\alpha}_{ ij}\big{)}-4\sum_{j,k,p}\bar{R}_{kjkp}\big{(}\sum_{i,\alpha}A^{\alpha}_{pi}A^{ \alpha}_{ij}\big{)},\]
\[II=2\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}\big{(}\sum_{i,j}A^{\alpha}_{ ij}A^{\beta}_{ij}\big{)}-2c_{n}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{ \alpha}H^{\beta},\]
\[III=-8\sum_{j,p,\alpha,\beta}\bar{R}_{jp\alpha\beta}\big{(}\sum_{i}A^{\alpha}_ {ip}A^{\beta}_{ij}\big{)},\]
\[IV=2\sum_{i,j,k,\beta}\bar{\nabla}_{k}\bar{R}_{kij\beta}A^{\beta}_{ij}-2\sum_ {i,j,k,\beta}\bar{\nabla}_{i}\bar{R}_{jkk\beta}A^{\beta}_{ij}.\]
To estimate the reaction terms, it is convenient to work with the traceless part of the second fundamental form \(\mathring{A}=A-\frac{1}{n}HId\). The lengths of \(A\) and \(\mathring{A}\) are related by
\[|\mathring{A}|^{2}=|A|^{2}-\frac{1}{n}|H|^{2}.\]
At the point where \(g=0\), that is \(|A|^{2}=c_{n}|H|^{2}-d_{n}\), the mean curvature vector is not zero. We choose a local orthonormal frame \(\{\nu_{\alpha},1\leq\alpha\leq m\}\) for the normal bundle, such that \(\nu_{1}=\frac{H}{|H|}\), the principal normal direction. For the choice of \(\nu_{1}\), we have \(H^{+}=\operatorname{tr}A^{+}=|H|\) and for \(\alpha\geq 2\) and \(H^{\alpha}=\operatorname{tr}A^{\alpha}=0\). The traceless second fundamental form can be rewritten as \(\mathring{A}=\sum_{\alpha}\mathring{A}^{\alpha}\nu_{\alpha}\), where
\[\mathring{h}=h-\frac{|H|}{n}Id,\quad\text{for}\ \ \alpha=1\]
and
\[\mathring{A}^{-}=A^{\alpha},\quad\text{for}\ \ \alpha\geq 2.\]
We set
\[|A|^{2}=|h|^{2}+|A^{-}|^{2}\quad\text{and}\ \ |\mathring{A}|^{2}=|\mathring{h}|^{ 2}+|\mathring{A}^{-}|^{2}.\]
Since \(|A|^{2}=c_{n}|H|^{2}-d_{n}\) at this point, we have \(|H|^{2}=\frac{|\hat{A}|^{2}+d_{n}}{c_{n}-\frac{1}{n}}\) and from [3] we see that
\[\begin{split} 2R_{1}-2c_{n}R_{2}&\leq\left(6-\frac{2}{n(c_{n} -\frac{1}{n})}\right)|\hat{h}|^{2}|\hat{A}^{-}|^{2}+\left(3-\frac{2}{n(c_{n}- \frac{1}{n})}\right)|\hat{A}^{-}|^{4}\\ &-\frac{2c_{n}d_{n}}{c_{n}-\frac{1}{n}}|\hat{h}|^{2}-\frac{4d_{n}} {n(c_{n}-\frac{1}{n})}|\hat{A}^{-}|^{2}-\frac{2d_{n}^{2}}{n(c_{n}-\frac{1}{n})}.\end{split} \tag{3.5}\]
To estimate \(I\), for a fixed \(\alpha\) we choose a basis for the tangent space \(e_{i}\)'s, such that \(h\) is diagonal. Denote by \(\lambda_{i}\) and \(\hat{\lambda}_{i}\) the diagonal entries of \(h\) and \(\hat{h}\), respectively. Therefore, \(A^{\alpha}_{ij}=\lambda^{\alpha}_{i}\delta_{ij}\).
\[I =4\sum_{i,j,p,q}\bar{R}_{ipip}A^{\alpha}_{pq}A^{\alpha}_{ij}-4 \sum_{j,k,p}\bar{R}_{kjkp}\big{(}\sum_{i,\alpha}A^{\alpha}_{pi}A^{\alpha}_{ij} \big{)}\] \[=4\sum_{i,p}\bar{R}_{ipip}\big{(}\lambda^{\alpha}_{i}\lambda^{ \alpha}_{p}-(\lambda^{\alpha}_{i})^{2}\big{)}\] \[=-2\sum_{i,p}\bar{R}_{ipip}\big{(}\lambda^{\alpha}_{i}-\lambda^{ \alpha}_{p}\big{)}^{2}\] \[\leq 4nK_{1}|\hat{A}^{\alpha}|^{2}.\]
Hence, we get
\[I\leq 4nK_{1}(|\hat{h}|^{2}+|\hat{A}^{-}|^{2}). \tag{3.6}\]
By the choice of \(\nu_{1}\), we have \(II=II_{1}+II_{2}+II_{3}\), where
\[II_{1}=2\sum_{i,j,k}\bar{R}_{k1k1}(A^{+}_{ij})^{2}-2c_{n}\sum_{k}\bar{R}_{k1k1 }(H^{+})^{2},\]
\[II_{2}=4\sum_{k,\alpha\geq 2}\bar{R}_{k\alpha k1}\big{(}\sum_{i,j}A^{\alpha}_{ ij}A^{+}_{ij}\big{)}-4c_{n}\sum_{k,\alpha\geq 2}\bar{R}_{k\alpha k1}H^{+}H^{ \alpha},\]
\[II_{3}=2\sum_{k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta}\big{(}\sum_{i,j}A^{ \alpha}_{ij}A^{\beta}_{ij}\big{)}-2c_{n}\sum_{k,\alpha,\beta\geq 2}\bar{R}_{k \alpha k\beta}H^{\alpha}H^{\beta}.\]
Since, \(|H|^{2}=\frac{|\hat{A}|^{2}+d_{n}}{c_{n}-\frac{1}{n}}\) at that point, we have
\[II_{1} \leq 2nK_{2}|h|^{2}+2nc_{n}K_{1}|H|^{2}\] \[=2nK_{2}|\hat{h}|^{2}+2(nc_{n}K_{1}+K_{2})\frac{|\hat{A}|^{2}+d_ {n}}{c_{n}-\frac{1}{n}}\] \[=\left(2nK_{2}+\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}} \right)|\hat{h}|^{2}+\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}}|\hat{A}^{- }|^{2}+\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}}d_{n}. \tag{3.7}\]
Since \(H^{\alpha}=0\), for \(\alpha\geq 2\), we have the following estimates for \(II_{2},II_{3}\).
\[II_{2} =4\sum_{k,\alpha\geq 2}\bar{R}_{kak1}\big{(}\sum_{i,j}A_{ij}^{ \alpha}A_{ij}^{+}\big{)}\] \[=4\sum_{k,\alpha\geq 2}\bar{R}_{kak1}\big{(}\sum_{i,j}\hat{A}_{ij}^{ \alpha}\hat{A}_{ij}^{+}\big{)}\] \[\leq(K_{1}+K_{2})\sum_{k,\alpha\geq 2}\big{(}\frac{1}{\rho}\sum_{ i,j}(\hat{A}_{ij}^{\alpha})^{2}+\rho\sum_{i,j}(\hat{A}_{ij}^{+})^{2}\big{)}\] \[=\rho n(m-1)(K_{1}+K_{2})|\hat{h}|^{2}+\frac{n}{\rho}(K_{1}+K_{2} )|\hat{A}^{-}|^{2}, \tag{3.8}\]
for any positive constant \(\rho\).
\[II_{3} =2\sum_{k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta}\big{(}\sum_{ i,j}A_{ij}^{\alpha}A_{ij}^{\beta}\big{)}\] \[=2\sum_{k,\alpha\geq 2}\bar{R}_{k\alpha k\alpha}\big{(}\sum_{ i,j}(A_{ij}^{\alpha})^{2}\big{)}+2\sum_{k,\alpha,\beta\geq 2,\alpha\neq\beta}\bar{R} _{k\alpha k\beta}\big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij}^{\beta}\big{)}\] \[\leq 2nK_{2}|\hat{A}^{-}|^{2}+2\sum_{k,\alpha,\beta\geq 2, \alpha\neq\beta}\bar{R}_{k\alpha k\beta}\big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij} ^{\beta}\big{)}\] \[\leq 2nK_{2}|\hat{A}^{-}|^{2}+\sum_{k,\alpha,\beta\geq 2, \alpha\neq\beta}|\bar{R}_{k\alpha k\beta}|\sum_{i,j}\big{(}(A_{ij}^{\alpha})^{ 2}+(A_{ij}^{\beta})^{2}\big{)}\] \[\leq 2nK_{2}|\hat{A}^{-}|^{2}+(K_{1}+K_{2})\sum_{i,j,k,\alpha, \beta\geq 2,\alpha\neq\beta}\big{(}(A_{ij}^{\alpha})^{2}+(A_{ij}^{\beta})^{2} \big{)}\] \[=2nK_{2}|\hat{A}^{-}|^{2}+n(m-2)(K_{1}+K_{2})|\hat{h}|^{2}. \tag{3.9}\]
From (3.7),(3.8) and (3.9), we get the following estimate for \(II\) :
\[II \leq\left(2nK_{2}+\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}}+ \big{(}\rho n(m-1)+n(m-2)\big{)}(K_{1}+K_{2})\right)|\hat{h}|^{2} \tag{3.10}\] \[+\left(\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}}+\frac{n}{ \rho}(K_{1}+K_{2})+2nK_{2}\right)|\hat{A}^{-}|^{2}+\frac{2(nc_{n}K_{1}+K_{2})} {c_{n}-\frac{1}{n}}d_{n}.\]
For \(III\) we have \(III=III_{1}+III_{2}\), where
\[III_{1}=-16\sum_{j,p,\alpha\geq 2}\bar{R}_{jp\alpha 1}\big{(}\sum_{i}A_{ip}^{ \alpha}A_{ij}^{+}\big{)},\]
\[III_{2}=-8\sum_{j,p,\alpha,\beta\geq 2,\alpha\neq\beta}\bar{R}_{jp\alpha\beta} \big{(}\sum_{i}A_{ip}^{\alpha}A_{ij}^{\beta}\big{)}.\]
We have the following estimates for arbitrary positive constant \(\rho\) :
\[III_{1}=-16\sum_{j,p,\alpha\geq 2}\bar{R}_{jp\alpha 1}\sum_{i}\hat{A}_{ip}^{ \alpha}\big{(}\hat{A}_{ij}^{+}+\frac{|H|}{n}\delta_{ij}\big{)}\]
\[=-16\sum_{j\neq p,\alpha\geq 2}\bar{R}_{jp\alpha 1}\sum_{i}\mathring{A} ^{\alpha}_{ip}\mathring{A}^{+}_{ij}\] \[\leq\frac{16}{3}(K_{1}+K_{2})\sum_{i,j\neq p,\alpha\geq 2}\big{(} \frac{1}{\rho}(\mathring{A}^{\alpha}_{ip})^{2}+\rho(\mathring{A}^{+}_{ij})^{2} \big{)}\] \[=\frac{16}{3}\rho(n-1)(m-1)(K_{1}+K_{2})|\mathring{h}|^{2}+\frac{1 6}{3\rho}(n-1)(K_{1}+K_{2})|\mathring{A}^{-}|^{2}. \tag{3.11}\]
For the second inequality, we use \(\sum_{j,p}\bar{R}_{jp\alpha 1}\mathring{A}^{\alpha}_{jp}=0\), since \(\bar{R}_{jp\alpha 1}\) is anti-symmetric for \(j,p\) and \(\mathring{A}^{\alpha}_{jp}\) is symmetric for \(j,p\). For any fixed \(\beta\geq 2\), we choose \(\nu_{i}\)'s, such that \(\mathring{A}^{\beta}_{ij}=\mathring{\lambda}^{\beta}_{i}\delta_{ij}\). Then,
\[III_{2} =-8\sum_{j\neq p,\beta\geq 2}\sum_{\alpha\geq 2,\alpha\neq\beta} \bar{R}_{jp\alpha\beta}\mathring{A}^{\alpha}_{jp}\mathring{\lambda}^{\beta}_{j}\] \[\leq\frac{8}{3}(K_{1}+K_{2})\sum_{\beta\geq 2}\left((n-1)^{ \frac{1}{2}}\sum_{j\neq p,\alpha\geq 2,\alpha\neq\beta}(\mathring{A}^{\alpha}_{jp})^{ 2}+\frac{1}{(n-1)^{\frac{1}{2}}}\sum_{j\neq p,\alpha\geq 2,\alpha\neq\beta}( \mathring{\lambda}^{\beta}_{j})^{2}\right)\] \[\leq\frac{8}{3}(K_{1}+K_{2})\Big{(}(n-1)^{\frac{1}{2}}(m-2)| \mathring{A}^{-}|^{2}+\sum_{\beta\geq 2}(n-1)^{\frac{1}{2}}(m-2)|\mathring{A}^{ \beta}|^{2}\Big{)}\] \[=\frac{8}{3}(n-1)^{\frac{1}{2}}(m-2)(K_{1}+K_{2})|\mathring{A}^{- }|^{2}. \tag{3.12}\]
From (3) and (3), we have
\[III \leq\frac{16}{3}\rho(n-1)(m-1)(K_{1}+K_{2})|\mathring{h}|^{2}\] \[+\left(\frac{16}{3\rho}(n-1)+\frac{8}{3}(n-1)^{\frac{1}{2}}(m-2) \right)(K_{1}+K_{2})|\mathring{A}^{-}|^{2}. \tag{3.13}\]
For \(IV\), we choose \(\nu_{i}\)'s, such that \(A^{+}_{ij}=\lambda_{i}\delta_{ij}\). If \(K_{1}+K_{2}\neq 0\), we have
\[IV =2\sum_{i,k}\bar{\nabla}_{k}\bar{R}_{kii1}(\lambda_{i}-\lambda_{ k})-2\sum_{i,j,k,\beta\geq 2}(\bar{\nabla}_{k}\bar{R}_{kij\beta}-\bar{ \nabla}_{i}\bar{R}_{jk\delta\beta})\mathring{A}^{\beta}_{ij}\] \[\leq\sum_{i,k}\left(\frac{1}{\theta}(\bar{\nabla}_{k}\bar{R}_{kii 1})^{2}+\theta(\lambda_{i}-\lambda_{k})^{2}\right)+\sum_{i,j,k,\beta\geq 2} \left(\frac{2}{\vartheta}\big{(}(\bar{\nabla}_{k}\bar{R}_{kij\beta})^{2}+( \bar{\nabla}_{i}\bar{R}_{jk\delta\beta})^{2}\big{)}+\vartheta(\mathring{A}^{ \beta}_{ij})^{2}\right)\] \[\leq\frac{L^{2}}{\theta}+\theta|\mathring{h}|^{2}+\frac{4L^{2}}{ \vartheta}+n\vartheta|\mathring{A}^{-}|^{2}, \tag{3.14}\]
for positive constants \(\theta,\vartheta\). If \(K_{1}+K_{2}=0\), then \(L=0\) and we may choose \(\theta,\vartheta=0\). Combining (3.6),(3.10),(3) and (3), we have
\[(\partial_{t}-\Delta)\,g \leq-2(|\nabla A|^{2}-c_{n}|\nabla H|^{2})+\left(6-\frac{2}{n(c_{ n}-\frac{1}{n})}\right)|\mathring{h}|^{2}|\mathring{A}^{-}|^{2}+\left(3-\frac{2}{n(c_{ n}-\frac{1}{n})}\right)|\mathring{A}^{-}|^{4}\] \[-\frac{2c_{n}d_{n}}{c_{n}-\frac{1}{n}}|\mathring{h}|^{2}-\frac{4d _{n}}{n(c_{n}-\frac{1}{n})}|\mathring{A}^{-}|^{2}-\frac{2d_{n}^{2}}{n(c_{n}- \frac{1}{n})}\]
\[+C_{1}|\hat{h}|^{2}+C_{2}|\hat{A}^{-}|^{2}+C_{3}d_{n}+C_{4}. \tag{3.15}\]
Here,
\[C_{1} =4nK_{1}+2nK_{2}+\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}}\] \[+\Big{(}\rho n(m-1)+n(m-2)+\frac{16}{3}\rho(n-1)(m-1)\Big{)}(K_{1 }+K_{2})+\theta,\]
\[C_{2} =4nK_{1}+2nK_{2}+\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}}\] \[+\Big{(}\frac{n}{\rho}+\frac{16}{3\rho}(n-1)+\frac{8}{3}(n-1)^{ \frac{1}{2}}(m-2)\Big{)}(K_{1}+K_{2})+n\vartheta,\]
\[C_{3}=\frac{2(nc_{n}K_{1}+K_{2})}{c_{n}-\frac{1}{n}},\]
\[C_{4}=\frac{L^{2}}{\theta}+\frac{4L^{2}}{\vartheta},\quad\text{for}\ \ K_{1}+K_{2}\neq 0\quad\text{and}\ \ C_{4}=0,\quad\text{for}\ \ K_{1}+K_{2}=0,\]
From the Kato type inequality in (2.3), we have that
\[-2(|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2}) \leq-2\Big{(}\frac{3}{n+2}-\eta-c_{n}\Big{)}|\nabla^{\perp}H|^{2}\] \[+\Big{(}\frac{8}{\eta(n+2)^{2}}-\frac{4n}{(n+2)(n-1)}\Big{)}|w|^ {2}\] \[\leq 0, \tag{3.16}\]
for a suitable positive constant \(\eta\). If \(K_{1}+K_{2}\neq 0\), set
\[d=\max\left\{\frac{C_{1}}{2c_{n}}(c_{n}-\frac{1}{n}),\frac{C_{2}}{4}n(c_{n}- \frac{1}{n}),\frac{1}{4}n(c_{n}-\frac{1}{n})\left(C_{3}+\sqrt{C_{3}^{2}+\frac{ 8C_{4}}{n(c_{n}-\frac{1}{n})}}\right)\right\},\]
with \(\rho=\rho=\theta=\vartheta=1\). If \(K_{1}+K_{2}=0\), set \(d=0\). So, if \(d_{n}>d\), we have
\[\left(\partial_{t}-\Delta\right)g<0.\]
Then, by the maximum principle, \(|A|^{2}\leq c_{n}|H|^{2}-d_{n}\) is preserved along the mean curvature flow.
**Remark 3.2**.: We see that as \(K_{1},K_{2},L\to 0\) that \(d_{n}\to 0\). In particular, since any sufficiently small region of a smooth Riemannian manifold is locally Euclidean we see that perturbations of manifolds satisfying \(|A|^{2}-c_{n}|H|^{2}<0\) in an exponential neighbourhood of any point satisfy this inequality hence there are many submanifolds to which this inequality applies.
## 4 Gradient Estimate
This section presents a proof of the gradient estimate for the mean curvature flow. We establish this estimate directly from the quadratic curvature bound \(|A|^{2}<c_{n}|H|^{2}-d_{n}\), where \(c\leq\frac{4}{3n}\), without relying on the asymptotic cylindrical estimates. In fact, we demonstrate the cylindrical estimates follow as a consequence of the gradient estimates we derive here. These estimates are pointwise gradient estimates that rely solely on the mean curvature (or, equivalently, the second fundamental form) at a point and not on the maximum of curvature, as is the case with more general parabolic-type derivative estimates. Specifically, we obtain
\[\frac{3}{n+2}-c>0.\]
This inequality enables us to combine the derivative terms in the evolution equation of \(|A|^{2}-c_{n}|H|^{2}+d_{n}\) with the Kato-type inequality from Lemma 2.3.
**Theorem 4.1** (cf.[13], Section 6).: _Let \(\mathcal{M}_{t},t\in[0,T)\) be a closed \(n\)-dimensional quadratically bounded solution to the mean curvature flow in the Riemannian manifold \(\mathcal{N}^{n+m}\), with \(n\geq 8\), that is_
\[|A|^{2}-c|H|^{2}+d<0,|H|>0\]
_with \(c=\frac{1}{n-2}\). Then, there exists a constant \(\gamma_{1}=\gamma_{1}(n,\mathcal{M}_{0})\) and a constant \(\gamma_{2}=\gamma_{2}(n,\mathcal{M}_{0})\), such that the flow satisfies the uniform estimate_
\[|\nabla A|^{2}\leq\gamma_{1}|A|^{4}+\gamma_{2},\]
_for every \(t\in[0,T)\)._
Proof.: We choose here \(\kappa_{n}=\left(\frac{3}{n+2}-c\right)>0\). Since \(n\geq 8\), \(\kappa_{n}\) is strictly positive. We will consider here the evolution equation for
\[\frac{|\nabla A|^{2}}{g^{2}},\]
where \(g=c|H|^{2}-|A|^{2}-d>0\). Since \(|A|^{2}-c|H|^{2}<0,|H|>0\) and \(\mathcal{M}_{0}\) is compact, there exists an \(\eta(\mathcal{M}_{0})>0,C_{\eta}(\mathcal{M}_{0})>0\), so that
\[\left(c-\eta\right)|H|^{2}-|A|^{2}\geq C_{\eta}>0. \tag{4.1}\]
Hence, we set
\[g=c|H|^{2}-|A|^{2}\geq\eta|H|^{2}>\frac{\eta}{c}|A|^{2}>\varepsilon_{1}|A|^{2} +\varepsilon_{2},\]
where \(\varepsilon_{1}=\frac{\eta}{c}\) and \(\varepsilon_{2}>0\). From (3.3) and (2.3) in Theorem 3.1 and a suitable constant \(d\), we get
\[\partial_{t}g=\Delta g-2\left(c|\nabla H|^{2}-|\nabla A|^{2}\right)+2\left(cR_ {2}-R_{1}\right)+P_{\alpha}\]
\[\geq\Delta g-2\left(\Big{(}\frac{3}{n+2}-\eta\Big{)}^{-1}c-1\right)| \nabla A|^{2}\] \[\geq\Delta g-2\Big{(}\frac{n+2}{3}c-1\Big{)}|\nabla A|^{2}\] \[=\Delta g+2\kappa_{n}\frac{n+2}{3}|\nabla A|^{2},\]
for a suitable positive constant \(\eta\). The evolution equation for \(|\nabla A|^{2}\) is given by
\[\Big{(}\partial_{t}-\Delta\Big{)}|\nabla A|^{2}\leq-2|\nabla^{2}A|^{2}+c|A|^{2 }|\nabla A|^{2}+d|\nabla A|^{2}.\]
Let \(w,z\) satisfy the evolution equations
\[\partial_{t}w=\Delta w+W,\quad\partial_{t}z=\Delta z+Z\]
then, we find
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{w}{z} =\frac{2}{z}\left\langle\nabla\left(\frac{w}{z}\right),\nabla z \right\rangle+\frac{W}{z}-\frac{w}{z^{2}}Z\] \[=2\frac{\langle\nabla w,\nabla z\rangle}{z^{2}}-2\frac{w|\nabla z |^{2}}{z^{3}}+\frac{W}{z}-\frac{w}{z^{2}}Z.\]
Furthermore, for any function \(g\), we have by Kato's inequality
\[\langle\nabla g,\nabla|\nabla A|^{2}\rangle\leq 2|\nabla g||\nabla^{2}A|| \nabla A|\leq\frac{1}{g}|\nabla g|^{2}|\nabla A|^{2}+g|\nabla^{2}A|^{2}.\]
We then get
\[-\frac{2}{g}|\nabla^{2}A|^{2}+\frac{2}{g}\left\langle\nabla g,\nabla\left( \frac{|\nabla A|^{2}}{g}\right)\right\rangle\leq-\frac{2}{g}|\nabla^{2}A|^{2}- \frac{2}{g^{3}}|\nabla g|^{2}|\nabla A|^{2}+\frac{2}{g^{2}}\langle\nabla g, \nabla|\nabla A|^{2}\rangle\leq 0.\]
Then, if we let \(w=|\nabla A|^{2}\) and \(z=g\), with \(W\leq-2|\nabla^{2}A|^{2}+c|A|^{2}|\nabla A|^{2}+d|\nabla A|^{2}\) and \(Z\geq 2\kappa_{n}\frac{n+2}{3}|\nabla A|^{2}\), we get
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\nabla A|^{2}}{g} \leq\frac{2}{g}\left\langle\nabla g,\nabla\left(\frac{|\nabla A|^ {2}}{g}\right)\right\rangle+\frac{1}{g}(-2|\nabla^{2}A|^{2}+c|A|^{2}|\nabla A |^{2}\] \[+d|\nabla A|^{2})-2\kappa_{n}\frac{n+2}{3}\frac{|\nabla A|^{4}}{ g^{2}}\] \[\leq c|A|^{2}\frac{|\nabla A|^{2}}{g}+d\frac{|\nabla A|^{2}}{g}-2 \kappa_{n}\frac{n+2}{3}\frac{|\nabla A|^{4}}{g^{2}}.\]
We repeat the above computation with \(w=\frac{|\nabla A|^{2}}{g},z=g\),
\[W\leq c|A|^{2}\frac{|\nabla A|^{2}}{g}+d\frac{|\nabla A|^{2}}{g}-2\kappa_{n} \frac{n+2}{3}\frac{|\nabla A|^{4}}{g^{2}}\]
and \(Z\geq 0\), to get
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\nabla A|^{2}}{g^{2}} \leq\frac{2}{g}\left\langle\nabla g,\nabla\left(\frac{|\nabla A|^{2} }{g^{2}}\right)\right\rangle\] \[+\frac{1}{g}\left(c|A|^{2}\frac{|\nabla A|^{2}}{g}+d\frac{|\nabla A |^{2}}{g}-2\kappa_{n}\frac{n+2}{3}\frac{|\nabla A|^{4}}{g^{2}}\right).\]
The nonlinearity then is
\[\frac{|\nabla A|^{2}}{g^{2}}\left(c|A|^{2}+d-\frac{2\kappa_{n}(n+2)}{3}\frac{| \nabla A|^{2}}{g}\right).\]
Since
\[g>\varepsilon_{1}|A|^{2}+\varepsilon_{2},\]
there exists a constant \(N\), such that
\[Ng\geq c|A|^{2}+d.\]
Hence, by the maximum principle, there exists a constant (with \(\eta,\varepsilon_{1},\varepsilon_{2}\) chosen sufficiently small so that N is sufficiently large, this estimate holds at the initial time), such that
\[\frac{|\nabla A|^{2}}{g^{2}}\leq\frac{3N}{2\kappa_{n}(n+2)}.\]
Therefore, we see there exists a constant \(\mathcal{C}=\frac{3N}{2\kappa_{n}(n+2)}=\mathcal{C}(n,\mathcal{M}_{0})\), such that
\[\frac{|\nabla A|^{2}}{g^{2}}\leq\mathcal{C}\]
and from the definition of \(g\), we get the result of the lemma.
**Theorem 4.2**.: _Let \(\mathcal{M}_{t},t\in[0,T)\) be a solution of the mean curvature flow with surgery and normalised initial data. Then there exists constants \(\gamma_{3},\gamma_{4}\) depending only on the dimension, so that_
\[|\nabla^{2}A|^{2}\leq\gamma_{3}|A|^{6}+\gamma_{4}, \tag{4.2}\]
_for any \(t\in[0,T)\)._
Proof.: We have the following evolution equation
\[\Big{(}\partial_{t}-\Delta\Big{)}|\nabla^{2}A|^{2} \leq-2|\nabla^{3}A|^{2}+k_{1}|A|^{2}|\nabla^{2}A|^{2}+k_{2}|A|| \nabla A|^{2}|\nabla^{2}A|\] \[\leq-2|\nabla^{3}A|^{2}+\left(k_{1}+\frac{k_{2}}{2}\right)|A|^{2 }|\nabla^{2}A|^{2}+\frac{k_{2}}{2}|\nabla A|^{4}.\]
We now consider the evolution equation of the term \(\frac{|\nabla^{2}A|^{2}}{|H|^{5}}\). Firstly we see
\[\partial_{t}|H|^{\alpha} =\Delta|H|^{\alpha}+\alpha|H|^{\alpha-1}(\partial_{t}-\Delta)|H|- \alpha(\alpha-1)|H|^{\alpha-2}|\nabla H|^{2}\] \[\geq\Delta|H|^{\alpha}-\alpha(\alpha-1)|H|^{\alpha-2}|\nabla H|^{2},\]
since \(|\nabla|H||^{2}\geq|\nabla H|^{2}\) and \(|H|>0\). Therefore, we get
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\nabla^{2}A|^{2}}{|H|^{5}} \leq\frac{1}{|H|^{5}}\left(-2|\nabla^{3}A|^{2}+\left(k_{1}+\frac{k _{2}}{2}\right)|A|^{2}|\nabla^{2}A|^{2}+\frac{k_{2}}{2}|\nabla A|^{4}\right)\] \[+\frac{20|H|^{3}|\nabla^{2}A|^{2}|\nabla H|^{2}}{|H|^{10}}+\frac{ 2}{|H|^{10}}\left\langle\nabla|H|^{5},\nabla|\nabla^{2}A|^{2}\right\rangle\] \[-\frac{2|\nabla^{2}A|^{2}}{|H|^{15}}|\nabla|H|^{5}]^{2}.\]
We have the terms
\[\frac{20|H|^{3}|\nabla^{2}A|^{2}|\nabla H|^{2}}{|H|^{10}}-\frac{2 |\nabla^{2}A|^{2}}{|H|^{15}}|\nabla|H|^{5}|^{2} \leq\frac{20|\nabla^{2}A|^{2}|\nabla H|^{2}}{|H|^{7}}-\frac{50| \nabla^{2}A|^{2}|\nabla|H||^{2}}{|H|^{7}}\] \[\leq\frac{20|\nabla^{2}A|^{2}|\nabla H|^{2}}{|H|^{7}}.\]
and
\[\frac{2}{|H|^{10}}\left\langle\nabla|H|^{5},\nabla|\nabla^{2}A|^{ 2}\right\rangle =\frac{10\left\langle\nabla|H|,\nabla|\nabla^{2}A|^{2}\right\rangle }{|H|^{6}}\] \[\leq\frac{20\langle|\nabla H|,|\nabla^{2}A||\nabla^{3}A|\rangle}{ |H|^{6}}\] \[\leq\frac{1}{|H|^{5}}|\nabla^{3}A|^{2}+\frac{100|\nabla H|^{2}| \nabla^{2}A|^{2}}{|H|^{7}}.\]
Together with the gradient estimate, Theorem 4.1 this gives the following evolution equation
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\nabla^{2}A|^{2}}{|H|^{5}} \leq-\frac{|\nabla^{3}A|^{2}}{|H|^{5}}+k_{3}\frac{|\nabla^{2}A|^ {2}}{|H|^{3}}+\frac{120|\nabla H|^{2}|\nabla^{2}A|^{2}}{|H|^{7}}+\frac{k_{2}}{ 2}\frac{|\nabla A|^{4}}{|H|^{5}}\] \[\leq-\frac{|\nabla^{3}A|^{2}}{|H|^{5}}+k_{4}\frac{|\nabla^{2}A|^ {2}}{|H|^{3}}+C_{1}\frac{|\nabla^{2}A|^{2}}{|H|^{7}}+\frac{k_{5}|H|^{8}+C_{2}} {|H|^{5}}.\]
Similar computations give us
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\nabla A|^{2}}{|H|^{3}} \leq-\frac{|\nabla^{2}A|^{2}}{|H|^{3}}+\frac{k_{6}|H|^{8}+C_{3}} {|H|^{5}},\] \[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\nabla A|^{2}}{|H|^{7}} \leq-\frac{|\nabla^{2}A|^{2}}{|H|^{7}}+\frac{k_{7}|H|^{8}+C_{4}} {|H|^{9}}.\]
We now set
\[f=\frac{|\nabla^{2}A|^{2}}{|H|^{5}}+N\frac{|\nabla A|^{2}}{|H|^{3}}+M\frac{| \nabla A|^{2}}{|H|^{7}}-\kappa\sqrt{c|H|-|A|^{2}},\]
and so we have
\[\Big{(}\partial_{t}-\Delta\Big{)}f \leq k_{4}\frac{|\nabla^{2}A|^{2}}{|H|^{3}}+k_{5}|H|^{3}+C_{1}\frac{ |\nabla^{2}A|^{2}}{|H|^{7}}+\frac{C_{2}}{|H|^{5}}\] \[-N\frac{|\nabla^{2}A|^{2}}{|H|^{3}}+Nk_{6}|H|^{3}+\frac{NC_{3}}{| H|^{5}}\] \[-\frac{M|\nabla^{2}A|^{2}}{|H|^{7}}+\frac{k_{7}M}{|H|}+\frac{C_{ 4}M}{|H|}-\kappa\varepsilon_{0}|H|^{3}.\]
Therefore, we choose
\[N>k_{4},\quad\varepsilon_{0}\kappa>Nk_{6}+k_{5},\quad M>C_{1}.\]
Since \(H>0\), there exists a constant \(\alpha_{1}\), such that \(|H|\geq\alpha_{1}\) and we find
\[\Big{(}\partial_{t}-\Delta\Big{)}f\leq C_{5}\]
which implies
\[\max_{\mathcal{M}_{t}}f\leq\max_{\mathcal{M}_{t_{0}}}f+C_{5}\left(t-t_{0} \right).\]
Given the bound on the maximal time of existence, we have
\[f\leq C(n),\]
which implies
\[|\nabla^{2}A|^{2}\leq c(n)|H|^{6}+C(n)|H|^{5}.\]
Applying the quadratic pinching, we get (4.2).
Higher order estimates on \(|\nabla^{m}A|\) for all \(m\) follow by an analogous method. Furthermore, we derive estimates on the time derivative of the second fundamental form since we have the evolution equation
\[|\partial_{t}A|=|\Delta A+A*A*A|\leq C|\nabla^{2}A|^{2}+C|A|^{3} \leq c_{1}|A|^{3}+c_{2}.\]
## 5 Codimension Estimates
In this section, we want to show in regions of high curvature, the submanifold becomes approximately codimension one in a quantifiable sense. Our goal is to separate the second fundamental form in the principal direction and the second fundamental form in the other directions and compute their evolution equations separately. Later, we find estimates for the reaction and gradient terms as well as for the lower order terms, which appear due to the Riemannian ambient space. Then, we start by computing the evolution equation of the quantity \(\frac{|A^{-}|^{2}}{f}\), which since in the limit the background space is Euclidean, the result will follow from the maximum principle. The theorem we will prove is the following.
**Theorem 5.1**.: _Let \(F:\mathcal{M}^{n}\times[0,T)\rightarrow\mathcal{N}^{n+m}\) be a smooth solution to mean curvature flow so that \(F_{0}(p)=F(p,0)\) is compact and quadratically pinched. Then \(\forall\varepsilon>0,\exists H_{0}>0\), such that if \(f\geq H_{0}\), then_
\[\left|A^{-}\right|^{2}\leq\varepsilon f+C_{\varepsilon}\]
\(\forall t\in[0,T)\) _where \(C_{\varepsilon}=C_{\varepsilon}(n,m)\)._
### The Evolution Equation of \(|A^{-}|^{2}\)
We start by computing the evolution equation of \(|A^{-}|^{2}\). We define the tensor \(A^{-}\) by
\[A^{-}(X,Y)=A(X,Y)-\frac{\langle A(X,Y),H\rangle}{|H|^{2}}H,\]
for vector fields \(X,Y\) tangent to \(\mathcal{M}_{t}\). The tensor \(A^{-}\) is well defined, since \(|H|>0\). Therefore, we will need to compute the evolution equations of \(|A|^{2}\) and \(\frac{|\langle A,H\rangle|^{2}}{|H|^{2}}.\) Using (2.5) and the quotient rule, we have
\[\Big{(}\partial_{t} -\Delta\Big{)}\frac{\sum_{i,j}|\langle A_{ij},H\rangle|^{2}}{|H|^ {2}}=|H|^{-2}\Big{(}\partial_{t}-\Delta\Big{)}\sum_{i,j}|\langle A_{ij},H \rangle|^{2}\] \[+2|H|^{-2}\sum_{k}\Big{\langle}\nabla_{k}|H|^{2},\nabla_{k}\frac{ \sum_{i,j}|\langle A_{ij},H\rangle|^{2}}{|H|^{2}}\Big{\rangle}\] \[-|H|^{-4}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}\Big{(}-2|\nabla^ {\perp}H|^{2}+2\sum_{i,j}|\langle A_{ij},H\rangle|^{2}+2\sum_{k,\alpha,\beta} \bar{R}_{k\alpha k\beta}H^{\alpha}H^{\beta}\Big{)}.\]
Before computing the evolution equation of \(\sum_{i,j}|\langle A_{ij},H\rangle|^{2}\), we simplify the other terms. In particular, using \(\sum_{i,j}|\langle A_{ij},H\rangle|^{2}=|H|^{2}|h|^{2}\) and
\[|\nabla^{\perp}H|^{2}=|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}+|\nabla|H||^{2},\]
we write
\[2|H|^{-4}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}|\nabla^{\perp}H|^{2}=2|h|^{2 }|\nabla^{\perp}\nu_{1}|^{2}+2|H|^{-2}|h|^{2}|\nabla|H||^{2},\]
\[-2|H|^{-4}\sum_{i,j}|\langle A_{ij},H\rangle|^{4}=-2|h|^{4},\]
\[2|H|^{-4}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}\sum_{k,\alpha,\beta}\bar{R}_ {k\alpha k\beta}H^{\alpha}H^{\beta}=2|h|^{2}|H|^{-2}\sum_{k,\alpha,\beta}\bar{ R}_{k\alpha k\beta}H^{\alpha}H^{\beta}.\]
As for the remaining gradient terms, we have
\[\nabla_{k}|H|^{2}=2\langle\nabla_{k}^{\perp}H,H\rangle\]
and
\[\nabla_{k}(|H|^{-2}\sum_{i,j}|\langle A_{ij},H\rangle|^{2})=\nabla_{k}|h|^{2}=2 \sum_{i,j}h_{ij}\nabla_{k}h_{ij}.\]
Therefore, since \(H=|H|\nu_{1}\) and \(\langle\nabla_{k}^{\perp}\nu_{1},\nu_{1}\rangle=0\), we have
\[2|H|^{-2}\sum_{k}\Big{\langle}\nabla_{k}|H|^{2},\nabla_{k}\frac{ \sum_{i,j}|\langle A_{ij},H\rangle|^{2}}{|H|^{2}}\Big{\rangle} =8|H|^{-2}\sum_{i,j,k}\langle\nabla_{k}^{\perp}H,H\rangle h_{ij} \nabla_{k}h_{ij}\] \[=8|H|^{-1}\sum_{i,j,k}\nabla_{k}|H|h_{ij}\nabla_{k}h_{ij}.\]
To summarise, we have shown so far that
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{\sum_{i,j}|\langle A_{ij},H\rangle|^{2}}{|H|^{2}} =|H|^{-2}\Big{(}\partial_{t}-\Delta\Big{)}\sum_{i,j}|\langle A_{ ij},H\rangle|^{2}-2|h|^{4}+2|h|^{2}|\nabla_{k}^{\perp}\nu_{1}|^{2}\] \[+2|H|^{-2}|h|^{2}|\nabla|H|\|^{2}+8|H|^{-1}\sum_{i,j,k}\nabla_{k} |H|h_{ij}\nabla_{k}h_{ij}\] \[-2|h|^{2}|H|^{-2}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^ {\alpha}H^{\beta}.\]
For the evolution equation of \(\langle A_{ij},H\rangle\), we have the following lemma.
**Lemma 5.2**.: _The evolution equation of \(|\langle A_{ij},H\rangle|^{2}\) is_
\[|H|^{-2}\Big{(}\partial_{t}-\Delta\Big{)}|\langle A_{ij},H\rangle| ^{2} =4|\mathring{h}_{ij}A_{ij}^{-}|^{2}+2|R_{ij}^{\perp}(\nu_{1})|^{2} +4|h|^{4}-4|H|^{-1}\mathring{h}_{ij}\nabla_{k}|H|\langle\nabla_{k}^{\perp}A_{ ij}^{-},\nu_{1}\rangle\] \[-4\mathring{h}_{ij}\langle\nabla_{k}^{\perp}A_{ij}^{-},\nabla_{k }^{\perp}\nu_{1}\rangle-4|h|^{2}|\nabla_{k}^{\perp}\nu_{1}|^{2}-2|H|^{-2}|h| ^{2}|\nabla|H||^{2}\] \[-8|H|^{-1}\nabla_{k}|H|h_{ij}\nabla_{k}h_{ij}-2|\nabla h|^{2}+2B^ {\prime}\] \[-2|\bar{R}_{ij}(\nu_{1})|^{2}-4\langle\bar{R}_{ij}(\nu_{1}), \mathring{h}_{ip}A_{jp}^{-}-\mathring{h}_{jp}A_{ip}^{-}\rangle,\]
_where_
\[B^{\prime} :=2|H|^{-2}\bar{R}_{ipj\alpha}\langle A_{pq},H\rangle\langle A_{ ij},H\rangle-2|H|^{-2}\bar{R}_{jkp}\langle A_{pi},H\rangle\langle A_{ij},H\rangle\] \[+|H|^{-2}A_{ij}^{\alpha}\bar{R}_{k\alpha k\beta}H^{\beta}\langle A _{ij},H\rangle+|H|^{-2}H^{\alpha}\bar{R}_{k\alpha k\beta}A_{ij}^{\beta}\langle A _{ij},H\rangle\] \[-2|H|^{-2}A_{jp}^{\alpha}\bar{R}_{ip\alpha\beta}H^{\beta}\langle A _{ij},H\rangle-2|H|^{-2}A_{ip}^{\alpha}\bar{R}_{jp\alpha\beta}H^{\beta}\langle A _{ij},H\rangle\] \[+|H|^{-2}\bar{\nabla}_{k}\bar{R}_{kij\beta}H^{\beta}\langle A_{ij},H\rangle-|H|^{-2}\bar{\nabla}_{i}\bar{R}_{jk\delta}H^{\beta}\langle A_{ij},H\rangle.\]
Proof.: Whenever \(h\) is traced with \(A^{-}\) or its derivative, we may replace \(h\) with \(\hat{h}\), because \(A^{-}\) is traceless. Also, for simplicity, we avoid the summation notation. To begin with, using (2.1), we substitute formulas
\[\Big{\langle}\Big{(}\partial_{t}-\Delta\Big{)}^{\perp}A_{ij},H\Big{\rangle} =\langle A_{ij},A_{pq}\rangle\langle A_{pq},H\rangle+\langle A_{ iq},A_{pq}\rangle\langle A_{pj},H\rangle+\langle A_{jq},A_{pq}\rangle\langle A_{ pi},H\rangle\] \[-2\langle A_{ip},A_{jq}\rangle\langle A_{pq},H\rangle+2\bar{R}_ {ipq}\langle A_{pq},H\rangle-\bar{R}_{kjkp}\langle A_{pi},H\rangle-\bar{R}_{ kikp}\langle A_{pj},H\rangle\] \[+A_{ij}^{\alpha}\bar{R}_{k\alpha k\beta}\langle\nu_{\beta},H \rangle-2A_{jp}^{\alpha}\bar{R}_{ip\alpha\beta}\langle\nu_{\beta},H\rangle-2A _{ip}^{\alpha}\bar{R}_{jp\alpha\beta}\langle\nu_{\beta},H\rangle\] \[+\bar{\nabla}_{k}\bar{R}_{kij\beta}\langle\nu_{\beta},H\rangle- \bar{\nabla}_{i}\bar{R}_{jkk\beta}\langle\nu_{\beta},H\rangle,\] \[\Big{\langle}A_{ij},\Big{(}\partial_{t}-\Delta\Big{)}^{\perp}H \Big{\rangle} =\langle A_{pq},H\rangle\langle A_{pq},A_{ij}\rangle+H^{\alpha} \bar{R}_{k\alpha k\beta}\langle\nu_{\beta},A_{ij}\rangle.\]
Tracing each of the equations with a copy of \(\langle A_{ij},H\rangle\), we get
\[\Big{\langle}\Big{(}\partial_{t}-\Delta\Big{)}^{\perp}A_{ij},H \Big{\rangle}\langle A_{ij},H\rangle =\langle A_{ij},A_{pq}\rangle\langle A_{pq},H\rangle\langle A_{ ij},H\rangle+2\langle A_{iq},A_{pq}\rangle\langle A_{pj},H\rangle\langle A_{ij},H\rangle\] \[-2\langle A_{ip},A_{jq}\rangle\langle A_{pq},H\rangle\langle A_{ ij},H\rangle+2\bar{R}_{ipjq}\langle A_{pq},H\rangle\langle A_{ij},H\rangle\] \[-2\bar{R}_{kjkp}\langle A_{pi},H\rangle\langle A_{ij},H\rangle+A _{ij}^{\alpha}\bar{R}_{k\alpha k\beta}\langle\nu_{\beta},H\rangle\langle A_{ ij},H\rangle\] \[-2A_{jp}^{\alpha}\bar{R}_{ip\alpha\beta}\langle\nu_{\beta},H \rangle\langle A_{ij},H\rangle-2A_{ip}^{\alpha}\bar{R}_{jp\alpha\beta}\langle \nu_{\beta},H\rangle\langle A_{ij},H\rangle\] \[+\bar{\nabla}_{k}\bar{R}_{kij\beta}\langle\nu_{\beta},H\rangle \langle A_{ij},H\rangle-\bar{\nabla}_{i}\bar{R}_{jkk\beta}\langle\nu_{\beta},H \rangle\langle A_{ij},H\rangle,\] \[\Big{\langle}A_{ij},\Big{(}\partial_{t}-\Delta\Big{)}^{\perp}H \Big{\rangle}\langle A_{ij},H\rangle =\langle A_{pq},H\rangle\langle A_{pq},A_{ij}\rangle\langle A_{ ij},H\rangle+H^{\alpha}\bar{R}_{k\alpha k\beta}\langle\nu_{\beta},A_{ij}\rangle \langle A_{ij},H\rangle.\]
Putting the above equations together and keeping in mind that \(\langle\nu_{\beta},H\rangle=H^{\beta}\) we have,
\[\Big{(}\Big{(}\partial_{t}-\Delta\Big{)}\langle A_{ij},H\rangle \Big{)}\langle A_{ij},H\rangle =2\langle A_{ij},A_{pq}\rangle\langle A_{pq},H\rangle\langle A_{ ij},H\rangle+2\langle A_{iq},A_{pq}\rangle\langle A_{pj},H\rangle\langle A_{ij},H\rangle\] \[-2\langle A_{ip},A_{jq}\rangle\langle A_{pq},H\rangle\langle A_{ ij},H\rangle+2\bar{R}_{ipjq}\langle A_{pq},H\rangle\langle A_{ij},H\rangle\] \[-2\bar{R}_{kjkp}\langle A_{pi},H\rangle\langle A_{ij},H\rangle+A _{ij}^{\alpha}\bar{R}_{k\alpha k\beta}H^{\beta}\langle A_{ij},H\rangle\] \[-2A_{jp}^{\alpha}\bar{R}_{ip\alpha\beta}H^{\beta}\langle A_{ij},H \rangle-2A_{ip}^{\alpha}\bar{R}_{jp\alpha\beta}H^{\beta}\langle A_{ij},H\rangle\] \[+\bar{\nabla}_{k}\bar{R}_{kij\beta}H^{\beta}\langle A_{ij},H \rangle-\bar{\nabla}_{i}\bar{R}_{jk\beta}H^{\beta}\langle A_{ij},H\rangle\] \[+H^{\alpha}\bar{R}_{k\alpha k\beta}A_{ij}^{\beta}\langle A_{ij},H \rangle-2\langle\nabla_{k}^{\perp}A_{ij},\nabla_{k}^{\perp}H\rangle\langle A_{ ij},H\rangle.\]
Define
\[B :=2\bar{R}_{ipq}\langle A_{pq},H\rangle\langle A_{ij},H\rangle-2 \bar{R}_{kjkp}\langle A_{pi},H\rangle\langle A_{ij},H\rangle\] \[+A_{ij}^{\alpha}\bar{R}_{k\alpha k\beta}H^{\beta}\langle A_{ij},H \rangle+H^{\alpha}\bar{R}_{k\alpha k\beta}A_{ij}^{\beta}\langle A_{ij},H\rangle\] \[-2A_{jp}^{\alpha}\bar{R}_{ip\alpha\beta}H^{\beta}\langle A_{ij},H \rangle-2A_{ip}^{\alpha}\bar{R}_{jp\alpha\beta}H^{\beta}\langle A_{ij},H\rangle\] \[+\bar{\nabla}_{k}\bar{R}_{kij\beta}H^{\beta}\langle A_{ij},H \rangle-\bar{\nabla}_{i}\bar{R}_{jkk\beta}H^{\beta}\langle A_{ij},H\rangle.\]
We use the Uhlenbeck's trick to suppose that we are in an orthogonal frame. That is, suppose \(g^{ij}=\delta_{ij}\) remains orthogonal along the flow. More precisely, for any \(e_{i},e_{j}\) orthonormal, we have
\[\partial_{t}g^{ij}=\partial_{t}\langle e_{i},e_{j}\rangle=0.\]
Therefore, excluding the time derivative of the inverse of the metric, which is the term
\[2\big{(}\partial_{t}g^{ij}\big{)}g^{pq}\langle A_{ip},H\rangle\langle A_{jq},H\rangle,\]
we have
\[\Big{(}\partial_{t}-\Delta\Big{)}|\langle A_{ij},H\rangle|^{2} =2\Big{(}\Big{(}\partial_{t}-\Delta\Big{)}\langle A_{ij},H\rangle \Big{)}\langle A_{ij},H\rangle-2|\nabla\langle A_{ij},H\rangle|^{2}\] \[=4\langle A_{ij},A_{pq}\rangle\langle A_{pq},H\rangle\langle A_{ ij},H\rangle+4\langle A_{iq},A_{pq}\rangle\langle A_{pj},H\rangle\langle A_{ij},H\rangle\] \[-4\langle A_{ip},A_{jq}\rangle\langle A_{pq},H\rangle\langle A_{ ij},H\rangle-4\langle\nabla^{\perp}_{k}A_{ij},\nabla^{\perp}_{k}H\rangle\langle A_{ ij},H\rangle\] \[-2|\nabla\langle A_{ij},H\rangle|^{2}+2B. \tag{5.1}\]
To finish the proof, we multiply \(|H|^{-2}\) and then rewrite each of the remaining terms using \(A=A^{-}+h\nu_{1}\). For the first term on the first line of (5.1), we have
\[4|H|^{-2}\langle A_{ij},A_{pq}\rangle\langle A_{pq},H\rangle \langle A_{ij},H\rangle =4|H|^{-2}|H|^{2}h_{ij}h_{pq}\langle A_{ij},A_{pq}\rangle\] \[=4|h|^{4}+4h_{ij}h_{pq}\langle A^{-}_{ij},A^{-}_{pq}\rangle\] \[=4|h|^{4}+4\hat{h}_{ij}\hat{h}_{pq}\langle A^{-}_{ij},A^{-}_{pq}\rangle\] \[=4|h|^{4}+4\hat{|}\hat{h}_{ij}A^{-}_{ij}|^{2}. \tag{5.2}\]
Also, B can be rewritten as
\[B^{\prime} :=2|H|^{-2}\bar{R}_{ipjq}\langle A_{pq},H\rangle\langle A_{ij},H \rangle-2|H|^{-2}\bar{R}_{jkkp}\langle A_{pi},H\rangle\langle A_{ij},H\rangle\] \[+|H|^{-2}A^{\alpha}_{ij}\bar{R}_{k\alpha k\beta}H^{\beta}\langle A _{ij},H\rangle+|H|^{-2}H^{\alpha}\bar{R}_{k\alpha k\beta}A^{\beta}_{ij} \langle A_{ij},H\rangle\] \[-2|H|^{-2}A^{\alpha}_{jp}\bar{R}_{ip\alpha\beta}H^{\beta}\langle A _{ij},H\rangle-2|H|^{-2}A^{\alpha}_{ip}\bar{R}_{jpq\alpha\beta}H^{\beta} \langle A_{ij},H\rangle\] \[+|H|^{-2}\bar{\nabla}_{k}\bar{R}_{kij\beta}H^{\beta}\langle A_{ ij},H\rangle-|H|^{-2}\bar{\nabla}_{i}\bar{R}_{jkk\beta}H^{\beta}\langle A_{ij},H\rangle.\]
In higher codimension, the fundamental Gauss, Codazzi and Ricci equations on Riemannian manifold in local frame take the form
\[R_{ijpq}=\bar{R}_{ijpq}+A^{\alpha}_{ip}A^{\alpha}_{jq}-A^{\alpha}_{jp}A^{ \alpha}_{iq},\]
\[(\nabla^{\perp}_{i}A)^{\alpha}_{jp}-(\nabla^{\perp}_{j}A)^{\alpha}_{ip}=-\bar {R}_{ijpa},\]
and
\[R^{\perp}_{ij\alpha\beta}=\bar{R}_{ij\alpha\beta}+A^{\alpha}_{ip}A^{\beta}_{jp }-A^{\beta}_{ip}A^{\alpha}_{jp}.\]
Define a vector-valued version of the normal curvature by
\[R^{\perp}_{ij}(\nu_{\alpha})=R^{\perp}_{ij\alpha\beta}\nu_{\beta}=\bar{R}_{ ij\alpha\beta}\nu_{\beta}+A^{\alpha}_{ip}A^{\beta}_{jp}-A^{\beta}_{ip}A^{ \alpha}_{jp}\nu_{\beta}. \tag{5.3}\]
In particular, we note that \(R^{\perp}_{ij}(\nu_{1})=\bar{R}_{ij}(\nu_{1})+h_{ip}A^{\beta}_{jp}-h_{jp}A^{ \beta}_{ip}\), which in view of
\[A_{ij}=A^{-}_{ij}+h_{ij}\nu_{1}=A^{-}_{ij}+\hat{h}_{ij}\nu_{1}+\frac{1}{n}|H| g_{ij}\nu_{1},\]
gives
\[R^{\perp}_{ij}(\nu_{1})=\bar{R}_{ij}(\nu_{1})+\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^ {-}_{ip}. \tag{5.4}\]
For the difference of second and third term of (5.1), we notice the resemblance to \(|R^{\perp}_{ij}(\nu_{1})|^{2}\) in (5.4). We compute
\[\begin{split}|\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^{-}_{ip}|^{2}& =|h_{ip}A_{jp}-h_{jp}A_{ip}|^{2}=\langle h_{ip}A_{jp}-h_{jp}A_{ip},h_ {iq}A_{jq}-h_{jq}A_{iq}\rangle\\ &=2h_{ip}h_{iq}\langle A_{jp},A_{jq}\rangle-2h_{ip}h_{jq}\langle A _{jp},A_{iq}\rangle\\ &=2|H|^{-2}\big{(}\langle A_{jp},A_{jq}\rangle\langle A_{ip},H \rangle\langle A_{iq},H\rangle-\langle A_{jp},A_{iq}\rangle\langle A_{ip},H \rangle\langle A_{jq},H\rangle\big{)}.\end{split} \tag{5.5}\]
Therefore,
\[\begin{split}|R^{\perp}_{ij}(\nu_{1})|^{2}&=|\bar{R }_{ij}(\nu_{1})|^{2}+2|H|^{-2}\big{(}\langle A_{jp},A_{jq}\rangle\langle A_{ip },H\rangle\langle A_{iq},H\rangle-\langle A_{jp},A_{iq}\rangle\langle A_{ip },H\rangle\langle A_{jq},H\rangle\big{)}\\ &+2\langle\bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp }A^{-}_{ip}\rangle.\end{split}\]
After reindexing (e.g. \(j\to p\to q\to i\to j\) on the second term and \(j\to i\to q\to j,p\to p\) on the third term), this gives
\[\begin{split} 2|R^{\perp}_{ij}(\nu_{1})|^{2}&=2|\bar{R}_{ ij}(\nu_{1})|^{2}+4|H|^{-2}\big{(}\langle A_{ip},A_{pq}\rangle\langle A_{jq},H \rangle\langle A_{ij},H\rangle-\langle A_{ip},A_{jq}\rangle\langle A_{pq},H \rangle\langle A_{ij},H\rangle\big{)}\\ &+4\langle\bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp }A^{-}_{ip}\rangle.\end{split}\]
Thus, we have shown the reaction terms of our lemma statement are correct. For the gradient terms, it follows from the identities
\[\begin{split}\langle\nabla^{\perp}_{k}A_{ij},\nu_{1}\rangle= \langle\nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle+\nabla_{k}h_{ij},\\ \langle\nabla^{\perp}_{k}A_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle =\langle\nabla^{\perp}_{k}A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle+h_{ij}| \nabla^{\perp}_{k}\nu_{1}|^{2},\end{split} \tag{5.6}\]
\[\begin{split}\nabla^{\perp}_{k}H=\nabla_{k}|H|\nu_{1}+|H|\nabla^ {\perp}_{k}\nu_{1}.\end{split}\]
Therefore, we have
\[\begin{split}-4|H|^{-2}\langle\nabla^{\perp}_{k}A_{ij},\nabla^{ \perp}_{k}H\rangle\langle A_{ij},H\rangle&=-4|H|^{-1}h_{ij} \nabla_{k}|H|\langle\nabla^{\perp}_{k}A_{ij},\nu_{1}\rangle-4|H|^{-1}h_{ij} \langle\nabla^{\perp}_{k}A_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle\\ &=-4|H|^{-1}\hat{h}_{ij}\nabla_{k}|H|\langle\nabla^{\perp}_{k}A^{ -}_{ij},\nu_{1}\rangle-4|H|^{-1}h_{ij}\nabla_{k}|H|\nabla_{k}h_{ij}\\ &-4\hat{h}_{ij}\langle\nabla^{\perp}_{k}A^{-}_{ij},\nabla^{\perp }_{k}\nu_{1}\rangle-4|h|^{2}|\nabla^{\perp}_{k}\nu_{1}|^{2},\end{split} \tag{5.7}\]
\[\begin{split}-2|H|^{-2}|\nabla\langle A_{ij},H\rangle|^{2}& =-2|H|^{-2}|\nabla(|H|h_{ij})|^{2}\\ &=-2|H|^{-2}|h|^{2}|\nabla|H||^{2}-2|\nabla h|^{2}-4|H|^{-1}h_{ij} \nabla_{k}|H|\nabla_{k}h_{ij},\end{split}\]
since \(A^{-}_{ii}=0\), meaning that it's trace free. Combining (5.2)-(5.7), we get the desired result.
Substituting the result of the above lemma into our equation for the evolution of \(|H|^{-2}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}\) and combining like terms, we have
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{\sum_{i,j}|\langle A_{ij},H \rangle|^{2}}{|H|^{2}} =4\sum_{i,j}|\mathring{h}_{ij}A_{ij}^{-}|^{2}+2\sum_{i,j}|R_{ij}^ {\perp}(\nu_{1})|^{2}+2|h|^{4}\] \[-4|H|^{-1}\sum_{i,j,k}\mathring{h}_{ij}\nabla_{k}|H|\langle\nabla_ {k}^{\perp}A_{ij}^{-},\nu_{1}\rangle-4\sum_{i,j,k}\mathring{h}_{ij}\langle \nabla_{k}^{\perp}A_{ij}^{-},\nabla_{k}^{\perp}\nu_{1}\rangle\] \[-2|h|^{2}\sum_{k}|\nabla_{k}^{\perp}\nu_{1}|^{2}-2|\nabla h|^{2}+ 2B^{\prime}-2|h|^{2}|H|^{-2}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{ \alpha}H^{\beta}\] \[-2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}-4\sum_{i,j,p}\langle\bar{ R}_{ij}(\nu_{1}),\mathring{h}_{ip}A_{jp}^{-}-\mathring{h}_{jp}A_{ip}^{-}\rangle.\]
We negate the expression above, add in the evolution equation of \(|A|^{2}\) and use (3.4) to get
\[\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2} =-2|\nabla^{\perp}A|^{2}+2\sum_{i,j,p,q}|\langle A_{ij},A_{pq} \rangle|^{2}+2\sum_{i,j}|R_{ij}^{\perp}|^{2}+\Big{(}P_{\alpha}-2B^{\prime} \Big{)}\] \[-4\sum_{i,j}|\mathring{h}_{ij}A_{ij}^{-}|^{2}-2\sum_{i,j}|R_{ij}^ {\perp}(\nu_{1})|^{2}-2|h|^{4}+4|H|^{-1}\sum_{i,j,k}\mathring{h}_{ij}\nabla_{ k}|H|\langle\nabla_{k}^{\perp}A_{ij}^{-},\nu_{1}\rangle\] \[+4\sum_{i,j,k}\mathring{h}_{ij}\langle\nabla_{k}^{\perp}A_{ij}^{ -},\nabla_{k}^{\perp}\nu_{1}\rangle+2|\nabla h|^{2}+2|h|^{2}\sum_{k}|\nabla_{ k}^{\perp}\nu_{1}|^{2}\] \[+2|h|^{2}|H|^{-2}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{ \alpha}H^{\beta}+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}\] \[+4\sum_{i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\mathring{h}_{ip}A_{jp }^{-}-\mathring{h}_{jp}A_{ip}^{-}\rangle.\]
Taking the term \(2|H|^{-2}H^{\alpha}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}A_{ij}^{\beta} \langle A_{ij},H\rangle\) out of \(2B^{\prime}\) and the last term of the evolution equation of \(\frac{\sum_{i,j}|\langle A_{ij},H\rangle|^{2}}{|H|^{2}}\), we have
\[2|H|^{-2}\sum_{i,j,k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{ \alpha}A_{ij}^{\beta}\langle A_{ij},H\rangle-2|h|^{2}|H|^{-2}\sum_{k,\alpha, \beta}\bar{R}_{k\alpha k\beta}H^{\alpha}H^{\beta}\] \[=2|H|^{-2}\sum_{i,j,k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{ \alpha}\big{(}A_{ij}^{-,\beta}+\frac{|\langle A_{ij},H\rangle|}{|H|^{2}}H^{ \beta}\big{)}\langle A_{ij},H\rangle-2|h|^{2}|H|^{-2}\sum_{k,\alpha,\beta}\bar {R}_{k\alpha k\beta}H^{\alpha}H^{\beta}\] \[=2|H|^{-2}\sum_{i,j,k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta}H^{ \alpha}A_{ij}^{\beta}\langle A_{ij},H\rangle.\]
The reaction terms satisfy
\[2\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-4\sum_{i,j}|\mathring{h}_{ij }A_{ij}^{-}|^{2}-2|h|^{4}=2\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-} \rangle|^{2},\]
\[2\sum_{i,j}|R_{ij}^{\perp}|^{2}-2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}=2| \mathring{R}^{\perp}|^{2}+2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}. \tag{5.8}\]
where
\[|\hat{R}^{\perp}|^{2}=\sum_{i,j,\alpha,\beta\geq 2}\Big{(}\sum_{p}|A^{\alpha}_{ip}A ^{\beta}_{jp}-A^{\alpha}_{jp}A^{\beta}_{ip}|^{2}+|\bar{R}_{ij\alpha\beta}|^{2}+2 \sum_{p}\langle\bar{R}_{ij\alpha\beta},A^{\alpha}_{ip}A^{\beta}_{jp}-A^{\alpha} _{jp}A^{\beta}_{ip}\rangle\Big{)} \tag{5.9}\]
As for the gradient terms, taking the form of \(\nabla^{\perp}_{i}A_{jp}=\nabla^{\perp}_{i}A^{-}_{jp}+\nabla_{i}h_{jp}\nu_{1}+ h_{jp}\nabla^{\perp}_{i}\nu_{1}\), we see
\[|\nabla^{\perp}A|^{2}=|\nabla^{\perp}A^{-}|^{2}+|\nabla h|^{2}+|h|^{2}|\nabla^ {\perp}\nu_{1}|^{2}+2\sum_{i,j,k}\hat{h}_{ij}\langle\nabla^{\perp}A^{-}_{ij}, \nabla^{\perp}_{k}\nu_{1}\rangle+2\sum_{i,j,k}\nabla_{k}\hat{h}_{ij}\langle \nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle.\]
Thus,
\[-2|\nabla^{\perp}A|^{2}+2|\nabla h|^{2}+2|h|^{2}|\nabla^{\perp} \nu_{1}|^{2}+4\sum_{i,j,k}\hat{h}_{ij}\langle\nabla^{\perp}_{k}A^{-}_{ij}, \nabla^{\perp}_{k}\nu_{1}\rangle =-2|\nabla^{\perp}A^{-}|^{2}\] \[-4\sum_{i,j,k}\nabla_{k}\hat{h}_{ij}\langle\nabla^{\perp}_{k}A^{- }_{ij},\nu_{1}\rangle.\]
Putting this all together gives
\[\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2} =2\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq}\rangle|^{2}+2|\hat {R}^{\perp}|^{2}+2\sum_{i,j}|R^{\perp}_{ij}(\nu_{1})|^{2}\] \[-2|\nabla^{\perp}A^{-}|^{2}+4|H|^{-1}\sum_{i,j,k}\hat{h}_{ij} \nabla_{k}|H|\langle\nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle-4\sum_{i,j,k} \nabla_{k}\hat{h}_{ij}\langle\nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle\] \[+2|H|^{-2}\sum_{i,j,k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta} H^{\alpha}A^{\beta}_{ij}\langle A_{ij},H\rangle+\Big{(}P_{\alpha}-2B^{\prime \prime}\Big{)}\] \[+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p}\langle\bar{ R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^{-}_{ip}\rangle,\]
where
\[B^{\prime\prime} :=2|H|^{-2}\sum_{i,j,p,q}\bar{R}_{ipjq}\langle A_{pq},H\rangle \langle A_{ij},H\rangle-2|H|^{-2}\sum_{i,j,p,k}\bar{R}_{kjkp}\langle A_{pi},H \rangle\langle A_{ij},H\rangle\] \[+|H|^{-2}\sum_{i,j,k,\alpha,\beta}A^{\alpha}_{ij}\bar{R}_{k \alpha k\beta}H^{\beta}\langle A_{ij},H\rangle-4|H|^{-2}\sum_{i,j,p,\alpha, \beta}A^{\alpha}_{ip}\bar{R}_{jp\alpha\beta}H^{\beta}\langle A_{ij},H\rangle\] \[+|H|^{-2}\sum_{i,j,k,\beta}\bar{\nabla}_{k}\bar{R}_{kij\beta}H^{ \beta}\langle A_{ij},H\rangle-|H|^{-2}\sum_{i,j,k,\beta}\bar{\nabla}_{i}\bar{ R}_{jkk\beta}H^{\beta}\langle A_{ij},H\rangle.\]
and we let
\[P_{\alpha} =4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{\alpha}A^{\alpha}_{pq} A^{\alpha}_{ij}\big{)}-4\sum_{j,k,p}\bar{R}_{kjkp}\big{(}\sum_{i,\alpha}A^{ \alpha}_{pi}A^{\alpha}_{ij}\big{)}+2\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k \beta}\big{(}\sum_{i,j}A^{\alpha}_{ij}A^{\beta}_{ij}\big{)}\] \[-8\sum_{j,p,\alpha,\beta}\bar{R}_{jp\alpha\beta}\big{(}\sum_{i}A^ {\alpha}_{ip}A^{\beta}_{ij}\big{)}+2\sum_{i,j,k,\beta}\bar{\nabla}_{k}\bar{R}_ {kij\beta}A^{\beta}_{ij}-2\sum_{i,j,k,\beta}\bar{\nabla}_{i}\bar{R}_{jkk\beta} A^{\beta}_{ij},\]
to be the lower order terms appearing in (2.4). Because \(\langle A^{-}_{ij},\nu_{1}\rangle=0\), differentiating with respect to \(\nabla_{k}\) gives
\[\langle\nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle=-\langle A^{-}_{ij},\nabla^ {\perp}_{k}\nu_{1}\rangle=-\langle\mathring{A}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle.\]
Since \(\mathring{\hat{h}}_{ij}=\langle\mathring{\hat{A}}_{ij},\nu_{1}\rangle\), from the equation above, we get
\[\nabla_{k}\mathring{h}_{ij}=\langle\nabla^{\perp}_{k}\mathring{A}_{ij},\nu_{1 }\rangle+\langle\mathring{\hat{A}}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle= \langle\nabla^{\perp}_{k}\mathring{A}_{ij},\nu_{1}\rangle-\langle\nabla^{ \perp}_{k}A^{-}_{ij},\nu_{1}\rangle.\]
To simplify our final expression, let us define the tensor
\[Q_{ijk}:=\langle\nabla^{\perp}_{k}\mathring{A}_{ij},\nu_{1}\rangle-\langle \nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle-|H|^{-1}\mathring{h}_{ij}\nabla_ {k}|H|.\]
Here we have the lower order terms in the evolution equation for the evolution of \(|A^{-}|^{2}\). We match them to the evolution of the pinching quantity \(f>0\). For the term \(P_{\alpha}-2B^{\prime\prime}\), we have
\[P_{\alpha}-2B^{\prime\prime} =4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{\alpha\geq 2}A^{ \alpha}_{pq}A^{\alpha}_{ij}\big{)}-4\sum_{j,k,p}\bar{R}_{kjkp}\big{(}\sum_{i, \alpha\geq 2}A^{\alpha}_{pi}A^{\alpha}_{ij}\big{)}\] \[+2\sum_{k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta}\big{(}\sum_{ i,j}A^{\alpha}_{ij}A^{\beta}_{ij}\big{)}-8\sum_{j,p,\alpha,\beta\geq 2}\bar{R}_{ jp\alpha\beta}\big{(}\sum_{i}A^{\alpha}_{ip}A^{\beta}_{ij}\big{)}\] \[+2\sum_{i,j,k,\beta\geq 2}\bar{\nabla}_{k}\bar{R}_{kij\beta}A^{ \beta}_{ij}-2\sum_{i,j,k,\beta\geq 2}\bar{\nabla}_{i}\bar{R}_{jkk\beta}A^{ \beta}_{ij}.\]
In conclusion, according to Theorem 5.2 and (3.4), we get the following proposition.
**Proposition 5.3**.: _The evolution equation of \(|A^{-}|^{2}\) is_
\[\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2} =2\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq}\rangle|^{2}+2| \mathring{R}^{\perp}|^{2}+2\sum_{i,j}|R^{\perp}_{ij}(\nu_{1})|^{2}\] \[-2|\nabla^{\perp}A^{-}|^{2}+4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij },\nabla^{\perp}_{k}\nu_{1}\rangle\] \[+2|H|^{-2}\sum_{i,j,k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta} H^{\alpha}A^{\beta}_{ij}\langle A_{ij},H\rangle+\Big{(}P_{\alpha}-2B^{\prime \prime}\Big{)}\] \[+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p}\langle\bar{ R}_{ij}(\nu_{1}),\mathring{h}_{ip}A^{-}_{jp}-\mathring{h}_{jp}A^{-}_{ip}\rangle,\]
_where_
\[Q_{ijk}:=\langle\nabla^{\perp}_{k}\mathring{A}_{ij},\nu_{1} \rangle-\langle\nabla^{\perp}_{k}A^{-}_{ij},\nu_{1}\rangle-|H|^{-1}\mathring{ h}_{ij}\nabla_{k}|H|.\]
We consider the function \(f=-d_{n}+c_{n}|H|^{2}-|A|^{2}\). The assumption of the theorem is \(f>0\) (and consequently \(|H|>0\)) everywhere on \(\mathcal{M}_{0}\). As \(\mathcal{M}_{0}\) is compact, there exist constants \(\varepsilon_{0},\varepsilon_{1}>0\) depending on \(\mathcal{M}_{0}\), such that \(f\geq\varepsilon_{1}|H|^{2}+\varepsilon_{0}\), on \(\mathcal{M}_{0}\). By Theorem 2 in [3],
\(f\geq\varepsilon_{1}|H|^{2}+\varepsilon_{0}\), on \(\mathcal{M}_{t}\), for every \(t\in[0,T)\) and consequently \(|H|>0\) is preserved as well. Recall
\[c_{n}=\frac{4}{3n},\quad\text{if}\ \ n\geq 8\quad\text{ and}\quad c_{n}=\frac{3(n+1)}{2n(n+2)},\quad\text{if}\ \ n=5,6\ \ \text{or}\ 7.\]
We will require additional pinching for our estimates when \(n=5,6\) or \(7\). Since \(|A|^{2}+\varepsilon_{0}\leq(c_{n}-\varepsilon_{1})|H|^{2}\), for every \(t\in[0,T)\), without loss of generality, we may replace \(c_{n}\) by \(c_{n}-\varepsilon_{1}\) and assume throughout the proof that
\[c_{n}\leq\frac{4}{3n},\quad\text{if}\ \ n\geq 8\quad\text{ and}\quad c_{n}\leq\frac{3(n+1)}{2n(n+2)},\quad\text{if}\ \ n=5,6\ \ \text{or}\ 7.\]
The strictness of the latter inequality depends on initial data through \(\varepsilon_{1}\). We still have \(f\geq\varepsilon_{0}>0\) and \(|H|>0\), for every \(t\). Let \(\delta>0\) be a small constant to be determined later in the proof. By previous work, the evolution equation for \(f\) is
\[\begin{split}\Big{(}\partial_{t}-\Delta\Big{)}f&=2 (|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2})+2\Big{(}c_{n}\sum_{i,j}| \langle A_{ij},H\rangle|^{2}-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2} -\sum_{i,j}|R_{ij}^{\perp}|^{2}\Big{)}\\ &+2c_{n}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{\alpha}H ^{\beta}-P_{\alpha}.\end{split} \tag{5.10}\]
The pinching condition implies both terms on the right hand side of the equation for \(f\) are non negative at each point in space-time. The first step of the proof and the main effort is to analyse the evolution equation \(\frac{|A^{-}|^{2}}{f}\). We will show this ratio satisfies a favourable evolution equation with a right hand side has a nonpositive term. Specifically, we will show that
\[\begin{split}\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}} {f}&\leq 2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f \Big{\rangle}-\delta\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)} f+C^{\prime}\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{\sqrt{f}}\\ &+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_ {i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A_{jp}^{-}-\hat{h}_{jp}A_{ip}^ {-}\rangle\Big{)},\end{split} \tag{5.11}\]
for \(C^{\prime},C^{\prime\prime}\) constants, that depend on \(n,K_{1},K_{2}\) and \(d_{n}\). Then, since at the limit the background space is Euclidean, the result will follow from the maximum principle. By what we have shown this far, the evolution equation of \(\frac{|A^{-}|^{2}}{f}\) is
\[\begin{split}\Big{(}\partial_{t}&-\Delta\Big{)} \frac{|A^{-}|^{2}}{f}=\frac{1}{f}\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2} -|A^{-}|^{2}\frac{1}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f+2\Big{\langle} \nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}\\ &=\frac{1}{f}\Big{(}2\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-} \rangle|^{2}+2|\hat{R}^{\perp}|^{2}+2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2} \Big{)}\\ &+\frac{1}{f}\Big{(}-2|\nabla^{\perp}A^{-}|^{2}+4\sum_{i,j,k}Q_{ ijk}\langle A_{ij}^{-},\nabla_{k}^{\perp}\nu_{1}\rangle+2|H|^{-2}\sum_{i,j,k, \alpha,\beta\geq 2}\bar{R}_{k\alpha k\beta}H^{\alpha}A_{ij}^{\beta}\langle A_{ij},H\rangle\Big{)}\\ &+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_ {i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A_{jp}^{-}-\hat{h}_{jp}A_{ip}^ {-}\rangle\Big{)}\end{split}\]
\[-|A^{-}|^{2}\frac{1}{f^{2}}\Big{(}2(|\nabla^{\perp}A|^{2}-c_{n}| \nabla^{\perp}H|^{2})\Big{)}\] \[-|A^{-}|^{2}\frac{1}{f^{2}}\Big{(}2\Big{(}c_{n}\sum_{i,j}|\langle A _{ij},H\rangle|^{2}-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j }|R_{ij}^{\perp}|^{2}\Big{)}\Big{)}\] \[+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}\] \[+\frac{1}{f}\Big{(}P_{\alpha}-2B^{\prime\prime}\Big{)}-|A^{-}|^{ 2}\frac{1}{f^{2}}\Big{(}2c_{n}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{ \alpha}H^{\beta}-P_{\alpha}\Big{)}.\]
Rearranging these terms, we have
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}=\frac{1}{f }\Big{(}2\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}+2|\hat{R}^{ \perp}|^{2}+2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}\Big{)}\] \[+\frac{1}{f}\Big{(}-2\frac{|A^{-}|^{2}}{f}\Big{(}c_{n}\sum_{i,j} |\langle A_{ij},H\rangle|^{2}-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2 }-\sum_{i,j}|R_{ij}^{\perp}|^{2}\Big{)}\Big{)}\] \[+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_ {i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A_{jp}^{-}-\hat{h}_{jp}A_{ip}^ {-}\rangle\Big{)}\] \[+\frac{1}{f}\Big{(}4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{ \alpha\geq 2}A_{pq}^{\alpha}A_{ij}^{\alpha}\big{)}-4\sum_{j,k,p}\bar{R}_{ kjkp}\big{(}\sum_{i,\alpha\geq 2}A_{pi}^{\alpha}A_{ij}^{\alpha}\big{)} \Big{)}\] \[+\frac{1}{f}\Big{(}2\sum_{k,\alpha,\beta\geq 2}\bar{R}_{k \alpha k\beta}\big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij}^{\beta}\big{)}-8\sum_{j,p,\alpha,\beta\geq 2}\bar{R}_{jp\alpha\beta}\big{(}\sum_{i}A_{ip}^{ \alpha}A_{ij}^{\beta}\big{)}\Big{)}\] \[+\frac{1}{f}\Big{(}2|H|^{-2}\sum_{i,j,k,\alpha,\beta\geq 2}\bar{R}_{ k\alpha k\beta}H^{\alpha}A_{ij}^{\beta}\langle A_{ij},H\rangle+2\sum_{i,j,k, \beta\geq 2}\bar{\nabla}_{k}\bar{R}_{kij\beta}A_{ij}^{\beta}-2\sum_{i,j,k, \beta\geq 2}\bar{\nabla}_{i}\bar{R}_{jkk\beta}A_{ij}^{\beta}\Big{)}\] \[+\frac{1}{f}\Big{(}4\sum_{i,j,k}Q_{ijk}\langle A_{ij}^{-},\nabla_ {k}^{\perp}\nu_{1}\rangle-2|\nabla^{\perp}A^{-}|^{2}-2\frac{|A^{-}|^{2}}{f}(| \nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2})\Big{)}\] \[+\frac{1}{f}\Big{(}\frac{|A^{-}|^{2}}{f}\big{(}4\sum_{i,j,p,q} \bar{R}_{ipjq}\big{(}\sum_{\alpha}A_{pq}^{\alpha}A_{ij}^{\alpha}\big{)}-4\sum_ {j,k,p}\bar{R}_{kjkp}\big{(}\sum_{i,\alpha}A_{pi}^{\alpha}A_{ij}^{\alpha}\big{)} \big{)}-2\sum_{k}\bar{R}_{k1k1}\sum_{i,j}(A_{ij}^{1})^{2}\Big{)}\] \[+\frac{1}{f}\Big{(}\frac{|A^{-}|^{2}}{f}\big{(}2\sum_{k,\alpha, \beta}\bar{R}_{k\alpha k\beta}\big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij}^{\beta} \big{)}+4c_{n}\sum_{k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{\alpha}H^{\beta} -8\sum_{j,p,\alpha,\beta}\bar{R}_{jp\alpha\beta}\big{(}\sum_{i}A_{ip}^{\alpha} A_{ij}^{\beta}\big{)}\big{)}\Big{)}\] \[+\frac{1}{f}\Big{(}\frac{|A^{-}|^{2}}{f}\big{(}2\sum_{i,j,k,\beta} \bar{\nabla}_{k}\bar{R}_{kij\beta}A_{ij}^{\beta}-2\sum_{i,j,k,\beta}\bar{ \nabla}_{i}\bar{R}_{jkk\beta}A_{ij}^{\beta}\big{)}\Big{)}\] \[+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}.\]
Let us provide a brief explanation of the above evolution equation. The first two lines on the right hand side are the higher order terms and the terms in the third line are Euclidean terms. The terms from the third line to the sixth line are lower order terms, that are orthogonal to the principal direction. The terms on the seventh and eleventh line are gradient terms
and the terms from the eighth to the tenth line are lower order terms, both in the principal direction and orthogonal to the principal direction.
We begin by estimating the reaction terms. We will make use of two estimates. The first estimate is proven on page 372 in [3], Section 3. The second estimate is a matrix inequality, which is Lemma 3.3 in [16].
**Lemma 5.4**.: \[\sum_{i,j}\big{|}\mathring{h}_{ij}A^{-}_{ij}\big{|}^{2}+\sum_{i,j}|R^{\perp}_{ ij}(\nu_{1})|^{2}\leq 2|\mathring{h}|^{2}|A^{-}|^{2}+\sum_{i,j}|\bar{R}_{ij}( \nu_{1})|^{2}+4|\bar{R}_{ij}(\nu_{1})||\mathring{h}||A^{-}|,\] (5.12)
\[\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq}\rangle|^{2}+|\hat{R}^{\perp}|^{2} \leq\frac{3}{2}|A^{-}|^{4}+\sum_{\alpha,\beta\geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{-}|^{2}\Big{)}. \tag{5.13}\]
Proof.: The arguments given in [3] to prove inequality (5.12) are simple and short, so we will repeat them in our notation here. We will express inequality (5.13) so that it is an immediate consequence of Lemma 3.3 in [16]. Fix any point \(p\in\mathcal{M}\) and time \(t\in[0,T)\). Let \(e_{1},\ldots,e_{n}\) be an orthonormal basis which identifies \(T_{p}\mathcal{M}\cong\mathbb{R}^{n}\) at time \(t\) and then choose \(\nu_{2},\ldots,\nu_{m}\) to be a basis of the orthogonal complement of principal normal \(\nu_{1}\) in \(N_{p}\mathcal{M}\) at time \(t\). For each \(\beta\in\{2,\ldots,m\}\), define a matrix \(A_{\beta}=\langle A,\nu_{\beta}\rangle\) whose components are given by \((A_{\beta})_{ij}=A^{\beta}_{ij}\).
Then \(A^{-}=\sum_{\beta\geq 2}A_{\beta}\nu_{\beta}\). We also have \(\mathring{h}=\langle\mathring{A},\nu_{1}\rangle\). To prove (5.12), let \(\lambda_{1},\ldots,\lambda_{n}\) denote the eigenvalues of \(\mathring{h}\). Assume the orthonormal basis is an eigenbasis of \(\mathring{h}\). Now
\[\sum_{i,j}|\mathring{h}_{ij}A^{-}_{ij}|^{2}=\sum_{\beta\geq 2}\sum_{i,j,p,q} \mathring{h}_{ij}\mathring{h}_{pq}A^{\beta}_{ij}A^{\beta}_{pq}=\sum_{\beta \geq 2}\big{(}\sum_{i,j}\mathring{h}_{ij}A^{\beta}_{ij}\big{)}^{2}=\sum_{ \beta\geq 2}\big{(}\sum_{i}\lambda_{i}A^{\beta}_{ii}\big{)}^{2}.\]
By Cauchy-Schwarz,
\[\sum_{i,j}|\mathring{h}_{ij}A^{-}_{ij}|^{2}\leq\sum_{\beta\geq 2}\big{(}\sum_{i }\lambda_{j}^{2}\big{)}\big{(}\sum_{i}(A^{\beta}_{ii})^{2}\big{)}=|\mathring{h }|^{2}\sum_{\beta\geq 2}\sum_{i}(A^{\beta}_{ii})^{2}. \tag{5.14}\]
Now, using
\[\sum_{i,j}|R^{\perp}_{ij}(\nu_{1})|^{2}=\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2} +\sum_{i,j,k}|\mathring{h}_{ik}A^{-}_{jk}-\mathring{h}_{jk}A^{-}_{ik}|^{2}+2 \sum_{i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\mathring{h}_{ip}A^{-}_{jp}-\mathring{ h}_{jp}A^{-}_{ip}\rangle, \tag{5.15}\]
and (2.6) we have
\[\sum_{i,j}|R^{\perp}_{ij}(\nu_{1})|^{2} =\sum_{\beta\geq 2}\sum_{i,j,k}\big{(}\mathring{h}_{ik}A^{ \beta}_{jk}-\mathring{h}_{jk}A^{\beta}_{ik}\big{)}^{2}+\sum_{i,j}|\bar{R}_{ij} (\nu_{1})|^{2}+2\sum_{i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\mathring{h}_{ip}A^{- }_{jp}-\mathring{h}_{jp}A^{-}_{ip}\rangle\] \[=\sum_{\beta\geq 2}\sum_{i,j}\big{(}\lambda_{i}-\lambda_{j}\big{)}^{2} (A^{\beta}_{ij})^{2}+\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+2\sum_{i,j,p} \langle\bar{R}_{ij}(\nu_{1}),\mathring{h}_{ip}A^{-}_{jp}-\mathring{h}_{jp}A^{-} _{ip}\rangle\]
\[=\sum_{\beta\geq 2}\sum_{i\neq j}\big{(}\lambda_{i}-\lambda_{j}\big{)}^{2}(A_{ ij}^{\beta})^{2}+\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+2\sum_{i,j,p}\langle\bar{R}_{ij}( \nu_{1}),\hat{h}_{ip}A_{jp}^{-}-\hat{h}_{jp}A_{ip}^{-}\rangle.\]
Since \(\left(\lambda_{i}-\lambda_{j}\right)^{2}\leq 2\left(\lambda_{i}^{2}+\lambda_{j}^{2 }\right)\leq 2|\hat{h}|^{2}\), we have
\[\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}\leq 2|\hat{h}|^{2}\sum_{\beta\geq 2} \sum_{i\neq j}(A_{ij}^{\beta})^{2}+\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4|\bar {R}_{ij}(\nu_{1})||\hat{h}||A^{-}|. \tag{5.16}\]
Summing (5.14) and (5.16), we obtain
\[\sum_{i,j}|\hat{h}_{ij}A_{ij}^{-}|^{2}+\sum_{i,j}|R_{ij}^{\perp} (\nu_{1})|^{2} \leq|\hat{h}|^{2}\sum_{\beta\geq 2}\sum_{i}(A_{ii}^{\beta})^{2}+ 2|\hat{h}|^{2}\sum_{\beta\geq 2}\sum_{i\neq j}(A_{ij}^{\beta})^{2}+\sum_{i,j}| \bar{R}_{ij}(\nu_{1})|^{2}\] \[+4|\bar{R}_{ij}(\nu_{1})||\hat{h}||A^{-}|\] \[\leq 2|\hat{h}|^{2}|A^{-}|^{2}+\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^ {2}+4|\bar{R}_{ij}(\nu_{1})||\hat{h}||A^{-}|,\]
which is (5.12). To establish (5.8), for \(\alpha,\beta\in\{2,\ldots,m\}\) define
\[S_{\alpha\beta}:=\operatorname{tr}\left(A_{\alpha}A_{\beta}\right)=\sum_{i,j, \alpha}A_{ij}^{\alpha}A_{ij}^{\beta}\quad\text{ and }\quad S_{\alpha}:=|A_{\alpha}|^{2}=\sum_{i,j, \alpha}A_{ij}^{\alpha}A_{ij}^{\alpha}\]
Let \(S:=S_{2}+\cdots+S_{m}=|A^{-}|^{2}\). Now
\[\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2} =\sum_{i,j,p,q}\sum_{\alpha,\beta\geq 2}A_{ij}^{\alpha}A_{pq}^{ \alpha}A_{ij}^{\beta}A_{pq}^{\beta}\] \[=\sum_{\alpha,\beta\geq 2}\big{(}\sum_{i,j}A_{ij}^{\alpha}A_{ij}^{ \beta}\big{)}(\sum_{p,q}A_{pq}^{\alpha}A_{pq}^{\beta})\] \[=\sum_{\alpha,\beta\geq 2}S_{\alpha\beta}^{2}.\]
In addition, recalling (5.9), we may write
\[|\hat{R}^{\perp}|^{2}=\sum_{\alpha,\beta\geq 2}\Big{(}|A_{\alpha}A_{\beta}-A_{ \beta}A_{\alpha}|^{2}+\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+2\sum_{i,j,p} \langle\bar{R}_{ij\alpha\beta},A_{ip}^{\alpha}A_{jp}^{\beta}-A_{jp}^{\alpha}A_ {ip}^{\beta}\rangle\Big{)}\]
where \(\left(A_{\alpha}A_{\beta}\right)_{ij}=\left(A_{\alpha}\right)_{ik}\left(A_{ \beta}\right)_{kj}=\left(A_{\alpha}\right)_{ik}\left(A_{\beta}\right)_{jk}\) denotes standard matrix multiplication and \(|\cdot|\) is the usual square norm of the matrix. We see that inequality (5.13) is equivalent to
\[\sum_{\alpha,\beta\geq 2}|A_{\alpha}A_{\beta}-A_{\beta}A_{\alpha}|^{2}+\sum_{ \alpha,\beta\geq 2}S_{\alpha\beta}^{2}\leq\frac{3}{2}S^{2}. \tag{5.17}\]
Therefore, we have
\[\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}+|\hat{R}^{\perp}|^{2} \leq\frac{3}{2}|A^{-}|^{4}+\sum_{i,j,\alpha,\beta\geq 2}\Big{(}|\bar{R}_{ij \alpha\beta}|^{2}+2\sum_{p}\langle\bar{R}_{ij\alpha\beta},A_{ip}^{\alpha}A_{jp }^{\beta}-A_{jp}^{\alpha}A_{ip}^{\beta}\rangle\Big{)}\]
\[\leq\frac{3}{2}|A^{-}|^{4}+\sum_{\alpha,\beta\geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{-}|^{2}\Big{)}.\]
Now if \(m=2\), inequality (5.13) is trivial since \(|\hat{R}^{\perp}|^{2}=0\) and \(\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq}\rangle|^{2}=|A^{-}|^{4}\). Otherwise, if \(m\geq 3\), inequality (5.17) follows Lemma 3.3 in [16]. This completes the proof.
As an immediate consequence of the previous lemma, we have the following estimate for the reaction terms coming from the evolution of \(|A^{-}|^{2}\).
**Lemma 5.5** (Upper bound for the reaction terms of \((\partial_{t}-\Delta)|A^{-}|^{2}\)).: \[\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq}\rangle|^{2}+|\hat{R}^{ \perp}|^{2}+\sum_{i,j,}|R^{\perp}_{ij}(\nu_{1})|^{2} \leq\frac{3}{2}|A^{-}|^{4}+\sum_{\alpha,\beta\geq 2}\Big{(}\sum_{i,j}| \bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{-}|^{2}\Big{)}\] \[+2|\mathring{h}|^{2}|A^{-}|^{2}+\sum_{i,j}|\bar{R}_{ij}(\nu_{1})| ^{2}+4|\bar{R}_{ij}(\nu_{1})||\mathring{h}||A^{-}|.\] (5.18)
Proof.: The proof follows from Lemma 5.4.
Next we express the reaction term in the evolution of \(f\) in terms of \(A^{-},\mathring{h}\), and \(|H|\). In view of the definition of \(f\), observe that
\[\frac{nc_{n}-1}{n}|H|^{2}=|A^{-}|^{2}+|\mathring{h}|^{2}+f+d_{n}. \tag{5.19}\]
In the following lemma, we get a lower bound for the reaction terms in the evolution of \(f\).
**Lemma 5.6** (Lower bound for the reaction terms of \((\partial_{t}-\Delta)\,f\)).: _If \(\frac{1}{n}<c_{n}\leq\frac{4}{3n}\), then_
\[\frac{|A^{-}|^{2}}{f}\Big{(}c_{n}\sum_{i,j}|\langle A_{ij},H \rangle|^{2}-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R^{ \perp}_{ij}|^{2}\Big{)}\geq\frac{2}{nc_{n}-1}|A^{-}|^{4}+\frac{nc_{n}}{nc_{n}- 1}|\mathring{h}|^{2}|A^{-}|^{2}\] \[\qquad-\frac{|A^{-}|^{2}}{f}\Big{(}\sum_{\alpha,\beta\geq 2} \Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{ -}|^{2}\Big{)}+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+8|\bar{R}_{ij}(\nu_{1})| |\mathring{h}||A^{-}|\Big{)}. \tag{5.20}\]
Proof.: We do a computation that is similar to a computation in [3], except we do not throw away the pinching term \(f\). By the following equations
\[|h|^{2}=|\mathring{h}|^{2}+\frac{1}{n}|H|^{2},\]
\[\sum_{i,j}|\langle A_{ij},H\rangle|^{2}=|H|^{2}|h|^{2},\]
\[\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}=|h|^{4}+2\sum_{i,j}\big{|}\hat{ h}_{ij}A_{ij}^{-}\big{|}^{2}+\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-} \rangle|^{2}\]
and
\[2\sum_{i,j}|R_{ij}^{\perp}|^{2}-2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}=|\hat{ R}^{\perp}|^{2}+2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}=|R^{\perp}|^{2}, \tag{5.21}\]
we have
\[c_{n}\sum_{i,j}|\langle A_{ij},H\rangle|^{2} -\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ ij}^{\perp}|^{2}=\frac{1}{n}c_{n}|H|^{4}+c_{n}|\hat{h}|^{2}|H|^{2}-|\hat{h}|^{4}\] \[-\frac{2}{n}|\hat{h}|^{2}|H|^{2}-\frac{1}{n^{2}}|H|^{4}-2\sum_{i, j}|\hat{h}_{ij}A_{ij}^{-}|^{2}-\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-} \rangle|^{2}-|\hat{R}^{\perp}|^{2}\] \[-2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}\] \[=\frac{1}{n}\left(c_{n}-\frac{1}{n}\right)|H|^{4}+\left(c_{n}- \frac{1}{n}\right)|\hat{h}|^{2}|H|^{2}-\frac{1}{n}|\hat{h}|^{2}|H|^{2}-|\hat{h }|^{4}\] \[-2\sum_{i,j}|\hat{h}_{ij}A_{ij}^{-}|^{2}-2\sum_{i,j}|R_{ij}^{ \perp}(\nu_{1})|^{2}-\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2} -|\hat{R}^{\perp}|^{2}.\]
Use (5.19) and cancel terms to get
\[c_{n} \sum_{i,j}|\langle A_{ij},H\rangle|^{2}-\sum_{i,j,p,q}|\langle A _{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ij}^{\perp}|^{2}\] \[=\frac{1}{n}\big{(}|A^{-}|^{2}+|\hat{h}|^{2}+f+d_{n}\big{)}|H|^{2 }+|\hat{h}|^{2}\big{(}|A^{-}|^{2}+|\hat{h}|^{2}+f+d_{n}\big{)}\] \[-\frac{1}{n}|\hat{h}|^{2}|H|^{2}-|\hat{h}|^{4}-2\sum_{i,j}|\hat{h }_{ij}A_{ij}^{-}|^{2}-2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})\,|^{2}-\sum_{i,j,p,q }|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}-|\hat{R}^{\perp}|^{2}\] \[=\frac{1}{n}\left(f+|A^{-}|^{2}+d_{n}\right)|H|^{2}+\left(f+|A^{- }|^{2}+d_{n}\right)|\hat{h}|^{2}\] \[-2\sum_{i,j}|\hat{h}_{ij}A_{ij}^{-}|^{2}-2\sum_{i,j}|R_{ij}^{ \perp}(\nu_{1})|^{2}-\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2} -|\hat{R}^{\perp}|^{2}.\]
Using (5.19) once more for the remaining factor of \(|H|^{2}\) gives
\[c_{n}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}-\sum_{i,j,p,q}| \langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ij}^{\perp}|^{2}\] \[=\frac{1}{n}\left(f+|A^{-}|^{2}+d_{n}\right)\left(c_{n}-\frac{1}{ n}\right)^{-1}\left(f+|A^{-}|^{2}+|\hat{h}|^{2}+d_{n}\right)+\left(f+|A^{-}|^{2}+d_{n }\right)|\hat{\hat{h}}|^{2}\]
\[-2\sum_{i,j}|\hat{h}_{ij}A_{ij}^{-}|^{2}-2\sum_{i,j}|R_{ij}^{\perp}( \nu_{1})|^{2}-\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}-|\hat{R}^ {\perp}|^{2}\] \[=\frac{1}{nc_{n}-1}f(f+2|A^{-}|^{2}+|\hat{\bar{h}}|^{2}+2d_{n})+f| \hat{\bar{h}}|^{2}+\frac{1}{nc_{n}-1}|A^{-}|^{4}+\frac{nc_{n}}{nc_{n}-1}|A^{-}| ^{2}|\hat{\bar{h}}|^{2}\] \[+\frac{nc_{n}}{nc_{n}-1}d_{n}|\hat{\bar{h}}|^{2}+\frac{1}{nc_{n}- 1}d_{n}|A^{-}|^{2}-2\sum_{i,j}|\hat{\bar{h}}_{ij}A_{\bar{i}j}^{-}|^{2}-2\sum_{ i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}-\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-} \rangle|^{2}\] \[-|\hat{R}^{\perp}|^{2}.\]
Now by the two estimates in Lemma 5.4
\[2\sum_{i,j}|\hat{h}_{ij}A_{ij}^{-}|^{2} +2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}+\sum_{i,j,p,q}|\langle A _{ij}^{-},A_{pq}^{-}\rangle|^{2}+|\hat{R}^{\perp}|^{2}\leq 4|\hat{\bar{h}}|^{2}|A^{-}| ^{2}+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}\] \[+8|\bar{R}_{ij}(\nu_{1})||\hat{\bar{h}}||A^{-}|+\frac{3}{2}|A^{-} |^{4}+\sum_{\alpha,\beta\geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4| \bar{R}_{ij\alpha\beta}||A^{-}|^{2}\Big{)}.\]
Therefore,
\[\frac{1}{nc_{n}-1} |A^{-}|^{4}+\frac{nc_{n}}{nc_{n}-1}|A^{-}|^{2}|\hat{\bar{h}}|^{2}- 2\sum_{i,j}|\hat{h}_{ij}A_{ij}^{-}|^{2}-2\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}\] \[-\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}-|\hat{R }^{\perp}|^{2}\] \[\geq\left(\frac{1}{nc_{n}-1}-\frac{3}{2}\right)|A^{-}|^{4}+\left( \frac{nc_{n}}{nc_{n}-1}-4\right)|\hat{\bar{h}}|^{2}|A^{-}|^{2}-2\sum_{i,j}| \bar{R}_{ij}(\nu_{1})|^{2}\] \[-8|\bar{R}_{ij}(\nu_{1})||\hat{\bar{h}}||A^{-}|-\sum_{\alpha, \beta\geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij \alpha\beta}||A^{-}|^{2}\Big{)}.\]
Since \(c_{n}\leq\frac{4}{3n}\), we have
\[\frac{1}{nc_{n}-1}-\frac{3}{2}\geq\frac{3}{2},\quad\frac{nc_{n}}{nc_{n}-1}-4 \geq 0.\]
Consequently, we have
\[c_{n}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}- \sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ij }^{\perp}|^{2}\geq\frac{2}{nc_{n}-1}f|A^{-}|^{2}+\frac{nc_{n}}{nc_{n}-1}f|\hat{ \bar{h}}|^{2}\] \[-8|\bar{R}_{ij}(\nu_{1})||\hat{\bar{h}}||A^{-}|-\sum_{\alpha,\beta \geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha \beta}||A^{-}|^{2}\Big{)}\] \[\geq\frac{2}{nc_{n}-1}f|A^{-}|^{2}+\frac{nc_{n}}{nc_{n}-1}f|\hat{ \bar{h}}|^{2}-8|\bar{R}_{ij}(\nu_{1})||\hat{\bar{h}}||A^{-}|\] \[-\sum_{\alpha,\beta\geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ij\alpha \beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{-}|^{2}\Big{)}-2\sum_{i,j}|\bar{R}_{ ij}(\nu_{1})|^{2}. \tag{5.22}\]
Multiplying both sides by \(\frac{|A^{-}|^{2}}{f}\) completes the proof of the lemma.
Putting Lemmas 5.5 and 5.6 together, we have
**Lemma 5.7** (Reaction term estimate).: _If \(0<\delta\leq\frac{1}{2}\) and \(\frac{1}{n}<c_{n}\leq\frac{4}{3n}\), then_
\[\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}+|\hat{R} ^{\perp}|^{2} +\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}\leq(1-\delta)\frac{|A^{- }|^{2}}{f}\Big{(}c_{n}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}\] \[-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ ij}^{\perp,2}\Big{)}\] \[+(1-\delta)\frac{|A^{-}|^{2}}{f}\Big{(}\sum_{\alpha,\beta\geq 2} \Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{ -}|^{2}\Big{)}\] \[+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+8|\bar{R}_{ij}(\nu_{1})| |\hat{h}||A^{-}|\Big{)}. \tag{5.23}\]
Proof.: In view of (5.18) and (5.20), we have
\[\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}+|\hat{R }^{\perp}|^{2}+\sum_{i,j}|R_{ij}^{\perp}(\nu_{1})|^{2}-(1-\delta)\frac{|A^{-} |^{2}}{f}\Big{(}c_{n}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}\] \[-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ ij}^{\perp}|^{2}\Big{)}\] \[\leq\frac{3}{2}|A^{-}|^{4}+2|\mathring{h}|^{2}|A^{-}|^{2}-\frac{2 (1-\delta)}{nc_{n}-1}|A^{-}|^{4}-\frac{nc_{n}(1-\delta)}{nc_{n}-1}|\mathring{h }|^{2}|A^{-}|^{2}\] \[+(1-\delta)\frac{|A^{-}|^{2}}{f}\Big{(}\sum_{\alpha,\beta\geq 2} \Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{ -}|^{2}\Big{)}\] \[+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+8|\bar{R}_{ij}(\nu_{1})| |\mathring{h}||A^{-}|\Big{)}\] \[=\left(\frac{3}{2}-\frac{2(1-\delta)}{nc_{n}-1}\right)|A^{-}|^{4}+ \left(2-\frac{nc_{n}(1-\delta)}{nc_{n}-1}\right)|\mathring{h}|^{2}|A^{-}|^{2}\] \[+(1-\delta)\frac{|A^{-}|^{2}}{f}\Big{(}\sum_{\alpha,\beta\geq 2} \Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{ -}|^{2}\Big{)}+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+8|\bar{R}_{ij}(\nu_{1})| |\mathring{\mathring{h}}||A^{-}|\Big{)}.\]
If \(c_{n}\leq\frac{4}{3n}\), then
\[\frac{1}{nc_{n}-1}\geq 3\quad\text{ and }\quad\frac{nc_{n}}{nc_{n}-1}\geq 4.\]
Therefore, if \(\delta\leq\frac{1}{2}\)
\[\frac{3}{2}-\frac{2(1-\delta)}{nc_{n}-1}\leq\frac{3}{2}-6(1-\delta)\leq 0,\]
\[2-\frac{nc_{n}(1-\delta)}{nc_{n}-1}\leq 2-4(1-\delta)\leq 0,\]
which gives (5.23).
We are following the arguments of Naff [20], we turn our attention to the gradient terms. For this, we will use (2.10). Recalling that \(A^{-}_{jk}\) is traceless, it is straightforward to verify that
\[\sum_{i,j,k}|\nabla_{i}h_{jk}+\langle\nabla^{\perp}_{i}A^{-}_{jk},\nu_{1} \rangle|^{2}=\sum_{i,j,k}|\nabla_{i}\mathring{h}_{jk}+\langle\nabla^{\perp}_{i }A^{-}_{jk},\nu_{1}\rangle|^{2}+\frac{1}{n}|\nabla|H\|^{2} \tag{5.24}\]
\[\sum_{i,j,k}|\hat{\nabla}^{\perp}_{i}A^{-}_{jk}+h_{jk}\nabla^{\perp}_{i}\nu_{ 1}|^{2}=\sum_{i,j,k}|\hat{\nabla}^{\perp}_{i}A^{-}_{jk}+\mathring{h}_{jk} \nabla^{\perp}_{i}\nu_{1}|^{2}+\frac{1}{n}|H|^{2}|\nabla^{\perp}\nu_{1}|^{2} \tag{5.25}\]
Observe that the first term in (5.24) is just
\[\sum_{i,j,k}|\langle\nabla^{\perp}_{i}\mathring{A}_{jk},\nu_{1} \rangle|^{2}=\sum_{i,j,k}|\nabla_{i}\mathring{h}_{jk}+\langle\nabla^{\perp}_{ i}A^{-}_{jk},\nu_{1}\rangle|^{2}, \tag{5.26}\]
which will be useful later on. Now as observed in [9], using Lemma 2.3, it follows from the Codazzi identity for the second fundamental form that the tensor
\[E_{ijk}= \frac{1}{n+2}\left(\nabla^{\perp}_{i}Hg_{jk}+\nabla^{\perp}_{j} Hg_{ik}+\nabla^{\perp}_{k}Hg_{ij}\right)\] \[-\frac{2}{(n+2)(n-1)}w_{i}g_{jk}+\frac{n}{(n+2)(n-1)}\left(w_{j} g_{ik}+w_{k}g_{ij}\right)\]
is an irreducible component of \(\nabla^{\perp}_{i}A_{jk}\) consisting of its various traces. In other words, \(\langle E_{ijk},\nabla^{\perp}_{i}A_{jk}\rangle=|E|^{2}\). This allows one to get an improved estimate over the trivial one. Namely,
\[|E|^{2}=\frac{3}{n+2}|\nabla^{\perp}H|^{2}\leq|\nabla^{\perp}A|^{2}.\]
The projection of the Codazzi identity onto \(\nu_{1}\) and its orthogonal complement implies the tensors \(\nabla_{i}h_{jk}+\langle\nabla^{\perp}_{i}A^{-}_{jk},\nu_{1}\rangle\) and \(\hat{\nabla}^{\perp}_{i}A^{-}_{jk}+h_{jk}\nabla^{\perp}_{i}\nu_{1}\) are symmetric in \(i,j,k\). Recalling (2.11) and (2.12), it follows that an irreducible component of each tensor is given by
\[E^{(1)}_{ijk}:=\frac{1}{n+2}(g_{ij}\nabla_{k}|H|+g_{jk}\nabla_{i}|H|+g_{ki} \nabla_{j}|H|),\]
\[E^{(\perp)}_{ijk}:=\frac{1}{n+2}(g_{ij}|H|\nabla^{\perp}_{k}\nu_{1}+g_{jk}|H| \nabla^{\perp}_{i}\nu_{1}+g_{ki}|H|\nabla^{\perp}_{j}\nu_{1}).\]
You can readily confirm that \(E^{(1)}_{ijk}(\nabla_{i}h_{jk}+\langle\nabla^{\perp}_{i}A^{-}_{jk},\nu_{1} \rangle)=|E^{(1)}|^{2}\) and \(\sum_{i,j,k}\langle E^{(\perp)}_{ijk},\hat{\nabla}^{\perp}_{i}A^{-}_{jk}+h_{jk} \nabla^{\perp}_{i}\nu_{1}\rangle=|E^{(\perp)}|^{2}\). As in Lemma 2.3, we obtain that
\[\frac{3}{n+2}|\nabla|H||^{2}\leq\sum_{i,j,k}|\nabla_{i}h_{jk}+\langle\nabla^{ \perp}_{i}A^{-}_{jk},\nu_{1}\rangle|^{2},\]
\[\frac{3}{n+2}|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}\leq\sum_{i,j,k}|\hat{\nabla}^{ \perp}_{i}A^{-}_{jk}+h_{jk}\nabla^{\perp}_{i}\nu_{1}|^{2}. \tag{5.27}\]
From Theorem 3.1 and (2.6), we have that
\[\frac{|A^{-}|^{2}}{f^{2}}\Big{(}4\sum_{i,j,p,q}\bar{R}_{ipjq} \big{(}\sum_{\alpha}A^{\alpha}_{pq}A^{\alpha}_{ij}\big{)}-4\sum_{j,k,p}\bar{R} _{kjkp}\big{(}\sum_{i,\alpha}A^{\alpha}_{pi}A^{\alpha}_{ij}\big{)}+2\sum_{k, \alpha,\beta}\bar{R}_{k\alpha k\beta}\big{(}\sum_{i,j}A^{\alpha}_{ij}A^{\beta }_{ij}\big{)}\Big{)}\] \[+\frac{|A^{-}|^{2}}{f^{2}}\Big{(}-8\sum_{j,p,\alpha,\beta}\bar{R} _{jp\alpha\beta}\big{(}\sum_{i}A^{\alpha}_{ip}A^{\beta}_{ij}\big{)}+2\sum_{i,j,k,\beta}\bar{\nabla}_{k}\bar{R}_{kij\beta}A^{\beta}_{ij}-2\sum_{i,j,k,\beta} \bar{\nabla}_{i}\bar{R}_{jkjk\beta}A^{\beta}_{ij}\Big{)}\] \[<C_{1}\frac{|A^{-}|^{2}}{f^{2}}(|A|+|A|^{2})\] \[\leq C_{2}\frac{|A^{-}|^{2}}{f},\]
where we used the fact that the quantities in the parenthesis divided by \(f\) are bounded and \(C_{1}\) and \(C_{2}\) are constants, which depend on \(n,K_{1},K_{2}\) and \(d_{n}\). Also, from (2.6), we have
\[\frac{1}{f}\Big{(}4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{\alpha \geq 2}A^{\alpha}_{pq}A^{\alpha}_{ij}\big{)}-4\sum_{j,k,p}\bar{R}_{kjkp}\big{(} \sum_{i,\alpha\geq 2}A^{\alpha}_{pi}A^{\alpha}_{ij}\big{)}\Big{)}\] \[+\frac{1}{f}\Big{(}2|H|^{-2}\sum_{i,j,k,\alpha,\beta\geq 2}\bar{R} _{k\alpha k\beta}H^{\alpha}A^{\beta}_{ij}\langle A_{ij},H\rangle\Big{)}\] \[+\frac{1}{f}\Big{(}2\sum_{i,j,k,\beta\geq 2}\bar{\nabla}_{k} \bar{R}_{kij\beta}A^{\beta}_{ij}-2\sum_{i,j,k,\beta\geq 2}\bar{\nabla}_{i}\bar{R}_{ jkk\beta}A^{\beta}_{ij}\Big{)}\] \[\leq C_{3}\frac{1}{f}\Big{(}|A^{-}|^{2}+|A^{-}||A|+|A^{-}||h|+|A^{ -}|\Big{)}\] \[\leq C_{4}\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{ \sqrt{f}},\]
where we used the fact that the quantities in the parenthesis divided by \(\sqrt{f}\) are bounded and \(C_{3},C_{4}\) and \(C^{\prime\prime}\) are constants, which depend on \(n,K_{1},K_{2}\) and \(d_{n}\). By previous calculations
we have upper bounds for most of the terms. We will show that he rest of the gradient terms satisfy the following:
\[4\sum_{i,j,k}Q_{ijk}\left\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\right\rangle \leq 2|\nabla^{\perp}A^{-}|^{2}+2(1-\delta)\frac{|A^{-}|^{2}}{f}\left(| \nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2}\right).\]
**Lemma 5.8** (Lower bound for Bochner term of \(\left(\partial_{t}-\Delta\right)|A^{-}|^{2}\)).:
1. _If_ \(\frac{1}{n}<c_{n}\leq\frac{4}{3n}\)_, then_ \[2|\hat{\nabla}^{\perp}A^{-}|^{2}\geq\frac{4n-10}{n+2}|\mathring{h}|^{2}|\nabla ^{\perp}\nu_{1}|^{2}+\frac{6(n-1)}{n+2}(|A^{-}|^{2}+f+d_{n})|\nabla^{\perp}\nu _{1}|^{2}.\]
2. _If_ \(\frac{1}{n}<c_{n}\leq\frac{3(n+1)}{2n(n+2)}\)_, then_ \[2|\hat{\nabla}^{\perp}A^{-}|^{2}\geq 2|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{ 2}+4(|A^{-}|^{2}+f+d_{n})|\nabla^{\perp}\nu_{1}|^{2}.\]
Proof.: We begin by applying Young's inequality
\[\sum_{i,j,k}|\hat{\nabla}^{\perp}_{i}A^{-}_{jk}+\mathring{h}_{jk} \nabla^{\perp}_{i}\nu_{1}|^{2} =|\hat{\nabla}^{\perp}A^{-}|^{2}+2\sum_{i,j,k}(\hat{\nabla}^{\perp }_{i}A^{-}_{jk},\mathring{h}_{jk}\nabla^{\perp}_{i}\nu_{1})+|\mathring{h}|^{2} |\nabla^{\perp}\nu_{1}|^{2}\] \[\leq 2|\hat{\nabla}^{\perp}A^{-}|^{2}+2|\mathring{h}|^{2}| \nabla^{\perp}\nu_{1}|^{2}.\]
Multiplying both sides of (5.19) by \(\frac{2(n-1)}{(n+2)(nc_{n}-1)}\) gives
\[\frac{2(n-1)}{n(n+2)}|H|^{2}=\frac{2(n-1)}{(n+2)\left(nc_{n}-1 \right)}\left(f+|A^{-}|^{2}+|\mathring{h}|^{2}+d_{n}\right).\]
Since
\[\frac{2(n-1)}{n(n+2)}|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}\leq\sum_{ i,j,k}|\hat{\nabla}^{\perp}_{i}A^{-}_{jk}+\mathring{h}_{jk}\nabla^{\perp}_{i} \nu_{1}|^{2}, \tag{5.28}\]
our observations give us that
\[\frac{2(n-1)}{(n+2)\left(nc_{n}-1\right)}\left(f+|A^{-}|^{2}+| \mathring{h}|^{2}+d_{n}\right)|\nabla^{\perp}\nu_{1}|^{2}\leq 2|\hat{\nabla}^{ \perp}A^{-}|^{2}+2|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
Subtracting the \(|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}\) term on the right-hand side gives
\[\frac{2(n-1)}{(n+2)\left(nc_{n}-1\right)}\left(f+|A^{-}|^{2}+d_{n }\right)|\nabla^{\perp}\nu_{1}|^{2} +\left(\frac{2(n-1)}{(n+2)\left(nc_{n}-1\right)}-2\right)| \mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}\] \[\leq 2|\hat{\nabla}^{\perp}A^{-}|^{2}. \tag{5.29}\]
If \(c_{n}\leq\frac{4}{3n}\), then \(nc_{n}-1\leq\frac{1}{3}\) and
\[\frac{2(n-1)}{\left(n+2\right)\left(nc_{n}-1\right)}\geq\frac{6(n-1)}{n+2}.\]
Plugging this into (5.29), gives the first estimate of the lemma. If \(c_{n}\leq\frac{3(n+1)}{2n(n+2)}\), then \(nc_{n}-1\leq\frac{n-1}{2(n+2)}\) and
\[\frac{2(n-1)}{\left(n+2\right)\left(nc_{n}-1\right)}\geq 4.\]
Plugging this into (5.29), establishes the second estimate in the lemma.
**Lemma 5.9** (Lower bound for Bochner term of \(\left(\partial_{t}-\Delta\right)f\)).:
1. _If_ \(\frac{1}{n}<c_{n}\leq\frac{4}{3n}\)_, then_ \[2\frac{|A^{-}|^{2}}{f}\left(|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2} \right)\geq\frac{5n-8}{3(n-1)}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring {A},\nu_{1}\rangle|^{2}+\frac{10n-16}{n+2}|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{ 2}.\]
2. _If_ \(\frac{1}{n}<c_{n}\leq\frac{3(n+1)}{2n(n+2)}\)_, then_ \[2\frac{|A^{-}|^{2}}{f}\left(|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2} \right)\geq\frac{3}{2}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A}, \nu_{1}\rangle|^{2}+6|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
Proof.: Using
\[|\nabla^{\perp}A|^{2}=\sum_{i,j,k}|\hat{\nabla}_{i}^{\perp}A_{jk}^{-}+h_{jk} \nabla_{i}^{\perp}\nu_{1}|^{2}+\sum_{i,j,k}|\langle\nabla_{i}^{\perp}A_{jk}^{- },\nu_{1}\rangle+\nabla_{i}h_{jk}|^{2}\]
and
\[|\nabla^{\perp}H|^{2}=|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}+|\nabla|H||^{2}, \tag{5.30}\]
we have
\[|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2} =\sum_{i,j,k}|\langle\nabla_{i}^{\perp}A_{jk}^{-},\nu_{1}\rangle +\nabla_{i}h_{jk}|^{2}-c_{n}|\nabla|H|^{2}\] \[+\sum_{i,j,k}|\hat{\nabla}_{i}^{\perp}A_{jk}^{-}+h_{jk}\nabla_{i} ^{\perp}\nu_{1}|^{2}-c_{n}|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
Note that
\[\sum_{i,j,k}|\nabla_{i}h_{jk}+\langle\nabla_{i}^{\perp}A_{jk}^{-},\nu_{1} \rangle|^{2}=\sum_{i,j,k}|\nabla_{i}\mathring{h}_{jk}+\langle\nabla_{i}^{ \perp}A_{jk}^{-},\nu_{1}\rangle|^{2}+\frac{1}{n}|\nabla|H||^{2}\]
and
\[\sum_{i,j,k}|\langle\nabla_{i}^{\perp}\mathring{A}_{jk},\nu_{1}\rangle|^{2}=\sum_{ i,j,k}|\nabla_{i}\mathring{\mathring{h}}_{jk}+\langle\nabla_{i}^{\perp}A_{jk}^{-}, \nu_{1}\rangle|^{2}.\]
Also, since
\[\frac{2(n-1)}{n(n+2)}|\nabla|H||^{2}\leq\sum_{i,j,k}|\langle\nabla_{i}^{\perp} \mathring{\mathring{A}}_{jk},\nu_{1}\rangle|^{2},\]
we have
\[\sum_{i,j,k}|\langle\nabla_{i}^{\perp}A_{jk}^{-},\nu_{1}\rangle+ \nabla_{i}h_{jk}|^{2}-c_{n}|\nabla|H||^{2} =\sum_{i,j,k}|\langle\nabla_{i}^{\perp}\mathring{\mathring{A}}_{ jk},\nu_{1}\rangle|^{2}-\frac{nc_{n}-1}{n}|\nabla|H||^{2}\] \[\geq\left(1-\frac{\left(n+2\right)\left(nc_{n}-1\right)}{2(n-1) }\right)\sum_{i,j,k}|\langle\nabla_{i}^{\perp}\mathring{\mathring{A}}_{jk}, \nu_{1}\rangle|^{2}.\]
In view of (5.19) and (5.27), we have
\[\sum_{i,j,k}|\hat{\nabla}_{i}^{\perp}A_{jk}^{-}+h_{jk}\nabla_{i}^ {\perp}\nu_{1}|^{2} -c_{n}|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}\geq\left(\frac{3}{n+2}-c _{n}\right)|H|^{2}|\nabla^{\perp}\nu_{1}|^{2}\] \[=\frac{n}{nc_{n}-1}\left(\frac{3}{n+2}-c_{n}\right)(f+|A^{-}|^{2} +|\mathring{h}|^{2}+d_{n})|\nabla^{\perp}\nu_{1}|^{2}\] \[\geq\frac{n}{nc_{n}-1}\left(\frac{3}{n+2}-c_{n}\right)f|\nabla^{ \perp}\nu_{1}|^{2}.\]
Thus, by the three previous computations, we have
\[2\frac{|A^{-}|^{2}}{f}\left(|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{ \perp}H|^{2}\right) \geq\left(2-\frac{\left(n+2\right)\left(nc_{n}-1\right)}{n-1} \right)\frac{|A^{-}|^{2}}{f}\sum_{i,j,k}|\langle\nabla_{i}^{\perp}\mathring{A }_{jk},\nu_{1}\rangle|^{2}\] \[+\frac{2n}{nc_{n}-1}\left(\frac{3}{n+2}-c_{n}\right)|A^{-}|^{2}| \nabla^{\perp}\nu_{1}|^{2}.\]
If \(c_{n}\leq\frac{4}{3n}\), then \(nc_{n}-1\leq\frac{1}{3}\) and
\[2-\frac{\left(n+2\right)\left(nc_{n}-1\right)}{n-1} \geq 2-\frac{n+2}{3(n-1)}=\frac{5n-8}{3(n-1)},\] \[\frac{2n}{nc_{n}-1}\left(\frac{3}{n+2}-c_{n}\right) \geq 6n\left(\frac{9n-4(n+2)}{3n(n+2)}\right)=\frac{10n-16}{n+2}.\]
This establishes the first inequality of the lemma. If \(c_{n}\leq\frac{3(n+1)}{2n(n+2)}\), then \(nc_{n}-1\leq\frac{n-1}{2(n+2)}\) and
\[2-\frac{\left(n+2\right)\left(nc_{n}-1\right)}{n-1} \geq 2-\frac{1}{2}=\frac{3}{2},\] \[\frac{2n}{nc_{n}-1}\left(\frac{3}{n+2}-c_{n}\right) \geq\frac{4n(n+2)}{n-1}\left(\frac{6n-3(n+1)}{2n(n+2)}\right)=6.\]
This establishes the second inequality of the lemma.
**Lemma 5.10** (Upper bound for gradient term of \((\partial_{t}-\Delta)\,|A^{-}|^{2}\) ).:
1. _If_ \(\frac{1}{n}<c_{n}\leq\frac{4}{3n}\)_, then_ \[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+\frac{5n-9}{3( n-1)}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}\] \[+2|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+\frac{3(n-1)}{n-3}f| \nabla^{\perp}\nu_{1}|^{2}+\frac{2(n+2)}{n+3}|\mathring{h}|^{2}|\nabla^{\perp }\nu_{1}|^{2}.\]
2. _If_ \(\frac{1}{n}<c_{n}\leq\frac{3(n+1)}{2n(n+2)}-\varepsilon_{0}\) _and_ \(\varepsilon=\frac{2n(n+2)}{3(n-1)}\varepsilon_{0}\)_, then_ \[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+(1- \varepsilon)\frac{3}{2}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A },\nu_{1}\rangle|^{2}\] \[+2|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+4f|\nabla^{\perp}\nu_{1} |^{2}+2|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
Proof.: Using the definition of \(Q_{ijk}\), we get
\[|Q|\leq|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle|+|\langle\nabla^{ \perp}A^{-},\nu_{1}\rangle|+|H|^{-1}|\mathring{h}||\nabla|H|| \tag{5.31}\]
We will first treat the case \(\frac{1}{n}<c_{n}\leq\frac{4}{3n}\). It easily follows from the definition of \(f\) that
\[f\leq\left(c_{n}-\frac{1}{n}\right)|H|^{2}\leq\frac{1}{3n}|H|^{2}.\]
Consequently, using the estimate
\[\frac{2(n-1)}{n(n+2)}|\nabla|H||^{2}\leq\sum_{i,j,k}|\langle\nabla^{\perp} _{i}\mathring{A}_{jk},\nu_{1}\rangle|^{2} \tag{5.32}\]
we obtain
\[\frac{|A^{-}|^{2}}{|H|^{2}}|\nabla|H||^{2}\leq\frac{n(n+2)}{2(n-1)}\frac{1}{3 n}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}= \frac{n+2}{6(n-1)}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu _{1}\rangle|^{2}. \tag{5.33}\]
Then
\[|\langle A^{-},\nabla^{\perp}\nu_{1}\rangle|^{2}=\sum_{i,j}\langle A^{-}_{ij},\nabla^{\perp}_{i}\nu_{1}\rangle^{2}\leq\sum_{i,j,k}\sum_{\beta\geq 2}(A^{ \beta}_{ij})^{2}\langle\nabla^{\perp}_{k}\nu_{1},\nu_{\beta}\rangle^{2}\]
and (5.31) give
\[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 4|Q||\langle A^{-},\nabla^{\perp}\nu_{1}\rangle|\] \[\leq 4\left(|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle|+| \langle\nabla^{\perp}A^{-},\nu_{1}\rangle|+|H|^{-1}|\mathring{h}|\nabla|H| \right)|A^{-}||\nabla^{\perp}\nu_{1}|.\]
Now to each of these three summed terms above we apply Young's inequality with constants \(a_{1},a_{2},a_{3}>0\). Specifically, we have
\[4|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle||A^{-}||\nabla^{\perp }\nu_{1}| \leq 2a_{1}|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+\frac{2} {a_{1}}|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2},\] \[4|\langle\nabla^{\perp}\hat{A},\nu_{1}\rangle||A^{-}||\nabla^{ \perp}\nu_{1}| =4|\langle\nabla^{\perp}\hat{A},\nu_{1}\rangle|\frac{|A^{-}|}{ \sqrt{f}}f^{\frac{1}{2}}|\nabla^{\perp}\nu_{1}|\] \[\leq 2a_{2}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\hat{A}, \nu_{1}\rangle|^{2}+\frac{2}{a_{2}}f|\nabla^{\perp}\nu_{1}|^{2},\] \[4|H|^{-1}|\mathring{h}||\nabla|H||A^{-}||\nabla^{\perp}\nu_{1}| \leq 2a_{3}\frac{|A^{-}|^{2}}{|H|^{2}}\,|\nabla|H||^{2}+\frac{2} {a_{3}}|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}\] \[\leq 2a_{3}\frac{n+2}{6(n-1)}\frac{|A^{-}|^{2}}{f}|\langle\nabla^ {\perp}\mathring{A},\nu_{1}\rangle|^{2}+\frac{2}{a_{3}}|\mathring{h}|^{2}| \nabla^{\perp}\nu_{1}|^{2}.\]
Note we used (5.33) in the last inequality. Hence
\[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 2a_{1}|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+\left(2a _{2}+2a_{3}\frac{n+2}{6(n-1)}\right)\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp }\mathring{A},\nu_{1}\rangle|^{2}\] \[+\frac{2}{a_{1}}|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+\frac{2}{a _{2}}f|\nabla^{\perp}\nu_{1}|^{2}+\frac{2}{a_{3}}|\mathring{h}|^{2}|\nabla^{ \perp}\nu_{1}|^{2}. \tag{5.34}\]
Now set
\[a_{1}=1,\quad a_{2}=\frac{2(n-3)}{3(n-1)},\quad a_{3}=\frac{n+3}{n+2}.\]
In this case,
\[2a_{2}+2a_{3}\frac{n+2}{6(n-1)} =\frac{4(n-3)}{3(n-1)}+\frac{n+3}{n+2}\frac{n+2}{3(n-1)}=\frac{5n- 9}{3(n-1)},\] \[\frac{2}{a_{2}} =\frac{3(n-1)}{n-3},\] \[\frac{2}{a_{3}} =\frac{2(n+2)}{n+3}.\]
Plugging these into (5.34), we have the first inequality as claimed. Now if \(\frac{1}{n}<c_{n}\leq\frac{3(n+1)}{2n(n+2)}-\varepsilon_{0}\), then \(c_{n}-\frac{1}{n}\leq\frac{n-1}{2n(n+2)}-\varepsilon_{0}\). Therefore, if we take \(\varepsilon=\frac{2n(n+2)}{3(n-1)}\varepsilon_{0}\), then
\[c_{n}-\frac{1}{n}\leq(1-3\varepsilon)\frac{n-1}{2n(n+2)}.\]
In this case,
\[f\leq\left(c_{n}-\frac{1}{n}\right)|H|^{2}\leq(1-3\varepsilon) \frac{n-1}{2n(n+2)}|H|^{2}.\]
Again using the definition of \(f\), it follows that
\[\frac{|A^{-}|^{2}}{|H|^{2}}|\nabla|H||^{2} \leq(1-3\varepsilon)\frac{n(n+2)}{2(n-1)}\frac{n-1}{2n(n+2)}\frac{ |A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}\] \[=\frac{1}{4}(1-3\varepsilon)\frac{|A^{-}|^{2}}{f}|\langle\nabla^ {\perp}\mathring{A},\nu_{1}\rangle|^{2}.\]
Proceeding as we did before, we obtain the inequality
\[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 2a_{1}|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+\left(2a _{2}+\frac{1}{2}a_{3}(1-3\varepsilon)\right)\frac{|A^{-}|^{2}}{f}|\langle \nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}\] \[+\frac{2}{a_{1}}|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+\frac{2}{ a_{2}}f|\nabla^{\perp}\nu_{1}|^{2}+\frac{2}{a_{3}}|\mathring{h}|^{2}|\nabla^{ \perp}\nu_{1}|^{2}. \tag{5.35}\]
Set
\[a_{1}=1,\quad a_{2}=\frac{1}{2},\quad a_{3}=1.\]
In this case,
\[2a_{2}+\frac{1}{2}a_{3}(1-3\varepsilon) =\frac{3}{2}(1-\varepsilon),\] \[\frac{2}{a_{2}} =4,\] \[\frac{2}{a_{3}} =2.\]
Plugging these into (5.35), we get the second inequality as claimed.
Finally, putting the conclusions of Lemma 5.8, 5.9 and 5.10 together, we get the following result.
**Lemma 5.11** (Gradient term estimate).:
_Suppose either \(n\geq 8,\frac{1}{n}<c_{n}\leq\frac{4}{3n}\) and \(0<\delta\leq\frac{1}{5n-8}\) or \(\frac{1}{n}<c_{n}\leq\frac{3(n+1)}{2n(n+2)}-\varepsilon_{0}\), and \(0<\delta\leq\min\left\{\frac{1}{2},\frac{2n(n+2)}{3(n-1)}\varepsilon_{0}\right\}\). Then in either case,_
\[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle\leq 2 |\nabla^{\perp}A^{-}|^{2}+2(1-\delta)\frac{|A^{-}|^{2}}{f}\left(|\nabla^{\perp }A|^{2}-c_{n}|\nabla^{\perp}H|^{2}\right).\]
Proof.: First suppose \(n\geq 8,\frac{1}{n}<c_{n}\leq\frac{4}{3n}\) and \(0<\delta\leq\frac{1}{5n-8}\). Expanding \(|\nabla^{\perp}A^{-}|^{2}\) using
\[|\nabla^{\perp}A^{-}|^{2}=|\hat{\nabla}^{\perp}A^{-}|^{2}+|\langle\nabla^{ \perp}A^{-},\nu_{1}\rangle|^{2} \tag{5.36}\]
and using the inequality (5.30) in Lemma 5.8 gives us
\[2|\nabla^{\perp}A^{-}|^{2} =2|\hat{\nabla}^{\perp}A^{-}|^{2}+2|\langle\nabla^{\perp}A^{-}, \nu_{1}\rangle|^{2}\] \[\geq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+\frac{4n-10} {n+2}|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+\frac{6(n-1)}{n+2}(|A^{-}| ^{2}+f+d_{n})|\nabla^{\perp}\nu_{1}|^{2}.\]
Multiplying the first result in Lemma 5.9 by \((1-\delta)\) and using that \(1-\delta\geq\frac{1}{2}\) on the coefficient of \(|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}\) gives
\[2(1-\delta)\frac{|A^{-}|^{2}}{f}(|\nabla^{\perp}A|^{2}-c_{n}| \nabla^{\perp}H|^{2}) \geq(1-\delta)\frac{5n-8}{3(n-1)}\frac{|A^{-}|^{2}}{f}|\langle \nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}\] \[+\frac{5n-8}{n+2}|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
Putting these together, we get
\[2|\nabla^{\perp}A^{-}|^{2} +2(1-\delta)\frac{|A^{-}|^{2}}{f}\left(|\nabla^{\perp}A|^{2}-c_{ n}|\nabla^{\perp}H|^{2}\right)\geq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle| ^{2}\] \[+(1-\delta)\frac{5n-8}{3(n-1)}\frac{|A^{-}|^{2}}{f}|\langle \nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}+\frac{11n-14}{n+2}|A^{-}|^{2}| \nabla^{\perp}\nu_{1}|^{2}\] \[+\frac{6(n-1)}{n+2}(f+d_{n})|\nabla^{\perp}\nu_{1}|^{2}+\frac{4n- 10}{n+2}|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
On the other hand, the first result of Lemma 5.10 gives us that
\[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+\frac{5n-9}{3 (n-1)}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle| ^{2}\] \[+2|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+\frac{3(n-1)}{n-3}f| \nabla^{\perp}\nu_{1}|^{2}+\frac{2(n+2)}{n+3}|\mathring{h}|^{2}|\nabla^{\perp }\nu_{1}|^{2}.\]
Therefore, it only remains to compare the coefficients of like terms in the two inequalities above. For the coefficients of \(\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu_{1}\rangle|^{2}\), we need
\[\frac{5n-9}{3(n-1)}\leq(1-\delta)\frac{5n-8}{3(n-1)}\Longleftrightarrow\delta \leq\frac{1}{5n-8}.\]
Comparing the coefficients of the remaining terms implies we need
\[2n+4\leq 11n-14\Longleftrightarrow 2\leq n,\]
\[n+2\leq 2(n-3)\Longleftrightarrow 8\leq n\]
and
\[2(n+2)^{2}\leq(4n-10)(n+3)\Longleftrightarrow 19\leq n(n-3).\]
Each of these inequalities is true if \(n\geq 8\) completing the proof for the first case. Now suppose \(\frac{1}{n}<c_{n}<\frac{3(n+1)}{2n(n+2)}\) and \(0<\delta<\min\left\{\frac{1}{2},\frac{2n(n+2)}{3(n-1)}\varepsilon_{0}\right\}\). Arguing as before, this time using the second result in Lemma 5.8 and the second result in Lemma 5.9 yields
\[2\left|\nabla^{\perp}A^{-}\right|^{2} +2(1-\delta)\frac{|A^{-}|^{2}}{f}\left(|\nabla^{\perp}A|^{2}-c_{n }|\nabla^{\perp}H|^{2}\right)\geq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}\] \[+(1-\delta)\frac{3}{2}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp }\mathring{A},\nu_{1}\rangle|^{2}+7|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}+4(f+ d_{n})|\nabla^{\perp}\nu_{1}|^{2}\] \[+2|\mathring{h}|^{2}|\nabla^{\perp}\nu_{1}|^{2}.\]
Note we again used \(\delta\leq\frac{1}{2}\) to simplify the coefficient of \(|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}\). On the other hand, by the second result in Lemma 5.10, we have
\[4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^{\perp}_{k}\nu_{1}\rangle \leq 2|\langle\nabla^{\perp}A^{-},\nu_{1}\rangle|^{2}+(1-\delta) \frac{3}{2}\frac{|A^{-}|^{2}}{f}|\langle\nabla^{\perp}\mathring{A},\nu_{1} \rangle|^{2}+f|A^{-}|^{2}|\nabla^{\perp}\nu_{1}|^{2}\] \[+4(f+d_{n})|\nabla^{\perp}\nu_{1}|^{2}+2|\mathring{h}|^{2}| \nabla^{\perp}\nu_{1}|^{2}\] \[\leq 2|\nabla^{\perp}A^{-}|^{2}+2(1-\delta)\frac{|A^{-}|^{2}}{f} \big{(}|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2}\big{)},\]
where recall \(\varepsilon=\frac{2n(n+2)}{3(n-1)}\varepsilon_{0}\). Using the assumption that \(\delta\leq\varepsilon\), this completes the proof of the lemma for the second case.
We complete the proof of Theorem 5.1. Let \(\delta\) be sufficiently small so that each of our above calculations hold. We begin by splitting off the desired nonpositive term in the evolution equation.
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f} =\frac{1}{f}\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2}-|A^{-}|^ {2}\frac{1}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f+2\Big{\langle}\nabla \frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}\] \[=2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle} -\delta\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f\] \[+\frac{1}{f}\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2}-(1- \delta)\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f.\]
Using the previous calculations, the sum of the terms at the second line are non positive:
\[\frac{1}{f}\Big{(}\partial_{t} -\Delta\Big{)}|A^{-}|^{2}-(1-\delta)\frac{|A^{-}|^{2}}{f^{2}} \Big{(}\partial_{t}-\Delta\Big{)}f\] \[=\frac{1}{f}\Big{(}2\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq} \rangle|^{2}+2|\mathring{R}^{\perp}|^{2}+2\sum_{i,j}|R^{\perp}_{ij}(\nu_{1})|^ {2}\Big{)}\] \[+\frac{1}{f}\Big{(}4\sum_{i,j,p,q}\bar{R}_{ipjq}\big{(}\sum_{ \alpha\geq 2}A^{\alpha}_{pq}A^{\alpha}_{ij}\big{)}-4\sum_{j,k,p}\bar{R}_{kjkp} \big{(}\sum_{i,\alpha\geq 2}A^{\alpha}_{pi}A^{\alpha}_{ij}\big{)}\Big{)}\] \[+\frac{1}{f}\Big{(}2\sum_{k,\alpha,\beta\geq 2}\bar{R}_{k\alpha k \beta}\big{(}\sum_{i,j}A^{\alpha}_{ij}A^{\beta}_{ij}\big{)}-8\sum_{j,p,\alpha, \beta\geq 2}\bar{R}_{jpo\beta}\big{(}\sum_{i}A^{\alpha}_{ip}A^{\beta}_{ij} \big{)}\Big{)}\]
\[+\frac{1}{f}\Big{(}2|H|^{-2}\sum_{i,j,k,\alpha,\beta\geq 2}\bar{R}_{ k\alpha k\beta}H^{\alpha}A^{\beta}_{ij}(A_{ij},H)+2\sum_{i,j,k,\beta\geq 2} \bar{\nabla}_{k}\bar{R}_{kij\beta}A^{\beta}_{ij}-2\sum_{i,j,k,\beta\geq 2} \bar{\nabla}_{i}\bar{R}_{jk\beta}A^{\beta}_{ij}\Big{)}\] \[+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{ i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^{-}_{ip} \rangle\Big{)}\] \[+\frac{1}{f}\Big{(}4\sum_{i,j,k}Q_{ijk}\langle A^{-}_{ij},\nabla^ {\perp}_{k}\nu_{1}\rangle-2|\nabla^{\perp}A^{-}|^{2}-2(1-\delta)\frac{|A^{-}|^ {2}}{f}\big{(}|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp}H|^{2}\big{)}\Big{)}\] \[-(1-\delta)\Big{(}-2\frac{|A^{-}|^{2}}{f^{2}}\Big{(}c_{n}\sum_{ i,j}|\langle A_{ij},H\rangle|^{2}-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2} -\sum_{i,j}|R^{\perp}_{ij}|^{2}\Big{)}\] \[+\frac{|A^{-}|^{2}}{f^{2}}\big{(}4\sum_{i,j,p,q}\bar{R}_{ipjq} \big{(}\sum_{\alpha}A^{\alpha}_{pq}A^{\alpha}_{ij}\big{)}-4\sum_{j,k,p}\bar{R }_{kjkp}\big{(}\sum_{i,\alpha}A^{\alpha}_{pi}A^{\alpha}_{ij}\big{)}-2\sum_{k} \bar{R}_{k1k1}\sum_{i,j}(A^{1}_{ij})^{2}\big{)}\] \[+\frac{|A^{-}|^{2}}{f^{2}}\big{(}2\sum_{k,\alpha,\beta}\bar{R}_{ k\alpha k\beta}\big{(}\sum_{i,j}A^{\alpha}_{ij}A^{\beta}_{ij}\big{)}+4c_{n}\sum_{ k,\alpha,\beta}\bar{R}_{k\alpha k\beta}H^{\alpha}H^{\beta}-8\sum_{j,p,\alpha,\beta} \bar{R}_{jp\alpha\beta}\big{(}\sum_{i}A^{\alpha}_{ip}A^{\beta}_{ij}\big{)} \big{)}\] \[+\frac{|A^{-}|^{2}}{f^{2}}\big{(}2\sum_{i,j,k,\beta}\bar{\nabla}_ {k}\bar{R}_{kij\beta}A^{\beta}_{ij}-2\sum_{i,j,k,\beta}\bar{\nabla}_{i}\bar{R }_{jkk\beta}A^{\beta}_{ij}\big{)}\Big{)}\] \[\leq(1-\delta)\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\sum_{\alpha,\beta \geq 2}\Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha \beta}||A^{-}|^{2}\Big{)}+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+8|\bar{R}_{ij }(\nu_{1})||\hat{h}||A^{-}|\Big{)}\] \[+C\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{\sqrt{f}}+ \frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p}\langle \bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^{-}_{ip}\rangle\Big{)},\]
for \(C,C^{\prime\prime}\) constants depending on \(n,K_{1},K_{2}\) and \(d_{n}\). Repeating the same estimate as before, we can see
\[(1-\delta)\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\sum_{\alpha,\beta\geq 2} \Big{(}\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}+4|\bar{R}_{ij\alpha\beta}||A^{-} |^{2}\Big{)}+2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+8|\bar{R}_{ij}(\nu_{1})| \mathring{h}||A^{-}|\Big{)}\] \[+C\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{\sqrt{f}}+ \frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p}\langle \bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^{-}_{ip}\rangle\Big{)}\] \[=(1-\delta)\frac{|A^{-}|^{2}}{f}\Big{(}\sum_{\alpha,\beta\geq 2} \Big{(}\frac{\sum_{i,j}|\bar{R}_{ij\alpha\beta}|^{2}}{f}+4|\bar{R}_{ij\alpha \beta}|\frac{|A^{-}|^{2}}{f}\Big{)}+2\frac{\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2 }}{f}+8|\bar{R}_{ij}(\nu_{1})|\frac{\mathring{h}|}{\sqrt{f}}\frac{|A^{-}|}{ \sqrt{f}}\Big{)}\] \[+C\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{\sqrt{f}}+ \frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p}\langle \bar{R}_{ij}(\nu_{1}),\hat{h}_{ip}A^{-}_{jp}-\hat{h}_{jp}A^{-}_{ip}\rangle\Big{)}\] \[\leq C^{\prime}\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|} {\sqrt{f}}+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p} \langle\bar{R}_{ij}(\nu_{1}),\mathring{h}_{ip}A^{-}_{jp}-\mathring{\mathring{h}}_ {jp}A^{-}_{ip}\rangle\Big{)},\]
were the last term on the last row is bounded from above. Thus, according to our previous calculations we get (5.11), which was our initial claim:
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}\leq 2\Big{\langle} \nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}-\delta\frac{|A^{-}|^{2}}{f^{ 2}}\Big{(}\partial_{t}-\Delta\Big{)}f+C^{\prime}\frac{|A^{-}|^{2}}{f}+C^{\prime \prime}\frac{|A^{-}|}{\sqrt{f}}\]
\[+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{i,j,p} \langle\bar{R}_{ij}(\nu_{1}),\dot{\bar{h}}_{ip}A_{jp}^{-}-\dot{\bar{h}}_{jp}A_{ip }^{-}\rangle\Big{)}.\]
Recall \(\Big{(}\partial_{t}-\Delta\Big{)}f\) is non negative at each point in space time.
Now,
\[\frac{1}{f}\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2}-\frac{|A^{ -}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f =\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}-2\Big{ }\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}\] \[\leq-\delta\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta \Big{)}f+C^{\prime}\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{\sqrt {f}}\] \[+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{ i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\dot{\bar{h}}_{ip}A_{jp}^{-}-\dot{\bar{h}}_{jp} A_{ip}^{-}\rangle\Big{)}.\]
We are now ready to prove the main theorem of this paper.
**Theorem 5.12**.: _Let \(F:\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\) be a smooth solution to mean curvature flow so that \(F_{0}(p)=F(p,0)\) is compact and quadratically pinched. Then \(\forall\varepsilon>0,\exists H_{0}>0\), such that if \(f\geq H_{0}\), then_
\[\big{|}A^{-}\big{|}^{2}\leq\varepsilon f+C_{\varepsilon}\]
\(\forall t\in[0,T)\) _where \(C_{\varepsilon}=C_{\varepsilon}(n,m)\)._
Proof.: Since \(\mathcal{M}\) is quadratically bounded, there exist constants \(C,D\) such that
\[|A^{-}|^{2}\leq Cf+D.\]
Therefore, the above estimate holds for all \(\varepsilon\geq\frac{c_{n}}{\delta}\). Indeed, from the pinching \(|A|^{2}\leq c_{n}|H|^{2}-d_{n}\), we can make a little bit more space so that
\[|A^{-}|^{2}+|A^{+}|^{2}=|A|^{2}\leq(c_{n}-\delta)|H|^{2}-C_{\delta}-d_{n}\]
and therefore,
\[\delta|H|^{2}\leq c_{n}|H|^{2}-|A|^{2}-d_{n}.\]
But since \(|A^{-}|^{2}\leq|A|^{2}\leq c_{n}|H|^{2}\), we have \(\frac{\delta|A^{-}|^{2}}{c_{n}}\leq\frac{\delta|A|^{2}}{c_{n}}\leq\delta|H|^{2}\), so
\[\frac{\delta}{c_{n}}|A^{-}|^{2}\leq c_{n}|H|^{2}-d_{n}-|A|^{2}=f,\]
which means that \(|A^{-}|^{2}\leq\frac{c_{n}}{\delta}f\leq\varepsilon f+C_{\varepsilon}\). Hence, let \(\varepsilon_{0}\) denote the infimum of such \(\varepsilon\) for which the estimate is true and suppose \(\varepsilon_{0}>0\). We will prove the theorem by contradiction.
Hence, let us assume that the conclusions of the theorem are not true that is there exists a family of mean curvature flow \(\mathcal{M}_{t}^{k}\) with points \((p_{k},t_{k})\) such that
\[\lim_{k\to\infty}\frac{|A_{k}^{-}(p_{k},t_{k})|^{2}}{f_{k}(p_{k},t_ {k})}=\varepsilon_{0} \tag{5.37}\]
with \(\varepsilon_{0}>0\) and \(f_{k}(p_{k},t_{k})\to\infty\). We perform a parabolic rescaling of \(\mathcal{M}_{t}^{k}\) in such a way that \(f_{k}\) at \((p_{k},t_{k})\) becomes \(1\). If we consider the exponential map \(\exp_{\bar{p}}\colon T_{\bar{p}}\mathcal{N}\cong\mathbb{R}^{n+m}\to\mathcal{N} ^{n+m}\) and \(\gamma\) a geodesic, then for a vector \(v\in T_{\bar{p}}\mathcal{N}\), then
\[\exp_{\bar{p}}(v)=\gamma_{\bar{p},\frac{v}{|v|}}(|v|),\ \ \gamma^{ \prime}(0)=\frac{v}{|v|}\ \ \ \text{and}\ \ \gamma(0)=\bar{p}=F_{k}(p_{k},t_{k}).\]
That is, if \(F_{k}\) is the parameterisation of the original flow \(\mathcal{M}_{t}^{k}\), we let \(\hat{r}_{k}=\frac{1}{f_{k}(p_{k},t_{k})}\), and we denote the rescaled flow by \(\overline{\mathcal{M}}_{t}^{k}\) and we define its parameterisation by
\[\overline{F}_{k}(p,\tau)=\exp_{F_{k}(p_{k},t_{k})}^{-1}\circ F_{k} (p,\hat{r}_{k}^{2}\tau+t_{k}).\]
In the Riemannian case, when we change the metric after dilation, we do not need to multiply the immersion by the same constant as we would do in the Euclidean space. When we rescale the background space, following the example of the dilation of a sphere, we see that
\[\bar{g}_{ij}=\frac{1}{\hat{r}_{k}^{2}}g_{ij}\ \ \ \text{and}\ \ \overline{K}=\hat{r}_{k}^{2}K,\]
where \(K\) is the sectional curvature of \(\mathcal{N}\). In the same way,
\[|\overline{A}|^{2}=\hat{r}_{k}^{2}|A|^{2}\ \ \ \text{and}\ \ |\overline{H}|^{2}=\hat{r}_{k}^{2}|H|^{2}.\]
Since \(d_{n}\) depends on \(n\) and the sectional curvature \(K\), the new \(\bar{d}_{n}\) depends on n and \(\overline{K}\). Hence,
\[\bar{d}_{n}=\hat{r}_{k}^{2}d_{n}.\]
For \(\hat{r}_{k}\to 0\), the background Riemannian manifold will converge to its tangent plane in a pointed \(C^{d,\gamma}\) Holder topology [24]. Therefore, we can work on the manifold \(\mathcal{N}\) as we would work in a Euclidean space. For simplicity, we choose for every flow a local co-ordinate system centred at \(p_{k}\). In these co-ordinates we can write \(0\) instead of \(p_{k}\). The parabolic neighbourhoods \(\mathcal{P}^{k}(p_{k},t_{k},\hat{r}_{k}L,\hat{r}_{k}^{2}\theta)\) in the original flow becomes \(\overline{\mathcal{P}}^{k}(0,0,L,\theta)\). By construction, each rescaled flow satisfies
\[\overline{F}_{k}(0,0)=0,\ \ \ \overline{f}_{k}(0,0)=1. \tag{5.38}\]
Indeed,
\[\overline{F}_{k}(0,0)=\exp_{F_{k}(0,0)}^{-1}\circ F_{k}(0,\hat{r }_{k}^{2}\cdot 0)=0\]
and
\[\overline{f}_{k}(p,\tau) =-|\overline{A}_{k}(p,\tau)|^{2}+c_{n}|\overline{H}_{k}(p,\tau)|^{2 }-\bar{d}_{n}\] \[=\hat{r}_{k}^{2}\Big{(}-|A_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2}+c _{n}|H_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2}-d_{n}\Big{)}\] \[=\hat{r}_{k}^{2}f_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})\]
and so
\[\overline{f}_{k}(0,0)=\hat{r}_{k}^{2}f_{k}(0,0)=1,\]
since \(\hat{r}_{k}(0,0)=\frac{1}{f_{k}(0,0)}=1\) from the change of coordinates. The gradient estimates give us uniform bounds (depending only on the pinching constant) on \(|A_{k}|\) and its derivatives up to any order on a neighbourhood of the form \(\overline{\mathcal{P}}^{k}(0,0,d,d)\) for a suitable \(d>0\). From Theorem (4.1), we obtain gradient estimates on the second fundamental form in \(C^{\infty}\) on \(\overline{F}_{k}\). Hence we can apply Arzela-Ascoli (via the Langer-Breuning compactness theorem [5] and [14]) and conclude there exists a subsequence converging in \(C^{\infty}\) to some limit flow which we denote by \(\widetilde{\mathcal{M}}_{\tau}^{\infty}\). We analyse the limit flow \(\widetilde{\mathcal{M}}_{\tau}^{\infty}\). Note we have for the Weingarten map
\[[\widetilde{A_{k}}^{-}]_{i}^{j}(p,\tau)=\hat{r}_{k}[A_{k}^{-}]_{i}^{j}(p,\hat {r}_{k}^{2}\tau+t_{k}),\]
so that
\[\frac{|\widetilde{A_{k}}^{-}(p,\tau)|^{2}}{\overline{f}_{k}(p,\tau)}=\frac{|A _{k}^{-}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2}}{f_{k}(p,\hat{r}_{k}^{2}\tau+t_{k} )}.\]
From (5.37) and (5.38), we see
\[\frac{|\widetilde{A}^{-}(0,0)|^{2}}{\widetilde{f}(0,0)}=\varepsilon_{0},\quad \widetilde{f}(0,0)=1.\]
We claim
\[\frac{|\widetilde{A}^{-}(p,\tau)|^{2}}{\widetilde{f}(p,\tau)}=\lim_{k\to \infty}\frac{|\overline{A_{k}}(p,\tau)|^{2}}{\overline{f}_{k}(p,\tau)}\leq \varepsilon\quad\forall\varepsilon>\varepsilon_{0}.\]
Since \(\widetilde{f}(0,0)=1\), it follows that \(|\widetilde{f}|\geq\frac{1}{2}\) in \(\widetilde{\mathcal{P}}^{\infty}(0,0,r,r)\) for some \(r<d^{\#}\). This is true since any point \((p,\tau)\in\widetilde{\mathcal{M}}_{\tau}^{\infty}\) is the limit of points \((p_{j_{k}},t_{j_{k}})\in\overline{\mathcal{M}}^{k}\) and for every \(\varepsilon>\varepsilon_{0}\) if we let \(\eta=\eta(\varepsilon,c_{n})<d^{\#}\) then for large \(k\), \(\mathcal{M}^{k}\) is defined in
\[\mathcal{P}^{k}\left(p_{j_{k}},t_{j_{k}},\frac{1}{f_{k}(p_{j_{k}},t_{j_{k}})} \eta,\left(\frac{1}{f_{k}(p_{j_{k}},t_{j_{k}})}\right)^{2}\eta\right)\]
which implies
\[\frac{|\overline{A_{k}}^{-}(p_{j_{k}},t_{j_{k}})|^{2}}{\overline{f}_{k}(p_{j_{ k}},t_{j_{k}})}\leq\varepsilon\quad\forall\varepsilon>\varepsilon_{0}.\]
Hence the flow \(\widetilde{\mathcal{M}}_{t}^{\infty}\subset\mathbb{R}^{n+m}\) has a space-time maximum \(\varepsilon_{0}\) for \(\frac{|\widetilde{A}^{-}(p,\tau)|^{2}}{\widetilde{f}(p,\tau)}\) at \((0,0)\). The evolution equation for \(\frac{|A^{-}|^{2}}{f}\) is given by
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f} \leq 2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f \Big{\rangle}-\delta\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f +C^{\prime}\frac{|A^{-}|^{2}}{f}+C^{\prime\prime}\frac{|A^{-}|}{\sqrt{f}}\] \[+\frac{1}{f}\Big{(}2\sum_{i,j}|\bar{R}_{ij}(\nu_{1})|^{2}+4\sum_{ i,j,p}\langle\bar{R}_{ij}(\nu_{1}),\dot{h}_{ip}A_{jp}^{-}-\dot{h}_{jp}A_{ip}^{-} \rangle\Big{)}.\]
But in the limit our background space is Euclidean, therefore the background curvature tensor is identically zero. So the evolution equation becomes
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|\widetilde{A}^{-}|^{2}}{ \widetilde{f}}\leq 2\Big{\langle}\nabla\frac{|\widetilde{A}^{-}|^{2}}{ \widetilde{f}},\nabla\log\widetilde{f}\Big{\rangle}-\delta\frac{|\widetilde{ A}^{-}|^{2}}{\widetilde{f}^{2}}\Big{(}\partial_{t}-\Delta\Big{)}\widetilde{f}.\]
Hence, since \(\frac{|\widetilde{A}^{-}|^{2}}{f}\) attains a maximum \(\varepsilon_{0}\) at \((0,0)\) by the strong maximum principle then \(\frac{|\widetilde{A}^{-}|^{2}}{f}\equiv\text{ constant}\). Therefore, there exists this constant \(\mathcal{C}\) depending up to \(d_{n},K_{1},K_{2},L\) and \(n\), such that
\[\mathcal{C}=\frac{|\widetilde{A}^{-}|^{2}}{\widetilde{f}}.\]
Putting this into the evolution equation we have
\[0\leq-\delta\frac{\mathcal{C}}{\widetilde{f}}\Big{(}\partial_{t}-\Delta\Big{)} \widetilde{f}\leq 0,\]
which means we get \(\mathcal{C}=0\) and therefore, \(|\widetilde{A}^{-}|=0\). This implies
\[\frac{|\widetilde{A}^{-}|^{2}}{\widetilde{f}}=0\implies\varepsilon_{0}=0,\]
which is a contradiction. Hence, we obtain
\[\lim_{k\to\infty}\frac{|\widetilde{A}_{k}^{-}(p_{k},t_{k})|}{ \widetilde{f}_{k}(p_{k},t_{k})}=0.\]
## 6 Cylindrical Estimates
In this section, we present estimates that demonstrate an improvement in curvature as we approach a singularity. These estimates play a critical role in the analysis of high curvature regions in geometric flows. In particular, in the high codimension setting, we establish the quadratic pinching ratio \(\frac{|A|^{2}}{|H|^{2}}\) approaches the ratio of the standard cylinder, which is \(\frac{1}{n-1}\).
**Theorem 6.1** ([13]).: _Let \(F:\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\) be a smooth solution to mean curvature flow so that \(F_{0}(p)=F(p,0)\) is compact and quadratically pinched with constant \(c_{n}=\frac{1}{n-2}\). Then \(\forall\varepsilon>0,\exists H_{1}>0\), such that if \(f\geq H_{1}\), then_
\[|A|^{2}-\frac{1}{n-1}|H|^{2}\leq\varepsilon f+C_{\varepsilon}\]
\(\forall t\in[0,T)\) _where \(C_{\varepsilon}=C_{\varepsilon}(n,m)\)._
Proof.: Since \(\mathcal{M}\) is quadratically bounded, there exist constants \(C,D\) such that
\[|A|^{2}-\frac{1}{n-1}|H|^{2}\leq Cf+D.\]
Hence, let \(\varepsilon_{0}\) denote the infimum of such \(\varepsilon\) for which the estimate is true and suppose \(\varepsilon_{0}>0\). We will prove the theorem by contradiction. Hence, let us assume that the conclusions of the theorem are not true that is there exists a family of mean curvature flow \(\mathcal{M}^{k}_{t}\) with points \((p_{k},t_{k})\) such that
\[\lim_{k\to\infty}\frac{\left(|A(p_{k},t_{k})|^{2}-\frac{1}{n-1}|H(p_{k},t_{k}) |^{2}\right)}{f_{k}(p_{k},t_{k})}=\varepsilon_{0} \tag{6.1}\]
with \(\varepsilon_{0}>0\) and \(f_{k}(p_{k},t_{k})\to\infty\).
We perform a parabolic rescaling of \(\mathcal{M}^{k}_{t}\) in such a way that \(f_{k}\) at \((p_{k},t_{k})\) becomes \(1\). If we consider the exponential map \(\exp_{\bar{p}}\colon T_{\bar{p}}\mathcal{N}\cong\mathbb{R}^{n+m}\to\mathcal{N} ^{n+m}\) and \(\gamma\) a geodesic, then for a vector \(v\in T_{\bar{p}}\mathcal{N}\), then
\[\exp_{\bar{p}}(v)=\gamma_{\bar{p},\frac{v}{|v|}}(|v|),\ \ \gamma^{\prime}(0)= \frac{v}{|v|}\ \ \ \text{and}\ \ \gamma(0)=\bar{p}=F_{k}(p_{k},t_{k}).\]
That is, if \(F_{k}\) is the parameterisation of the original flow \(\mathcal{M}^{k}_{t}\), we let \(\hat{r}_{k}=\frac{1}{f_{k}(p_{k},t_{k})}\), and we denote the rescaled flow by \(\overline{\mathcal{M}}^{k}_{t}\) and we define its parameterisation by
\[\overline{F}_{k}(p,\tau)=\exp^{-1}_{F_{k}(p_{k},t_{k})}\circ F_{k}(p,\hat{r}_ {k}^{2}\tau+t_{k}).\]
In the Riemannian case, when we change the metric after dilation, we do not need to multiply the immersion by the same constant as we would do in the Euclidean space. When we rescale the background space, following the example of the dilation of a sphere, we see that
\[\bar{g}_{ij}=\frac{1}{\hat{r}_{k}^{2}}g_{ij}\ \ \ \text{and}\ \ \overline{K}=\hat{r}_{k}^{2}K,\]
where \(K\) is the sectional curvature of \(\mathcal{N}\). In the same way,
\[|\overline{A}|^{2}=\hat{r}_{k}^{2}|A|^{2}\ \ \ \text{and}\ \ |\overline{H}|^{2}=\hat{r}_{k}^{2}|H|^{2}.\]
Since \(d_{n}\) depends on \(n\) and the sectional curvature \(K\), the new \(\bar{d}_{n}\) depends on n and \(\overline{K}\). Hence,
\[\bar{d}_{n}=\hat{r}_{k}^{2}d_{n}.\]
For \(\hat{r}_{k}\to 0\), the background Riemannian manifold will converge to its tangent plane in a pointed \(C^{d,\gamma}\) Holder topology [24]. Therefore, we can work on the manifold \(\mathcal{N}\) as we would work in a Euclidean space. For simplicity, we choose for every flow a local co-ordinate system centred at \(p_{k}\). In these co-ordinates we can write \(0\) instead of \(p_{k}\). The parabolic neighbourhoods \(\mathcal{P}^{k}(p_{k},t_{k},\hat{r}_{k}L,\hat{r}_{k}^{2}\theta)\) in the original flow becomes \(\overline{\mathcal{P}}^{k}(0,0,L,\theta)\). By construction, each rescaled flow satisfies
\[\overline{F}_{k}(0,0)=0,\quad\overline{f}_{k}(0,0)=1. \tag{6.2}\]
Indeed,
\[\overline{F}_{k}(0,0)=\exp^{-1}_{F_{k}(0,0)}\circ F_{k}(0,\hat{r}_{k}^{2}\cdot 0 )=0\]
and
\[\overline{f}_{k}(p,\tau) =-|\overline{A}_{k}(p,\tau)|^{2}+c_{n}|\overline{H}_{k}(p,\tau)|^ {2}-\bar{d}_{n}\] \[=\hat{r}_{k}^{2}\Big{(}-|A_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2}+c _{n}|H_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2}-d_{n}\Big{)}\] \[=\hat{r}_{k}^{2}f_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})\]
and so
\[\overline{f}_{k}(0,0)=\hat{r}_{k}^{2}f_{k}(0,0)=1,\]
since \(\hat{r}_{k}(0,0)=\frac{1}{f_{k}(0,0)}=1\) from the change of coordinates. The gradient estimates give us uniform bounds (depending only on the pinching constant) on \(|A_{k}|\) and its derivatives up to any order on a neighbourhood of the form \(\overline{\mathcal{P}}^{k}(0,0,d,d)\) for a suitable \(d>0\). From Theorem (4.1), we obtain gradient estimates on the second fundamental form in \(C^{\infty}\) on \(\overline{F}_{k}\). Hence we can apply Arzela-Ascoli (via the Langer-Breuning compactness theorem [5] and [14]) and conclude there exists a subsequence converging in \(C^{\infty}\) to some limit flow which we denote by \(\widetilde{\mathcal{M}}_{\tau}^{\infty}\). We analyse the limit flow \(\widetilde{\mathcal{M}}_{\tau}^{\infty}\). Note we have for the Weingarten map
\[[\widehat{A}_{k}^{-}]_{i}^{j}(p,\tau)=\hat{r}_{k}[A_{k}^{-}]_{i}^{j}(p,\hat{r }_{k}^{2}\tau+t_{k}),\]
so that so that
\[\frac{|\overline{A}_{k}(p,\tau)|^{2}-\frac{1}{n-1}|\overline{H}_{k}(p,\tau)|^ {2}}{\overline{f}_{k}(p,\tau)}=\frac{|A_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2} -\frac{1}{n-1}|H_{k}(p,\hat{r}_{k}^{2}\tau+t_{k})|^{2}}{f_{k}(p,\hat{r}_{k}^{2 }\tau+t_{k})}.\]
From (5.37) and (5.38), we see
\[\frac{|\widetilde{A}(0,0)|^{2}-\frac{1}{n-1}|\widetilde{H}(0,0)|^{2}}{ \widetilde{f}(0,0)}=\varepsilon_{0},\quad\widetilde{f}(0,0)=1.\]
We claim
\[\frac{|\widetilde{A}(p,\tau)|^{2}-\frac{1}{n-1}|\widetilde{H}(p,\tau)|^{2}}{ \widetilde{f}(p,\tau)}=\lim_{k\to\infty}\frac{|\overline{A}_{k}(p,\tau)|^{2}- \frac{1}{n-1}|\overline{H}_{k}(p,\tau)|^{2}}{\overline{f}_{k}(p,\tau)}\leq \varepsilon\quad\forall\varepsilon>\varepsilon_{0}.\]
Since \(\widetilde{f}(0,0)=1\), it follows that \(|\widetilde{f}|\geq\frac{1}{2}\) in \(\widetilde{\mathcal{P}}^{\infty}(0,0,r,r)\) for some \(r<d^{\#}\). This is true since any point \((p,\tau)\in\widetilde{\mathcal{M}}_{\tau}^{\infty}\) is the limit of points \((p_{j_{k}},t_{j_{k}})\in\overline{\mathcal{M}}^{k}\) and for every \(\varepsilon>\varepsilon_{0}\) if we let \(\eta=\eta(\varepsilon,c_{n})<d^{\#}\) then for large \(k\), \(\mathcal{M}^{k}\) is defined in
\[\mathcal{P}^{k}\left(p_{j_{k}},t_{j_{k}},\frac{1}{f_{k}(p_{j_{k}},t_{j_{k}})} \eta,\left(\frac{1}{f_{k}(p_{j_{k}},t_{j_{k}})}\right)^{2}\eta\right)\]
which implies
\[\frac{|\overline{A}_{k}(p_{j_{k}},t_{j_{k}})|^{2}-\frac{1}{n-1}| \overline{H}_{k}(p_{j_{k}},t_{j_{k}})|^{2}}{\overline{f}_{k}(p_{j_{k}},t_{j_{ k}})}\leq\varepsilon\quad\forall\varepsilon>\varepsilon_{0}.\]
Hence the flow \(\widetilde{\mathcal{M}}_{t}^{\infty}\subset\mathbb{R}^{n+m}\) has a space-time maximum \(\varepsilon_{0}\) for \(\frac{|\widetilde{A}(p,\tau)|^{2}-\frac{1}{n-1}|\widetilde{H}(p,\tau)|^{2}}{ \widetilde{f}(p,\tau)}\) at \((0,0)\) which implies that the flow \(\overline{\mathbb{M}}_{t}^{\infty}\) has a space-time maximum \(\frac{1}{n-1}+\varepsilon_{0}\) for \(\frac{|\overline{A}(p,\tau)|^{2}}{|\overline{H}(p,\tau)|^{2}}\) at \((0,0)\). Since the evolution equation for \(\frac{|A|^{2}}{|H|^{2}}\) is given by
\[\left(\partial_{t}-\Delta\right)\frac{|A|^{2}}{|H|^{2}} =\frac{2}{|H|^{2}}\left\langle\nabla|H|^{2},\nabla\left(\frac{|A| ^{2}}{|H|^{2}}\right)\right\rangle\] \[-\frac{2}{|H|^{2}}\left(|\nabla A|^{2}-\frac{|A|^{2}}{|H|^{2}}| \nabla H|^{2}\right)\] \[+\frac{2}{|H|^{2}}\left(R_{1}-\frac{|A|^{2}}{|H|^{2}}R_{2}\right)\]
We have
\[|\nabla H|^{2}\leq\frac{3}{n+2}|\nabla A|^{2},\quad\frac{|A|^{2}}{|H|^{2}} \leq c_{n}\]
which gives
\[-\frac{2}{|H|^{2}}\left(|\nabla A|^{2}-\frac{|A|^{2}}{|H|^{2}}| \nabla H|^{2}\right)\leq 0.\]
Furthermore, if \(\frac{|A|^{2}}{|H|^{2}}=c<c_{n}\) then
\[R_{1}-\frac{|A|^{2}}{|H|^{2}}R_{2} =R_{1}-cR_{2}\] \[\leq\frac{2}{n}\frac{1}{c-\nicefrac{{1}}{{n}}}|A_{-}|^{2} \mathcal{Q}+\left(6-\frac{2}{n(c-\nicefrac{{1}}{{n}})}\right)|\mathring{A}_{1 }|^{2}|\mathring{A}_{-}|^{2}+\left(3-\frac{2}{n(c-\nicefrac{{1}}{{n}})} \right)|\mathring{A}_{-}|^{4}\]
\[\leq 0.\]
Hence the strong maximum principle applies to the evolution equation of \(\frac{|A|^{2}}{|H|^{2}}\) and shows \(\frac{|A|^{2}}{|H|^{2}}\) is constant. The evolution equation then shows \(|\nabla A|^{2}=0\), that is the second fundamental form is parallel and that \(|A_{-}|^{2}=|\mathring{\bar{A}}_{-}|^{2}=0\), that is the submanifold is codimension one. Finally this shows locally \(\mathbb{M}=\mathbb{S}^{n-k}\times\mathbb{R}^{k}\), [15]. As \(\frac{|A|^{2}}{|H|^{2}}<c_{n}\leq\frac{1}{n-2}\) we can only have
\[\mathbb{S}^{n},\mathbb{S}^{n-1}\times\mathbb{R}\]
which gives \(\frac{|A|^{2}}{|H|^{2}}=\frac{1}{n},\frac{1}{n-1}\neq\frac{1}{n-1}+\varepsilon _{0},\varepsilon>0\) which gives a contradiction.
## 7 Singularity Models of Pinched Solutions of Mean Curvature Flow in Higher Codimension
In this section, we derive a corollary from Theorem 5.1, which provides information about the blow up models at the first singular time. Specifically, we show that these models can be classified up to homothety.
**Corollary 7.1** ([21, Corollary 1.4] ).: _Let \(n\geq 5\) and \(N>n\). Let \(c_{n}=\frac{1}{n-2}\) if \(n\geq 8\) and \(c_{n}=\frac{3(n+1)}{2n(n+2)}\) if \(n=5,6\), or 7. Consider a closed, n-dimensional solution to the mean curvature flow in \(\mathbb{R}^{N}\) initially satisfying \(|H|>0\) and \(|A|^{2}<c_{n}|H|^{2}\). At the first singular time, the only possible blow-up limits are codimension one shrinking round spheres, shrinking round cylinders and translating bowl solitons._
According to Theorem 5.1 and Theorem 6.1, for \(F:\mathcal{M}^{n}\times[0,T)\to\mathcal{N}^{n+m}\) be a smooth solution to mean curvature flow so that \(F_{0}(p)=F(p,0)\) is compact and quadratically pinched with \(c_{n}=\frac{3(n+1)}{2n(n+2)}\) if \(n=5,6\), or 7, then \(\forall\varepsilon>0,\exists H_{0},H_{1}>0\), such that if \(f\geq\max\{H_{0},H_{1}\}\), then
\[\left|A^{-}\right|^{2}\leq\varepsilon f+C_{\varepsilon}\quad\text{and}\quad|A |^{2}-\frac{1}{n-1}|H|^{2}\leq\varepsilon f+C_{\varepsilon},\]
\(\forall t\in[0,T)\) where \(C_{\varepsilon}=C_{\varepsilon}(n,m)\). At the first singular time, the only possible blow-up limits are codimension one shrinking round spheres, shrinking round cylinders, and translating bowl solitons. Therefore, we can classify these blowup limits as follows:
**Corollary 7.2** ([11, Corollary 4.7]).: _Let \(n\geq 5\). Let \(c_{n}=\frac{1}{n-2}\) if \(n\geq 8\) and \(c_{n}=\frac{3(n+1)}{2n(n+2)}\) if \(n=5,6\), or \(7\). Suppose \(F_{t}\colon\mathcal{M}^{n}\to\mathcal{N}^{n+m},m\geq 2\) is a smooth solution of the mean curvature flow, compact and with positive mean curvature on the maximal time interval \([0,T)\)._
1. _If the singularity for_ \(t\to T\) _is of type I, the only possible limiting flows under the rescaling procedure in_ (4.4) _in_ _[_12_]_ _are the homothetically shrinking solutions associated with_ \(\mathbb{S}^{2},\mathbb{R}\times\mathbb{S}^{1}\) _and_ \(\mathbb{R}\times\Gamma\)_, where_ \(\Gamma\) _is one of the selfsimilar immersed curves introduced by Mullins (see also Abresch-Langer_ _[_1_]__)._
2. _If the singularity is of type II, then from Theorem_ 5.1_, the only possible blow-up limits at the first singular time are codimension one shrinking round spheres, shrinking round cylinders, and translating bowl solitons._
## 8 The case of Constant Curvature
In this chapter, we prove Theorem 5.1 in the case of constant curvature. In the case of constant negative curvature, the proof is more straightforward and more quantitative so we give a direct proof of the statement.
### Evolution equations
We start by stating the evolution equations for the length and the squared length of the second fundamental form and the mean curvature vector in the case of constant curvature. We denote \(\bar{K}\) to be the sectional curvature and with respect to local orthonormal frames \(\{e_{i}\}\) and \(\{\nu_{\alpha}\}\) for the tangent and normal bundles,
\[(\partial_{t}-\Delta)A_{ij} =\sum_{p,q,\beta}A^{\beta}_{ij}A^{\beta}_{pq}A_{pq}+\sum_{p,q, \beta}A^{\beta}_{iq}A^{\beta}_{qp}A_{pj}+\sum_{p,q,\beta}A^{\beta}_{jq}A^{ \beta}_{qp}A_{pi}-2\sum_{p,q,\beta}A^{\beta}_{ip}A^{\beta}_{jq}A_{pq}\] \[+2\bar{K}Hg_{ij}-n\bar{K}A_{ij},\] \[(\partial_{t}-\Delta)H =\sum_{p,q,\beta}H^{\beta}A^{\beta}_{pq}A_{pq}+n\bar{K}H.\]
From these equations we can compute
\[(\partial_{t}-\Delta)|A|^{2} =-2|\nabla A|^{2}+2|\langle A,A\rangle|^{2}+2|R^{\perp}|^{2}+4 \bar{K}|H|^{2}-2n\bar{K}|A|^{2},\] \[(\partial_{t}-\Delta)|H|^{2} =-2|\nabla H|^{2}+2|\langle A,H\rangle|^{2}+2n\bar{K}|H|^{2}.\]
Here we show the preservation of pinching for high codimension submanifolds of hyperbolic space. To prove the codimension estimate we need good estimates for the reaction terms in this equation. These are proven following Andrews-Baker.
We assume throughout \(H\neq 0\). We first observe
\[\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2} =|h|^{4}+2\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}+|A^{-}|^{4}\] \[=|\hat{\bar{h}}|^{4}+\frac{2}{n}|\hat{\bar{h}}|^{2}|H|^{2}+\frac {1}{n^{2}}|H|^{4}+2\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}+\sum_{i,j,p,q}|\langle A^ {-}_{ij},A^{-}_{pq}\rangle|^{2}\]
and recall the identity
\[|R^{\perp}|^{2}=2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi}|^{2}+\sum_{i,j,p}| A^{-}_{ip}\otimes A^{-}_{pj}-A^{-}_{jp}\otimes A^{-}_{pi}|^{2}\]
in order to express
\[R_{1} =|\mathring{h}|^{4}+\frac{2}{n}|\mathring{h}|^{2}|H|^{2}+\frac{1 }{n^{2}}|H|^{4}+2\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}+\sum_{i,j,p,q}|\langle A^{-} _{ij},A^{-}_{pq}\rangle|^{2}\] \[+2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi}|^{2}+\sum_{i,j,p }|A^{-}_{ip}\otimes A^{-}_{pj}-A^{-}_{jp}\otimes A^{-}_{pi}|^{2}.\]
Andrews-Baker establish the estimate
\[\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}+\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi }|^{2}\leq 2|\mathring{h}|^{2}|A^{-}|^{2},\]
and also observe that
\[\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq}\rangle|^{2}+\sum_{i,j,p}|A^{-}_{ ip}\otimes A^{-}_{pj}-A^{-}_{jp}\otimes A^{-}_{pi}|^{2}\leq\frac{3}{2}|A^{-}|^{4}\]
by setting \(B^{\alpha}=A^{-,\alpha}\) for \(\alpha\geq 2\) in the following result [16, Theorem 1]:
**Theorem 8.1**.: _Let \(\{B^{\alpha}\}\) be a finite set of symmetric \((n\times n)\)-matrices. Then we have_
\[\sum_{i,j,\alpha,\beta}(B^{\alpha}_{ij}B^{\beta}_{ij})^{2}+\sum_{i,j,p,\alpha,\beta}|B^{\alpha}_{ip}B^{\beta}_{pj}-B^{\alpha}_{jp}B^{\alpha}_{pi}|^{2}\leq \frac{3}{2}\Big{(}\sum_{\alpha}|B^{\alpha}|^{2}\Big{)}^{2}.\]
Putting these estimates together we obtain the inequality
\[R_{1}\leq|\mathring{\mathring{h}}|^{4}+\frac{2}{n}|\mathring{\mathring{h}}|^ {2}|H|^{2}+\frac{1}{n^{2}}|H|^{4}+4|\mathring{\mathring{h}}|^{2}|A^{-}|^{2}+ \frac{3}{2}|A^{-}|^{4}.\]
We may also expand
\[R_{2}=\sum_{i,j}|\langle A_{ij},H\rangle|^{2}=|h|^{2}|H|^{2}=|\mathring{ \mathring{h}}|^{2}|H|^{2}+\frac{1}{n}|H|^{4},\]
hence
\[2R_{1}-2c_{n}R_{2}\leq 2|\mathring{\mathring{h}}|^{4}+8|\mathring{\mathring{h} }|^{2}|A^{-}|^{2}+3|A^{-}|^{4}-2(c_{n}-2/n)|\mathring{\mathring{h}}|^{2}|H|^{ 2}-\frac{2}{n}(c_{n}-1/n)|H|^{4}.\]
Now we express
\[\mathcal{Q}=|\mathring{\mathring{A}}|^{2}-(c_{n}-1/n)|H|^{2}-d_{n}\bar{K}\]
and rearrange to obtain
\[|H|^{2}=\frac{1}{c_{n}-1/n}(|\mathring{h}|^{2}+|A^{-}|^{2}-\mathcal{Q}-d_{n}\bar{ K}).\]
Substituting this back in gives
\[2R_{1}-2c_{n}R_{2} \leq 2|\mathring{\mathring{h}}|^{4}+8|\mathring{\mathring{h}}|^{2} |A^{-}|^{2}+3|A^{-}|^{4}-2(c_{n}-1/n)|\mathring{h}|^{2}|H|^{2}\] \[-\frac{2}{n}|A^{-}|^{2}|H|^{2}+\frac{2}{n}(\mathcal{Q}+d_{n}\bar{ K})|H|^{2}\] \[=6|\mathring{\mathring{h}}|^{2}|A^{-}|^{2}+3|A^{-}|^{4}-\frac{2}{ n}|A^{-}|^{2}|H|^{2}\] \[+\frac{2}{n}(\mathcal{Q}+d_{n}\bar{K})|H|^{2}+2(\mathcal{Q}+d_{n} \bar{K})|\mathring{h}|^{2}\] \[=\bigg{(}6-\frac{2/n}{c_{n}-1/n}\bigg{)}|\mathring{h}|^{2}|A^{-}| ^{2}+\bigg{(}3-\frac{2/n}{c_{n}-1/n}\bigg{)}|A^{-}|^{4}\] \[+\frac{2}{n}(\mathcal{Q}+d_{n}\bar{K})|H|^{2}+2(\mathcal{Q}+d_{n} \bar{K})|\mathring{h}|^{2}+\frac{2/n}{c_{n}-1/n}(\mathcal{Q}+d_{n}\bar{K})|A^ {-}|^{2}.\]
The terms on the last line can be written as
\[\frac{2}{n}(\mathcal{Q}+d_{n}\bar{K})|H|^{2} =\frac{2/n}{c_{n}-1/n}\mathcal{Q}(|\mathring{h}|^{2}+|A^{-}|^{2}- \mathcal{Q})-\frac{4/n}{c_{n}-1/n}d_{n}\bar{K}\mathcal{Q}\] \[+\frac{2/n}{c_{n}-1/n}d_{n}\bar{K}(|\mathring{h}|^{2}+|A^{-}|^{2} )-\frac{2/n}{c_{n}-1/n}d_{n}^{2}\bar{K}^{2},\]
hence
\[\frac{2}{n}(\mathcal{Q}+d_{n}\bar{K})|H|^{2}+2(\mathcal{Q}+d_{n} \bar{K})|\mathring{h}|^{2}+\frac{2/n}{c_{n}-1/n}(\mathcal{Q}+d_{n}\bar{K})|A^ {-}|^{2}\] \[=\frac{2/n}{c_{n}-1/n}\mathcal{Q}(2|A^{-}|^{2}-\mathcal{Q})- \frac{4/n}{c_{n}-1/n}d_{n}\bar{K}\mathcal{Q}+\bigg{(}\frac{2/n}{c_{n}-1/n}+2 \bigg{)}|\mathring{h}|^{2}\mathcal{Q}\] \[+\bigg{(}\frac{2/n}{c_{n}-1/n}+2\bigg{)}d_{n}\bar{K}|\mathring{h }|^{2}+\frac{4/n}{c_{n}-1/n}d_{n}\bar{K}|A^{-}|^{2}-\frac{2/n}{c_{n}-1/n}d_{n} ^{2}\bar{K}^{2}\]
and we have
\[2R_{1}-2c_{n}R_{2} \leq\bigg{(}6-\frac{2/n}{c_{n}-1/n}\bigg{)}|\mathring{h}|^{2}|A^ {-}|^{2}+\bigg{(}3-\frac{2/n}{c_{n}-1/n}\bigg{)}|A^{-}|^{4}\] \[+\bigg{(}\frac{2/n}{c_{n}-1/n}+2\bigg{)}d_{n}\bar{K}|\mathring{h}| ^{2}+\frac{4/n}{c_{n}-1/n}d_{n}\bar{K}|A^{-}|^{2}-\frac{2/n}{c_{n}-1/n}d_{n}^ {2}\bar{K}^{2}\] \[+\frac{2/n}{c_{n}-1/n}\mathcal{Q}(2|A^{-}|^{2}-\mathcal{Q})- \frac{4/n}{c_{n}-1/n}d_{n}\bar{K}\mathcal{Q}+\bigg{(}\frac{2/n}{c_{n}-1/n}+2 \bigg{)}|\mathring{h}|^{2}\mathcal{Q}.\]
Next we compute
\[-2n\bar{K}|\mathring{\mathring{A}}|^{2}-2n\bar{K}(c_{n}-1/n)|H|^{2}=-4n\bar{K} |\mathring{h}|^{2}-4n\bar{K}|A^{-}|^{2}+2nd_{n}\bar{K}^{2}+2n\bar{K}\mathcal{Q},\]
and so obtain the following estimate for the zeroth-order terms in the evolution of \(\mathcal{Q}\):
\[2R_{1}-2c_{n}R_{2}-2n\bar{K}|\hat{A}|^{2}-2n\bar{K}(c_{n}-1/n)|H|^ {2}\] \[\qquad\leq\bigg{(}6-\frac{2/n}{c_{n}-1/n}\bigg{)}|\mathring{h}|^{2 }|A^{-}|^{2}+\bigg{(}3-\frac{2/n}{c_{n}-1/n}\bigg{)}|A^{-}|^{4}\] \[\qquad+2\bigg{(}\frac{d_{n}/n}{c_{n}-1/n}+d_{n}-2n\bigg{)}\bar{K}| \mathring{h}|^{2}+4\bigg{(}\frac{d_{n}/n}{c_{n}-1/n}-n\bigg{)}\bar{K}|A^{-}|^{2}\] \[\qquad+2\bigg{(}n-\frac{d_{n}/n}{c_{n}-1/n}\bigg{)}d_{n}\bar{K}^{ 2}+2\bigg{(}1+\frac{1/n}{c_{n}-1/n}\bigg{)}|\mathring{h}|^{2}Q\] \[\qquad+\frac{2/n}{c_{n}-1/n}\mathcal{Q}(2|A^{-}|^{2}-\mathcal{Q} )+2\bigg{(}n-\frac{2d_{n}/n}{c_{n}-1/n}\bigg{)}\bar{K}\mathcal{Q}. \tag{8.1}\]
Suppose \(\bar{K}<0\). In this case, if \(d_{n}>0\) then the condition \(\mathcal{Q}\leq 0\) implies \(|H|^{2}>0\). As above, for \(c_{n}\leq\frac{4}{3n}\) we have
\[2R_{1}-2c_{n}R_{2}-2n\bar{K}|\hat{A}|^{2}-2n\bar{K}(c_{n}-1/n)|H|^ {2}\] \[\qquad\leq 2\bigg{(}\frac{d_{n}/n}{c_{n}-1/n}+d_{n}-2n\bigg{)} \bar{K}|\mathring{h}|^{2}+4\bigg{(}\frac{d_{n}/n}{c_{n}-1/n}-n\bigg{)}\bar{K}| A^{-}|^{2}\] \[\qquad+2\bigg{(}n-\frac{d_{n}/n}{c_{n}-1/n}\bigg{)}d_{n}\bar{K}^{ 2}+2\bigg{(}1+\frac{1/n}{c_{n}-1/n}\bigg{)}|\mathring{h}|^{2}Q\] \[\qquad+\frac{2/n}{c_{n}-1/n}\mathcal{Q}(2|A^{-}|^{2}-\mathcal{Q} )+2\bigg{(}n-\frac{2d_{n}/n}{c_{n}-1/n}\bigg{)}\bar{K}\mathcal{Q}.\]
The first term on the left is nonpositive for \(d_{n}\geq 2n-2/c_{n}\), and this is also sufficient to ensure
\[\bigg{(}\frac{d_{n}/n}{c_{n}-1/n}-n\bigg{)}\geq 2/c_{n}-n\geq n,\]
so we have
\[2R_{1}-2c_{n}R_{2}-2n\bar{K}|\hat{A}|^{2}-2n\bar{K}(c_{n}-1/n)|H|^ {2}\] \[\qquad\leq 4(2/c_{n}-n)\bar{K}|A^{-}|^{2}+2(n-2/c_{n})d_{n}\bar{K}^ {2}+2\bigg{(}1+\frac{1/n}{c_{n}-1/n}\bigg{)}|\mathring{h}|^{2}Q\] \[\qquad+\frac{2/n}{c_{n}-1/n}\mathcal{Q}(2|A^{-}|^{2}-\mathcal{Q} )+2\bigg{(}n-\frac{2d_{n}/n}{c_{n}-1/n}\bigg{)}\bar{K}\mathcal{Q}.\]
All of the terms on the right are either nonpositive or carry a factor \(\mathcal{Q}\), so we see that \(\mathcal{Q}\leq 0\) is preserved for
\[c_{n}\leq\min\left\{\frac{4}{3n},\frac{3}{n+2}\right\},\qquad d_{n}\geq 2n-2 /c_{n}.\]
Observe that for our allowed range of constants \(c_{n}\) and \(d_{n}\),
\[2\bigg{(}n-\frac{2d_{n}/n}{c_{n}-1/n}\bigg{)}\bar{K}\geq 0,\]
so when \(\mathcal{Q}\leq 0\) we can further estimate
\[2R_{1}-2c_{n}R_{2}-2n\bar{K}|\hat{\bar{A}}|^{2}-2n\bar{K}(c_{n}-1/n )|H|^{2}\] \[\qquad\leq-\frac{2/n}{c_{n}-1/n}\mathcal{Q}^{2}.\]
Hence
\[(\partial_{t}-\Delta)\mathcal{Q}\leq-\frac{2/n}{c_{n}-1/n}\mathcal{Q}^{2},\]
which forces \(\mathcal{Q}\) to blow up in finite time.
### The evolution of \(h\)
From the equations for \(A\) and \(H\), we have that the projection \(\langle A,H\rangle\) satisfies
\[(\partial_{t}-\Delta)A^{\alpha}_{ij}H^{\alpha} =-2\sum_{p,\alpha}\nabla_{p}A^{\alpha}_{ij}\nabla_{p}H^{\alpha}+ 2\sum_{p,q,\alpha,\beta}H^{\alpha}A^{\beta}_{ij}A^{\beta}_{pq}A^{\alpha}_{pq}\] \[+\sum_{p,q,\alpha,\beta}H^{\alpha}(A^{\beta}_{iq}A^{\beta}_{qp}A^ {\alpha}_{pj}+A^{\beta}_{jq}A^{\beta}_{qp}A^{\alpha}_{pi}-2A^{\beta}_{ip}A^{ \beta}_{jq}A^{\alpha}_{pq})\] \[+2\bar{K}|H|^{2}g_{ij}.\]
The first of the reaction terms can be split into a hypersurface and a codimension component, as follows:
\[2\sum_{p,q,\alpha,\beta}H^{\alpha}A^{\beta}_{ij}A^{\beta}_{pq}A^{\alpha}_{pq} =2|H||h|^{2}h_{ij}+2\sum_{p,q,\alpha\geq 2}|H|A^{\alpha}_{ij}A^{\alpha}_{pq}h _{pq}.\]
Similarly, the remaining reaction terms can be written as
\[\sum_{p,q,\alpha,\beta}H^{\alpha}(A^{\beta}_{iq}A^{\beta}_{qp}A^ {\alpha}_{pj}+A^{\beta}_{jq}A^{\beta}_{qp}A^{\alpha}_{pi}-2A^{\beta}_{ip}A^{ \beta}_{jq}A^{\alpha}_{pq})=\sum_{p,q,\alpha\geq 2}|H|A^{\alpha}_{iq}A^{\alpha}_{ qp}h_{pj}\] \[+\sum_{p,q,\alpha\geq 2}|H|A^{\alpha}_{jq}A^{\alpha}_{qp}h_{pi}-2 \sum_{p,q,\alpha\geq 2}|H|A^{\alpha}_{ip}A^{\alpha}_{jq}h_{pq}.\]
Therefore,
\[(\partial_{t}-\Delta)A^{\alpha}_{ij}H^{\alpha} =-2\sum_{p,\alpha}\nabla_{p}A^{\alpha}_{ij}\nabla_{p}H^{\alpha}+ 2|H||h|^{2}h_{ij}+2\sum_{p,q,\alpha\geq 2}|H|h_{pq}(A^{\alpha}_{ij}A^{\alpha}_{ pq}-A^{\alpha}_{ip}A^{\alpha}_{jq})\] \[+\sum_{p,q,\alpha\geq 2}|H|A^{\alpha}_{iq}A^{\alpha}_{qp}h_{pj}+ \sum_{p,q,\alpha\geq 2}|H|A^{\alpha}_{jq}A^{\alpha}_{qp}h_{pi}+2\bar{K}|H|^{2}g_{ ij}.\]
For a positive function \(f\), we have
\[(\partial_{t}-\Delta)\sqrt{f}=\frac{1}{4f^{3/2}}|\nabla f|^{2}+\frac{1}{2\sqrt {f}}(\partial_{t}-\Delta)f,\]
hence the quantity \(\sqrt{f}=|H|\) satisfies
\[(\partial_{t}-\Delta)|H| =\frac{1}{4|H|^{3}}|\nabla|H|^{2}|^{2}+\frac{1}{2|H|}\big{(}-2| \nabla H|^{2}+2|\langle A,H\rangle|^{2}+2n\bar{K}|H|^{2}\big{)}\] \[=\frac{1}{|H|^{3}}\langle H,\nabla_{i}H\rangle\langle H,\nabla_{ i}H\rangle-\frac{|\nabla H|^{2}}{|H|}+\frac{|\langle A,H\rangle|^{2}}{|H|}+n\bar{K}|H|.\]
Inserting the identities
\[\frac{|\langle A,H\rangle|^{2}}{|H|}=|h|^{2}|H|\]
and
\[-\frac{|\nabla H|^{2}}{|H|}+\frac{1}{|H|^{3}}\langle H,\nabla_{i} H\rangle\langle H,\nabla_{i}H\rangle =-\frac{|\nabla|H||^{2}}{|H|}-|H||\nabla\nu_{1}|^{2}+\frac{1}{|H| }\langle\nu_{1},\nabla_{i}H\rangle\langle\nu_{1},\nabla_{i}H\rangle\] \[=-|H||\nabla\nu_{1}|^{2},\]
we obtain
\[(\partial_{t}-\Delta)|H|=|h|^{2}|H|+n\bar{K}|H|-|H||\nabla\nu_{1}|^{2}. \tag{8.2}\]
For a tensor \(B_{ij}\) divided by a positive scalar function \(f\), there holds
\[(\partial_{t}-\Delta)\frac{B_{ij}}{f}=\frac{1}{f}(\nabla_{t}- \Delta)B_{ij}-\frac{B_{ij}}{f^{2}}(\partial_{t}-\Delta)f+\frac{2}{f}\bigg{\langle} \nabla\frac{B_{ij}}{f},\nabla f\bigg{\rangle}.\]
Therefore, dividing \(\langle A_{ij},H\rangle\) by \(|H|\), we obtain
\[(\partial_{t}-\Delta)h_{ij} =|h|^{2}h_{ij}+2\sum_{p,q,\alpha\geq 2}h_{pq}(A^{\alpha}_{ij}A^{ \alpha}_{pq}-A^{\alpha}_{ip}A^{\alpha}_{jq})+\sum_{p,q,\alpha\geq 2}A^{\alpha}_{iq }A^{\alpha}_{qp}h_{pj}\] \[+\sum_{p,q,\alpha\geq 2}A^{\alpha}_{jq}A^{\alpha}_{qp}h_{pi}+2 \bar{K}|H|g_{ij}-n\bar{K}h_{ij}-2|H|^{-1}\langle\nabla A_{ij},\nabla H\rangle +h_{ij}|\nabla\nu_{1}|^{2}\] \[+2|H|^{-1}\langle\nabla h_{ij},\nabla|H|\rangle.\]
We simplify the gradient terms by decomposing
\[-2\langle\nabla A_{ij},\nabla H\rangle =-2\langle\nabla h_{ij}\nu_{1}+h_{ij}\nabla\nu_{1}+\nabla A^{-}_ {ij},\nabla|H|\nu_{1}+2|H|\nabla\nu_{1}\rangle\] \[=-2\langle\nabla h_{ij},\nabla|H|\rangle-2|H|h_{ij}|\nabla\nu_{1 }|^{2}-2\langle\nabla A^{-}_{ij},\nabla|H|\nu_{1}\rangle\] \[-2|H|\langle\nabla A^{-}_{ij},\nabla\nu_{1}\rangle,\]
and so obtain
\[(\partial_{t}-\Delta)h_{ij} =|h|^{2}h_{ij}+2\sum_{p,q,\alpha\geq 2}h_{pq}(A^{\alpha}_{ij}A^{ \alpha}_{pq}-A^{\alpha}_{ip}A^{\alpha}_{jq})+\sum_{p,q,\alpha\geq 2}A^{\alpha}_{iq }A^{\alpha}_{qp}h_{pj}\] \[+\sum_{p,q,\alpha\geq 2}A^{\alpha}_{jq}A^{\alpha}_{qp}h_{pi}+2 \bar{K}|H|g_{ij}-n\bar{K}h_{ij}-h_{ij}|\nabla\nu_{1}|^{2}-2|H|^{-1}\langle \nabla A^{-}_{ij},\nabla|H|\nu_{1}\rangle\]
\[-2\langle\nabla A^{-}_{ij},\nabla\nu_{1}\rangle.\]
Next, we compute
\[(\partial_{t}-\Delta)|h|^{2} =2\sum_{i,j}h_{ij}(\nabla_{t}-\Delta)h_{ij}-2|\nabla h|^{2}\] \[=2|h|^{4}+4\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}-4\sum_{i,j,p,q,\alpha \geq 2}h_{ij}h_{pq}A^{\alpha}_{ip}A^{\alpha}_{jq}+4\sum_{i,j,p,q,\alpha \geq 2}h_{ij}h_{pj}A^{\alpha}_{iq}A^{\alpha}_{qp}\] \[+4\bar{K}|H|^{2}-2n\bar{K}|h|^{2}-2|\nabla h|^{2}-2|h|^{2}|\nabla \nu_{1}|^{2}-4\sum_{i,j}|H|^{-1}h_{ij}\langle\nabla A^{-}_{ij},\nabla|H|\nu_{1}\rangle\] \[-4\sum_{i,j}h_{ij}\langle\nabla A^{-}_{ij},\nabla\nu_{1}\rangle,\]
and, following Naff, rewrite
\[4\sum_{i,j,p,q,\alpha\geq 2}h_{ij}h_{pj}A^{\alpha}_{iq}A^{ \alpha}_{qp}-4\sum_{i,j,p,q,\alpha\geq 2}h_{ij}h_{pq}A^{\alpha}_{ip}A^{ \alpha}_{jq} =2\sum_{i,j,p,q}\langle h_{ij}A^{-}_{iq}-h_{iq}A^{-}_{ij},h_{pj}A ^{-}_{pq}-h_{pq}A^{-}_{pj}\rangle\] \[=2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi}|^{2}.\]
Hence,
\[(\partial_{t}-\Delta)|h|^{2} =2|h|^{4}+4\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}+2\sum_{i,j,p}|h_{ip}A^ {-}_{pj}-h_{jp}A^{-}_{pi}|^{2}+4\bar{K}|H|^{2}-2n\bar{K}|h|^{2}\] \[-2|\nabla h|^{2}-2|h|^{2}|\nabla\nu_{1}|^{2}-4\sum_{i,j}|H|^{-1}h _{ij}\langle\nabla A^{-}_{ij},\nabla|H|\nu_{1}\rangle-4\sum_{i,j}h_{ij} \langle\nabla A^{-}_{ij},\nabla\nu_{1}\rangle,\]
and since \(|A^{-}|^{2}=|A|^{2}-|h|^{2}\),
\[(\partial_{t}-\Delta)|A^{-}|^{2} =2|\langle A,A\rangle|^{2}-2|h|^{4}-4\sum_{i,j}|h_{ij}A^{-}_{ij}|^ {2}+2|R^{\perp}|^{2}-2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi}|^{2}\] \[-2n\bar{K}|A^{-}|^{2}-2|\nabla A|^{2}+2|\nabla h|^{2}+2|h|^{2}| \nabla\nu_{1}|^{2}\] \[+4\sum_{i,j}|H|^{-1}h_{ij}\langle\nabla A^{-}_{ij},\nabla|H|\nu_ {1}\rangle+4\sum_{i,j}h_{ij}\langle\nabla A^{-}_{ij},\nabla\nu_{1}\rangle.\]
The reaction terms can be simplified by observing
\[2|\langle A,A\rangle|^{2}-2|h|^{4}-4\sum_{i,j}|h_{ij}A^{-}_{ij}|^{2}=2|\langle A ^{-},A^{-}\rangle|^{2},\]
and (recalling the decomposition of \(R^{\perp}\) carried out above)
\[2|R^{\perp}|^{2}-2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi}|^{2}=2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{pi}|^{2}+2\sum_{i,j,p}|A^{-}_{ip}\otimes A^ {-}_{pj}-A^{-}_{jp}\otimes A^{-}_{pi}|^{2},\]
hence
\[(\partial_{t}-\Delta)|A^{-}|^{2} =2|\langle A^{-},A^{-}\rangle|^{2}+2\sum_{i,j,p}|h_{ip}A_{pj}^{-}-h_ {jp}A_{pi}^{-}|^{2}+2\sum_{i,j,p}|A_{ip}^{-}\otimes A_{pj}^{-}-A_{jp}^{-}\otimes A _{pi}^{-}|^{2}\] \[-2n\bar{K}|A^{-}|^{2}-2|\nabla A|^{2}+2|\nabla h|^{2}+2|h|^{2}| \nabla\nu_{1}|^{2}\] \[+4\sum_{i,j}|H|^{-1}h_{ij}\langle\nabla A_{ij}^{-},\nabla|H|\nu_ {1}\rangle+4\sum_{i,j}h_{ij}\langle\nabla A_{ij}^{-},\nabla\nu_{1}\rangle.\]
Since \(\nabla A=\nabla h\nu_{1}+h\nabla\nu_{1}+\nabla A^{-}\), we compute
\[2|\nabla A|^{2}=2|\nabla h|^{2}+2|h|^{2}|\nabla\nu_{1}|^{2}+2|\nabla A^{-}|^{ 2}+4\sum_{i,j}h_{ij}\langle\nabla A_{ij}^{-},\nabla\nu_{1}\rangle+4\sum_{i,j} \langle\nabla A_{ij}^{-},\nabla h_{ij}\nu_{1}\rangle,\]
and so obtain
\[(\partial_{t}-\Delta)|A^{-}|^{2} =2|\langle A^{-},A^{-}\rangle|^{2}+2\sum_{i,j,p}|h_{ip}A_{pj}^{-}- h_{jp}A_{pi}^{-}|^{2}+2\sum_{i,j,p}|A_{ip}^{-}\otimes A_{pj}^{-}-A_{jp}^{-} \otimes A_{pi}^{-}|^{2}\] \[-2n\bar{K}|A^{-}|^{2}-2|\nabla A^{-}|^{2}-4\sum_{i,j}\langle \nabla A_{ij}^{-},\nabla h_{ij}\nu_{1}\rangle+4\sum_{i,j}|H|^{-1}h_{ij} \langle\nabla A_{ij}^{-},\nabla|H|\nu_{1}\rangle.\]
Differentiating \(\langle A_{ij}^{-},\nu_{1}\rangle=0\), we see the last two gradient terms may be expressed as
\[-4\sum_{i,j}\langle\nabla A_{ij}^{-},\nabla h_{ij}\nu_{1}\rangle +4\sum_{i,j}|H|^{-1}h_{ij}\langle\nabla A_{ij}^{-},\nabla|H|\nu_{1}\rangle\] \[=-\sum_{i,j,k}(4\nabla_{k}h_{ij}-4|H|^{-1}h_{ij}\nabla_{k}|H|) \langle\nabla_{k}A_{ij}^{-},\nu_{1}\rangle\] \[=\sum_{i,j,k}(4\nabla_{k}h_{ij}-4|H|^{-1}h_{ij}\nabla_{k}|H|) \langle A_{ij}^{-},\nabla_{k}\nu_{1}\rangle,\]
and consequently,
\[(\partial_{t}-\Delta)|A^{-}|^{2} =2\sum_{i,j,p,q}|\langle A_{ij}^{-},A_{pq}^{-}\rangle|^{2}+2\sum _{i,j,p}|h_{ip}A_{pj}^{-}-h_{jp}A_{pi}^{-}|^{2}+2\sum_{i,j,p}|A_{ip}^{-}\otimes A _{pj}^{-}-A_{jp}^{-}\otimes A_{pi}^{-}|^{2}\] \[-2n\bar{K}|A^{-}|^{2}-2|\nabla A^{-}|^{2}+\sum_{i,j,k}(4\nabla_{k }h_{ij}-4|H|^{-1}h_{ij}\nabla_{k}|H|)\langle A_{ij}^{-},\nabla_{k}\nu_{1}\rangle.\]
Since \(f=c_{n}|H|^{2}-|A|^{2}-d_{n}\) and
\[\Big{(}\partial_{t}-\Delta\Big{)}f=2(|\nabla^{\perp}A|^{2}-c_{n}|\nabla^{\perp }H|^{2})+2\Big{(}c_{n}\sum_{i,j}|\langle A_{ij},H\rangle|^{2}-\sum_{i,j,p,q}| \langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j}|R_{ij}^{\perp}|^{2}\Big{)},\]
we have
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}=\frac{1}{f}\Big{(} \partial_{t}-\Delta\Big{)}|A^{-}|^{2}-|A^{-}|^{2}\frac{1}{f^{2}}\Big{(} \partial_{t}-\Delta\Big{)}f+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla \log f\Big{\rangle}\]
\[=\frac{1}{f}\Big{(}2\sum_{i,j,p,q}|\langle A^{-}_{ij},A^{-}_{pq} \rangle|^{2}+2\sum_{i,j,p}|h_{ip}A^{-}_{pj}-h_{jp}A^{-}_{ip}|^{2}+2\sum_{i,j,p}|A ^{-}_{ip}\otimes A^{-}_{jp}-A^{-}_{jp}\otimes A^{-}_{ip}|^{2}-2n\bar{K}|A^{-}|^{ 2}\Big{)}\] \[+\frac{1}{f}\Big{(}-2|\nabla^{\perp}A^{-}|^{2}+4\sum_{i,j,k}( \nabla_{k}h_{ij}-|H|^{-1}h_{ij}\nabla_{k}|H|)\langle A^{-}_{ij},\nabla^{\perp} _{k}\nu_{1}\rangle\Big{)}\] \[-|A^{-}|^{2}\frac{1}{f^{2}}\Big{(}2(|\nabla^{\perp}A|^{2}-c_{n}| \nabla^{\perp}H|^{2})\Big{)}\] \[-|A^{-}|^{2}\frac{1}{f^{2}}\Big{(}2\Big{(}c_{n}\sum_{i,j}|\langle A _{ij},H\rangle|^{2}-\sum_{i,j,p,q}|\langle A_{ij},A_{pq}\rangle|^{2}-\sum_{i,j }|R^{\perp}_{ij}|^{2}\Big{)}\Big{)}\] \[+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}.\]
According to Section 5, we get
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}\leq 2 \Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}-2n\bar{K} \frac{|A^{-}|^{2}}{f}-\delta\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}- \Delta\Big{)}f.\]
Note this shows
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}\leq 2 \Big{\langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}-2n\bar{K} \frac{|A^{-}|^{2}}{f}.\]
If we have an equation of the form
\[\partial_{t}u=\Delta u+\langle f,\nabla u\rangle+Cu\]
by considering \(U=e^{-Ct}u\), we get
\[\partial_{t}U =e^{-Ct}\partial_{t}u-Ce^{-Ct}u\] \[=e^{-Ct}(\Delta u+\langle f,\nabla u\rangle+Cu)-Ce^{-Ct}u.\]
Hence, we get
\[\partial_{t}U=\Delta U+\langle f,\nabla U\rangle.\]
By the maximum principle we find \(e^{-Ct}\max_{x\in\mathcal{M}}U(x,t)\leq\max_{x\in\mathcal{M}}U(x,0)\). Applying this to the above we get
\[\frac{|A^{-}|^{2}}{f}\leq e^{Ct}C(\mathcal{M}_{0}).\]
If we assume \(\bar{K}\leq 0\), then
\[\Big{(}\partial_{t}-\Delta\Big{)}|H|^{2}=-2|\nabla H|^{2}+2|\langle A,H \rangle|^{2}+2n\bar{K}|H|^{2}\geq 2|\langle A,H\rangle|^{2}\geq\frac{2}{n}|H|^{4}.\]
The maximum principle shows
\[|H(x,t)|^{2}\geq\frac{1}{\max_{x\in\mathcal{M}_{0}}|H(x,0)|^{2}}- \frac{2}{n}t.\]
Hence, \(T_{\max}\leq\frac{n}{2\max_{x\in\mathcal{M}_{0}}|H(x,0)|^{2}}\) and thus, we can take
\[\frac{|A^{-}|^{2}}{f}\leq C(\mathcal{M}_{0}).\]
Recall \(\Big{(}\partial_{t}-\Delta\Big{)}f\) is non negative at each point in space and time. Let \(\sigma=\delta\). We compute
\[\Big{(}\partial_{t}-\Delta\Big{)}f^{1-\sigma} =(1-\sigma)f^{-\sigma}\Big{(}\partial_{t}-\Delta\Big{)}f+\sigma( 1-\sigma)f^{-1-\sigma}|\nabla f|^{2}\] \[\geq(1-\sigma)f^{-\sigma}\Big{(}\partial_{t}-\Delta\Big{)}f.\]
Then,
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f^{1-\sigma}} =\frac{1}{f^{1-\sigma}}\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{ 2}-|A^{-}|^{2}\frac{1}{f^{2-2\sigma}}\Big{(}\partial_{t}-\Delta\Big{)}f^{1- \sigma}+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f^{1-\sigma}},\nabla\log f^{1 -\sigma}\Big{\rangle}\] \[\leq\frac{1}{f^{1-\sigma}}\Big{(}\partial_{t}-\Delta\Big{)}|A^{- }|^{2}-|A^{-}|^{2}\frac{1}{f^{2-2\sigma}}(1-\sigma)f^{-\sigma}\Big{(}\partial_ {t}-\Delta\Big{)}f\] \[+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f^{1-\sigma}},\nabla\log f ^{1-\sigma}\Big{\rangle}\] \[=f^{\sigma}\Big{(}\frac{1}{f}\Big{(}\partial_{t}-\Delta\Big{)}|A ^{-}|^{2}-\frac{|A^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f\Big{)}+ \sigma\frac{|A^{-}|^{2}}{f^{2}}f^{\sigma}\Big{(}\partial_{t}-\Delta\Big{)}f\] \[+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f^{1-\sigma}},\nabla\log f ^{1-\sigma}\Big{\rangle}.\]
Now,
\[\frac{1}{f}\Big{(}\partial_{t}-\Delta\Big{)}|A^{-}|^{2}-\frac{|A ^{-}|^{2}}{f^{2}}\Big{(}\partial_{t}-\Delta\Big{)}f =\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f}-2\Big{ \langle}\nabla\frac{|A^{-}|^{2}}{f},\nabla\log f\Big{\rangle}\] \[\leq-2n\bar{K}\frac{|A^{-}|^{2}}{f}-\delta\frac{|A^{-}|^{2}}{f^{2 }}\Big{(}\partial_{t}-\Delta\Big{)}f.\]
Therefore,
\[\Big{(}\partial_{t}-\Delta\Big{)}\frac{|A^{-}|^{2}}{f^{1-\sigma}} \leq-2n\bar{K}\frac{|A^{-}|^{2}}{f^{1-\sigma}}-\delta\frac{|A^{-}| ^{2}}{f^{2}}f^{\sigma}\Big{(}\partial_{t}-\Delta\Big{)}f+\sigma\frac{|A^{-}|^ {2}}{f^{2}}f^{\sigma}\Big{(}\partial_{t}-\Delta\Big{)}f\] \[+2\Big{\langle}\nabla\frac{|A^{-}|^{2}}{f^{1-\sigma}},\nabla\log f ^{1-\sigma}\Big{\rangle}\] \[=-2n\bar{K}\frac{|A^{-}|^{2}}{f^{1-\sigma}}+2\Big{\langle}\nabla \frac{|A^{-}|^{2}}{f^{1-\sigma}},\nabla\log f^{1-\sigma}\Big{\rangle}.\]
As before, considering \(U=e^{-Ct}u\), we get
\[\partial_{t}U =e^{-Ct}\partial_{t}u-Ce^{-Ct}u\] \[=e^{-Ct}(\Delta u+\langle f,\nabla u\rangle+Cu)-Ce^{-Ct}u.\]
Hence, we get
\[\partial_{t}U=\Delta U+\langle f,\nabla U\rangle.\]
By the maximum principle we find \(e^{-Ct}\max_{x\in\mathcal{M}}U(x,t)\leq\max_{x\in\mathcal{M}}U(x,0)\). Applying this to the above we get
\[\frac{|A^{-}|^{2}}{f^{1-\sigma}}\leq e^{Ct}C(\mathcal{M}_{0}).\]
If we assume \(\bar{K}\leq 0\), then
\[(\partial_{t}-\Delta)|H|^{2}=-2|\nabla H|^{2}+2|\langle A,H\rangle|^{2}+2n \bar{K}|H|^{2}\geq 2|\langle A,H\rangle|^{2}\geq\frac{2}{n}|H|^{4}.\]
The maximum principle shows
\[|H(x,t)|^{2}\geq\frac{1}{\frac{1}{\max_{x\in\mathcal{M}_{0}}|H(x,0)|^{2}}- \frac{2}{n}t}.\]
Hence, \(T_{\max}\leq\frac{n}{2\max_{x\in\mathcal{M}_{0}}|H(x,0)|^{2}}\) and we can take
\[\frac{|A^{-}|^{2}}{f^{1-\sigma}}\leq C(\mathcal{M}_{0}),\]
which means
\[|A^{-}|^{2}\leq Cf^{1-\sigma},\]
\(\forall t\in[0,T)\), Since, \(f=c_{n}|H|^{2}-|A|^{2}-d_{n}\leq c_{n}|H|^{2}-d_{n}<c_{n}|H|^{2}\), for \(d_{n}>d>0\), this implies
\[|A^{-}|^{2}<C|H|^{2-2\sigma},\]
which completes the proof.
|
2310.03025 | Retrieval meets Long Context Large Language Models | Extending the context window of large language models (LLMs) is getting
popular recently, while the solution of augmenting LLMs with retrieval has
existed for years. The natural questions are: i) Retrieval-augmentation versus
long context window, which one is better for downstream tasks? ii) Can both
methods be combined to get the best of both worlds? In this work, we answer
these questions by studying both solutions using two state-of-the-art
pretrained LLMs, i.e., a proprietary 43B GPT and Llama2-70B. Perhaps
surprisingly, we find that LLM with 4K context window using simple
retrieval-augmentation at generation can achieve comparable performance to
finetuned LLM with 16K context window via positional interpolation on long
context tasks, while taking much less computation. More importantly, we
demonstrate that retrieval can significantly improve the performance of LLMs
regardless of their extended context window sizes. Our best model,
retrieval-augmented Llama2-70B with 32K context window, outperforms
GPT-3.5-turbo-16k and Davinci003 in terms of average score on nine long context
tasks including question answering, query-based summarization, and in-context
few-shot learning tasks. It also outperforms its non-retrieval Llama2-70B-32k
baseline by a margin, while being much faster at generation. Our study provides
general insights on the choice of retrieval-augmentation versus long context
extension of LLM for practitioners. | Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro | 2023-10-04T17:59:41Z | http://arxiv.org/abs/2310.03025v2 | # Retrieval meets Long Context Large Language Models
###### Abstract
Extending the context window of large language models (LLMs) is getting popular recently, while the solution of augmenting LLMs with retrieval has existed for years. The natural questions are: _i) Retrieval-augmentation versus long context window, which one is better for downstream tasks? ii) Can both methods be combined to get the best of both worlds?_ In this work, we answer these questions by studying both solutions using two state-of-the-art pretrained LLMs, i.e., a proprietary 43B GPT and LLaMA2-70B. Perhaps surprisingly, we find that LLM with 4K context window using simple retrieval-augmentation at generation can achieve comparable performance to finetuned LLM with 16K context window via _positional interpolation_ on long context tasks, while taking much less computation. More importantly, we demonstrate that retrieval can significantly improve the performance of LLMs regardless of their extended context window sizes. Our best model, retrieval-augmented LLaMA2-70B with 32K context window, outperforms GPT-3.5-turbo-16k and Davinci003 in terms of average score on seven long context tasks including question answering and query-based summarization. It also outperforms its non-retrieval LLaMA2-70B-32k baseline by a margin, while being much faster at generation. Our study provides general insights on the choice of retrieval-augmentation versus long context extension of LLM for practitioners.
## 1 Introduction
The long context large language models (LLM) have recently received a lot of attention in production (e.g., Anthropic, 2023; OpenAI, 2023b), research community (e.g., Chen et al., 2023; Liu et al., 2023; Tworkowski et al., 2023), and open source community (e.g., Kaikoednev, 2023). Although the _approximate_ attention methods have been studied for years (e.g., Tay et al., 2022) (due to the quadratic time and memory complexities of self-attention mechanism in sequence length), the recent advance for long context LLMs with _exact_ attention is mainly driven by the development of faster GPU with more memory and memory-efficient exact attention (Dao et al., 2022; Dao, 2023).
An alternative and long-standing solution for handling long context is _retrieval_. Specifically, the LLMs only read relevant context retrieved from a standalone retriever (e.g., Karpukhin et al., 2020; Wang et al., 2022; Lin et al., 2023), which is much easier to scale 1 and runs orders of magnitudes faster than LLMs for selecting relevant context. Conceptually, the retrieval-augmented decoder-only LLM can be viewed as applying the sparse attention over its long context window, where the sparsity pattern is not predefined as Child et al. (2019) but determined by the standalone retriever. In other words, unretrieved context is treated as irrelevant and has zero-valued attention weights.
Footnote 1: The dense embedding retriever can easily retrieve context from billions of tokens using the fast similarity search library (Johnson et al., 2019).
Given the surge of interest in long context LLM research and much more required computation at inference 2, it is still unclear for practitioners whether extending the context window of LLM
provides higher accuracy than the retrieval augmentation for downstream tasks with informative queries. Moreover, it would be compelling if we could combine the strength of both methods and achieve even higher accuracies. In this work, we attempt to answer the above questions through a comprehensive study.
Specifically, we make the following contributions:
1. We perform comprehensive study using two state-of-the-art LLMs, a proprietary 43B pre-trained GPT and LLaMA2-70B (Touvron et al., 2023b) on 7 downstream long context tasks, including single and multi document question answering (QA) as well as query-based summarization.
2. We demonstrate that retrieval-augmentation significantly improves the performance of 4K context LLMs. Perhaps surprisingly, we find this simple retrieval-augmented baseline can perform comparable to 16K long context LLMs, i.e., average score 29.32 vs. 29.45 by using GPT-43B, and 36.02 vs. 36.78 by using LLaMA2-70B, while using much less computation.
3. Furthermore, we demonstrate that the performance of long context LLM (i.e., 16K or 32K) can still be improved by retrieval, especially for the larger LLaMA2-70B. As a result, our best model, retrieval augmented LLaMA2-70B-32k-ret with 32K context window (avg. score 43.6), outperforms GPT-3.5-turbo-16k (avg. score 42.8) and Davinci-003 in terms of average score. It also largely outperforms its non-retrieval LLaMA2-70B-32k baseline (avg. score 40.9), while can be much faster at generation (e.g., 4\(\times\) faster on NarrativeQA).
We organize the rest of the paper as follows. We discuss related work in Section 2, and present the experimental setup in Section 3. We report results in Section 4 and conclude the paper in Section 5.
## 2 Related Work
In this section, we discuss the related work in long context LLM, efficient attention methods, and retrieval-augmented language models.
### Long Context Large Language Models
Over the past few years, pretraining large language models (LLMs) with long context window becomes a viable solution thanks to faster GPU with more memory and memory-efficient exact attention (e.g., Dao et al., 2022). For example, the context window for pretrained LLM have been increased from 1024 of GPT-2 (Radford et al., 2019), 2048 of GPT-3 (Brown et al., 2020), 4096 of Llama 2 (Touvron et al., 2023b), to 8192 of GPT-4 (OpenAI, 2023a). However, further extending the context window in pretraining can be challenging, because, _i_) pretraining LLM from scratch with long context (e.g., >16K tokens) is very expensive due to the quadratic time and memory complexities of exact attention, and _ii_) most of documents in pretraining corpus (e.g., Common Crawl) are relatively short.
Most recently, researchers start to extend the context window of LLMs with continued training or fine-tuning (e.g., Kaiokendev, 2023; Nijkamp et al., 2023; Chen et al., 2023; Tworkowski et al., 2023; Mohtashami & Jaggi, 2023). Tworkowski et al. (2023) introduced LongLLaMA by fine-tuning the 3B and 7B OpenLLaMA checkpoints with contrastive training on 8K context length. Landmark attention (Mohtashami & Jaggi, 2023) extends the context length of LLaMA 7B from 4K to 32K by introducing "landmark tokens" to represent blocks of the context and fine-tuning the attention to use landmark tokens for selecting relevant blocks. Chen et al. (2023) and Kaiokendev (2023) introduced _positional interpolation_ to extend the context window sizes of RoPE-based (Su et al., 2021) pretrained LLMs. In particular, Chen et al. (2023) demonstrates promising results on LLaMA 7B to 65B (Touvron et al., 2023a) with minimal fine-tuning effort (within 1000 steps). ALBi (Press et al., 2021) extrapolates context window length by removing the positional embeddings while simply biasing the key-query attention scores with a linear penalty that is proportional to their distance, so one does not need finetuning for context window extrapolation. Ratner et al. (2023) chunks long context into multiple sub-windows and re-use the positional embeddings across these windows, thus can handle longer context without any further finetuning. In this work, we apply _positional interpolation_ method to extend the 4K context window of a proprietary 43B pretrained LLM and LLaMA2-70B (Touvron et al., 2023b) to 16K and 32K, as they both use rotary position embedding
at pretraining. In terms of evaluation, we focus on downstream task performance (e.g., Shaham et al., 2023; Bai et al., 2023) after instruction tuning (Wei et al., 2021).
There are other studies showing the interplay between retrieval-augmentation and long context LLM. Liu et al. (2023) performs the black-box evaluation for the long context capability of existing LLM products, including ChatGPT 3.5 (OpenAI, 2022), GPT-4 (OpenAI, 2023a), Claude (Anthropic, 2023), in retrieval-augmented setting, and identify the "lost in the middle" phenomenon in these models.
### Efficient Attention Methods
In previous study, many approximate attention methods (Tay et al., 2022) have been introduced for dealing with the quadratic complexity of self-attention that becomes a computational bottleneck for long context. They can be grouped into the following categories: _i)_ Sparse attention mechanisms with predefined sparsity patterns (e.g., Child et al., 2019; Parmar et al., 2018; Ho et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020; Zhu et al., 2021), _ii)_ recurrence-based method (Dai et al., 2019; Bulatov et al., 2022), _iii)_ low-rank projection attention (e.g., Wang et al., 2020; Xiong et al., 2021; Tay et al., 2021; Zhu et al., 2021), _iv)_ memory-based mechanisms (e.g., Rae et al., 2020; Liu et al., 2018), _v)_ similarity and clustering based methods (e.g., Kitaev et al., 2020; Tay et al., 2020; Roy et al., 2021). These approximate methods introduce inductive bias (e.g., predefined sparsity) that can fit well for specific domain, but may reduce model quality in general LLM training.
Most recently, FlashAttention (Dao et al., 2022; Dao, 2023) is introduced to speed up the exact attention computation by accounting for reads and writes between levels of GPU memory. FlashAttention is particularly useful for handling longer sequences.
### Retrieval-augmented Language Models
Retrieval has been integrated into language models for years to improve perplexity (Borgeaud et al., 2022; Wang et al., 2023), factual accuracy (Nakano et al., 2021), downstream task accuracy (Guu et al., 2020; Izacard & Grave, 2021; Izacard et al., 2022; Lewis et al., 2020), and in-context learning capability (Huang et al., 2023). Combined with a standalone retriever (Karpukhin et al., 2020; Wang et al., 2022; Lin et al., 2023), retrieval-augmented LLM is well established for handling question answering with long document and in open-domain. In previous study, language models have been augmented with retrieval at inference (Khandelwal et al., 2019; Yogatama et al., 2021), fine-tuning (Izacard et al., 2022; Lewis et al., 2020; Guu et al., 2020), and pretraining (Borgeaud et al., 2022; Izacard et al., 2022; Wang et al., 2023). There are also methods that try to integrate LLM and retriever in a single model and build the end-to-end solution (e.g., Jiang et al., 2022; Shi et al., 2023). However, most of previous works mainly study retrieval-augmentation for LLMs that have around 10 billion parameters, except a few recent ones (e.g., Shi et al., 2023).
In this work, we focus on decoder-only LLMs with 43B and 70B parameters trained on trillions of tokens, because the LLMs at such scale exhibit strong zero-shot capability to incorporate context after instruction tuning (Wei et al., 2021; 2022).
### Concurrent work
When we are preparing this manuscript, we notice that a concurrent work (Bai et al., 2023) (arXived on 28 Aug 2023) also studies the impact of retrieval on long context LLM, including black-box model GPT-3.5-Turbo-16k (OpenAI, 2022), white-box model Llama2-7B-chat-4k (Touvron et al., 2023b), and ChatGLM2-6B-32k (Zeng et al., 2022). Different from our findings, they find that retrieval is only helpful for Llama2-7B-chat-4k with 4K context window, but not helpful for long context model, i.e., GPT-3.5-Turbo-16k and ChatGLM2-6B-32k. We hypothesize the major reasons are: _i)_ it is challenging to do controlled experiments using black-box APIs, _ii)_ the white-box LLMs used in their study are relatively small, thus they have limited zero-shot capability of incorporating context through retrieval. Our conclusions are drawn from much larger LLMs. In particular, our best long context model LLaMA2-70B-32k performs as well as ChatGPT-3.5, while it can still be further enhanced by retrieval (see Table 3).
Experimental Setup
In this section, we present the details of our experimental setup.
### Large Language Models
We focus on comparing the zero-shot capability of integrating long context information for generative QA or summarization tasks via retrieval or LLM's own self-attention mechanism. In contrast to most existing works that focus on relatively small models (e.g., 3B or 7B) (Kaiokendev, 2023; Nijkamp et al., 2023; Tworkowski et al., 2023; Mohtashami and Jaggi, 2023), we gather the insights by exploring model sizes that are larger than 40B after instruction tuning, as previous study suggests that instruction tuning becomes effective when the decoder-only LLM has around 50B parameters (Wei et al., 2021, 2022).
Specifically, we experimented with two pretrained GPT models, a proprietary Nemo GPT-43B and LLaMA2-70B. GPT-43B is a 43 billion parameter model that is trained with 1.1T tokens with 70% English corpus and the other 30% for multilingual and code data. For the English pretraining corpus, GPT-43B used Common Crawl web archive (WARC), Wikipedia, Reddit, Books, Gutenberg, ArXiv, StackExchange, PubMed, etc. It contains 48 layers with the hidden dimension of 8,192. It is trained with a sequence length of 4,096 and RoPE embeddings (Su et al., 2021). The other LLaMA2-70B is a public available 70B GPT model trained on 2T tokens using around 90% English data. It contains 80 layers with the hidden dimension of 8,192. It also has the context window size of 4,096 and trained with RoPE embeddings.
### Datasets and Metrics
In this study, we include seven datasets ranging from single document QA, multi document QA, to query-based summarization for our zero shot evaluations. Specifically, we include four datasets from the validation set of the Scroll benchmark (Shaham et al., 2022).
* **QMSum (QM)** (Zhong et al., 2021) is a query-based summarization dataset, consisting of 232 meetings' transcripts and their corresponding summaries from multiple domains such as academic, industrial product. Annotators were tasked with writing queries basing on the contexts and ensuring that the relevant text for answering each query spans contains at least 200 words or 10 turns.
* **Qasper (QASP)** (Dasigi et al., 2021) is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC) (Lo et al., 2020). Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.
* **NarrativeQA (NQA)** (Kocisky et al., 2018) is an established question answering dataset over entire books from Project Gutenberg3 and movie scripts from a list of websites. Summaries of the books and scripts obtained from Wikipedia were given to the annotators to produce question-answer pairs, resulting in approximately 30 questions and answers for each of the 1,567 books and scripts. Each question was answered by providing two reference answers. Footnote 3: [https://www.gutenberg.org/](https://www.gutenberg.org/)
* **QuALITY (QLTY)** (Pang et al., 2022) is a multiple-choice question answering dataset over stories and articles sourced from several resources, such as Project Gutenberg and the Open American National Corpus4. 50% of the questions in QuALITY are labeled as _hard_ to ensure the whole given document must be read slowly to conclude a correct answer, i.e., a skim of the document always yields wrong answers. Footnote 4: [https://anc.org/](https://anc.org/)
We take another three datasets from LongBench (Bai et al., 2023).
* **MuSiQue (MSQ)** (Trivedi et al., 2022) stands for Multihop Questions via Single-hop Question Composition aiming at multihop reasoning question answering. A bottom-up process of constructing multihop from single-hop questions allows systematic exploration of a large space of multihop candidates and greater control over which questions that are composed manually. In order to correctly generate the answers, LLMs require connected reasoning by reducing potential
reasoning shortcuts, minimizing train-test leakage, and including harder distractor contexts. Thus, MuSiQue is significantly less cheatable via disconnected reasoning than previous datasets.
* **HotpotQA (HQA)**(Yang et al., 2018) is a Wikipedia-based question-answer dataset with several key features. First, multiple supporting documents are required to be read for answering and reasoning. Second, the questions are diverse and not constrained to any pre-existing knowledge bases. Third, sentence-level supporting are provided with strong supervision to support LLM's requirement for reasoning. Finally, new types of factoid comparison questions are provided to test LLMs' ability to extract and compare various entity properties in text.
* **MultiFieldQA-en (MFQA)**(Bai et al., 2023) was manually curated to better test the model's long context understanding ability across diverse fields. Documents and articles from multiple sources, including legal documents, government reports, encyclopedias, and academic papers are collected. Ph.D. students were asked to annotate the questions and answers for each article. The evidences are fairly randomly located in the documents to avoid biases that might occur at the beginning or ending of the documents.
The full details of the dataset can be found in Table 1. We can see that our evaluation datasets have a wide range of average document length from 4.9k (QASP) to 84k (NQA). Therefore, for the baseline model without retrieval, we truncate the document accordingly to fit into the input sequence length.
Following the official metrics, we report the geometric mean of ROUGE scores (i.e., ROUGE-1/2/L) (Lin, 2004) for QM, the exact matching (EM) score for QLTY, and F1 scores for the remaining five datasets QASP, NQA, MSQ, HQA and MFQA.
### Context Window Extension
We extend the context window length with position interpolation method (Chen et al., 2023), as it is simple and effective for RoPE embeddings. We extend the 4K context window to 16K for GPT-43B. For LLaMA2-70B, we extend its 4K context window to 16K and 32K. We follow Chen et al. (2023) and finetune both LLMs on the Pile dataset (Gao et al., 2021) with batch size as 128, constant learning rate of 5e-6 to adapt the position embeddings.
### Retrieval
For the retriever, we experimented with three retrievers: 1) _Dragon_(Lin et al., 2023) as it achieves state-of-the-art results on both supervised and zero-shot information retrieval benchmarks (Thakur et al., 2021). Dragon is a dual encoder model that consists of a query encoder and a context encoder. 2) a widely used _Contriever_ model (Izacard et al., 2021). Following the MoCo technique (He et al., 2020), Contriever used a simple contrastive learning framework to pre-train models for information retrieval. It was trained without supervision and achieved competitive results with BM25 for R@100 on the BEIR benchmark (Thakur et al., 2021), and 3) _OpenAI embedding5_. For the OpenAI embedding model, we use the latest "text-embedding-ada-002" as recommended by OpenAI. It accepts 8,191 maximum input tokens for one sequence with an output vector of 1,536 dimensions. The cosine similarities are then computed between the questions and the list of contexts for retrieval ranking.
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & QM & QASP & NQA & QLTY & MSQ & HQA & MFQA \\ \hline \# of samples & 200 & 1,726 & 2,000 & 2,000 & 200 & 200 & 150 \\ avg doc length & 14,140 & 4,912 & 84,770 & 6,592 & 16,198 & 13,319 & 7,185 \\ avg top-5 chunks & 2,066 & 2,071 & 2,549 & 2,172 & 2,352 & 2,322 & 2,385 \\ avg top-10 chunks & 4,137 & 3,716 & 5,125 & 4,018 & 4,644 & 4,554 & 4,305 \\ avg top-20 chunks & 8,160 & 4,658 & 10,251 & 5,890 & 9,133 & 8,635 & 6,570 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of seven datasets used for zero-shot evaluation. All lengths are counted by the number of tokens using LLaMA2-70B tokenizer, and “avg top N chunks” denotes the average number of tokens from the top N retrieved chunks. Figure 1 gives more details.
To use these retrievers, we first chunk each context document with 300 words, and then we encode both the questions and all chunks independently with corresponding encoders. The most relevant N chunks, ranked by the dot product of the question embedding and chunk embedding, are then concatenated together (following the left to right order from the most relevant to least relevant) as the context of the prompt for generation. Table 1 shows the statistics of the top N retrieved chunks while Figure 1 gives more details of the token length distribution of all seven datasets. Note that, some dataset like Qasper (QASP) is relatively short and don't have up to 20 chunks, so the average length of top-10 chunks and top-20 chunks are close. We can see that top-5 chunks can all fit into 4k sequence length (except few outliers) while top-10 and top-20 chunks can fit into 16k sequence length.
### Instruction Tuning
To train the pretrained LLMs to follow instructions for question answering or text summarization, we also performed instruction tuning. We first construct a blend of instruction tuning datasets consisting of 102K training samples from the Soda dataset (Kim et al., 2022), ELI5 dataset (Fan et al., 2019), FLAN dataset (Wei et al., 2021), Open Assistant dataset (Kopf et al., 2023), Dolly (Conover et al., 2023) and a proprietary sourced conversational dataset, to adapt both GPT-43B and LLaMA2-70B to follow instructions. In terms of the template, we use "System: {System}unUser: {Question}unAssistant: {Answer}" as the format to support multi-turn dialogue training. As all of the tasks contain the context information for reasoning over at inference time, we add the context before the dialogue, i.e. "System: {System}unun[Context]unUser: {Question}unAssistant: {Answer}".
We finetune the LLM by taking the loss only on the {Answer} part with batch size 128 and learning rate of 5e-6 for 1000 steps. For the rest of the paper, results are all reported using the instruction tuned chat model on top of the foundational GPT-43B and LLaMA2-70B.
Figure 1: Token length distribution of the full document and the top-5, 10, 20 chunks of the seven datasets.
## 4 Results
In this section, we report the results and provide detailed analysis.
### Main Results
In Table 2, we compare different model variants with context lengths ranging from 4K to as long as 32K using GPT-43B and LLaMA2-70B. First, we find that baseline models without retrieval of 4k sequence length achieve the worst results for both GPT-43B and LLaMA2-70B. This is because the minimum average sequence length of all seven tasks exceeds 4096, the context window of the foundation models and therefore valuable texts get truncated randomly. As a result, retrieval is especially helpful for 4K LLMs e.g., LLaMA2-70B-4K is improved from 31.61 to 35.73 while GPT-43B-4K is improved from 26.44 to 29.32. Second, we observe that HotpotQA (HQA) especially favors long sequence models as the score improves from 34.64 to 43.97 for LLaMA2-70B and from 28.91 to 37.48 for GPT-43B when the sequence length increases from 4k to 16k. This is because Hotpot QA is a multi-hop dataset where the questions are not hard to answer but all intermediate hops are necessary to get correct answer. Therefore, long context are beneficial to increase the recall of incorporating all intermediate hops.
It is quite interesting that the retrieval-augmented long context LLM (e.g., 16K and 32K) can obtain better results than retrieval-augmented 4K context LLM, even they are feed with the same top 5 chunks of evidence. We hypothesize this interesting observation is related to the "lost in the middle" phenomenon (Liu et al., 2023), where the LLMs has such "U-shaped" performance curve. Specifically, LLMs are better at utilizing relevant information that occurs at the beginning or end of its input context window. Due to this reason, the 4K context LLM tends to ignore the information in the middle of 4K input, while 32K context LLM tend to ignore the information in the middle of 32K input. From Figure 1, the length of top 5 chunks is about 2K tokens, which can be in the middle and ignored by 4K context LLM, but is only at the beginning part of 16K and 32K context and may not be ignored by the 16K or 32K context LLM.
Note that, we have very different observation from the conclusion drawn from LongBench work (Bai et al., 2023): _"Retrieval brings improvement for model with weak ability on long contexts, but the performance still lags behind models that have strong long context understanding capability"._ Here, we demonstrate retrieval can significantly improve the performance of both GPT-43B and LLaMA2-70B regardless their context window size. For example, our best retrieval-augmented LLaMA2-70B-32k-ret outperforms its baseline w/o retrieval by a margin, i.e., 39.60 vs. 37.36. We think the major reason for such different conclusion is that Bai et al. (2023) uses much smaller LLM with 6B and 7B parameters, which usually has relatively worse zero-shot capability to incorporate the retrieved chunked context. In contrast, the larger instruction tuned LLMs like LLaMA2-70B has much stronger zero-shot capability to incorporate retrieved evidence. This observation is becoming more clear when one compares the gain of retrieval-augmentation between GPT-43B and LLaMA2-70B, where LLaMA2-70B enjoys larger benefit of incorporating context through retrieval.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Model & Seq len. & Avg. & QM & QASP & NQA & QLTY & MSQ & HQA & MFQA \\ \hline GPT-43B & 4k & 26.44 & 15.56 & 23.66 & 15.64 & 49.35 & 11.08 & 28.91 & 40.90 \\ + ret & 4k & 29.32 & 16.60 & 23.45 & 19.81 & 51.55 & 14.95 & 34.26 & 44.63 \\ GPT-43B & 16k & 29.45 & 16.09 & 25.75 & 16.94 & 50.05 & 14.74 & 37.48 & 45.08 \\ + ret & 16k & **29.65** & 15.69 & 23.82 & 21.11 & 47.90 & 15.52 & 36.14 & 47.39 \\ \hline LLaMA2-70B & 4k & 31.61 & 16.34 & 27.70 & 19.07 & 63.55 & 15.40 & 34.64 & 44.55 \\ + ret & 4k & 36.02 & 17.41 & 28.74 & 23.41 & 70.15 & 21.39 & 42.06 & 48.96 \\ LLaMA2-70B & 16k & 36.78 & 16.72 & 30.92 & 22.32 & **76.10** & 18.78 & 43.97 & 48.63 \\ + ret & 16k & 37.23 & **18.70** & 29.54 & 23.12 & 70.90 & 23.28 & 44.81 & 50.24 \\ LLaMA2-70B & 32k & 37.36 & 15.37 & **31.88** & 23.59 & 73.80 & 19.07 & 49.49 & 48.35 \\ + ret & 32k & **39.60** & 18.34 & 31.27 & **24.53** & 69.55 & **26.72** & **53.89** & **52.91** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of model variants (GPT-43B, LLaMA2-70B) with sequence length ranging from 4k to 32k under seven datasets. “ret” denotes using the best retriever (Dragon or Contriever) and here we used top-5 for the retriever.
### Comparing to OpenAI models
To further understand how good is our best model, i.e., augmenting LLaMA2-70B-32k with retrieval, we also compare it to GPT-3.5-turbo(4k), GPT-3.5-turbo-16k and Davinci-003 on those seven datasets.6 We found that LLaMA2-70B-32k-ret achieves better results than GPT-3.5-turbo-16k in terms of the average accuracy over seven datasets, while better than Davinci-003 (w/ 175B parameters) on the average over 4 tasks. This indicates LLaMA2-70B-32k with retrieval is a strong model for these long context tasks, and our conclusion is built on the state-of-the-art results.
Footnote 6: For QMSun (QM), Qasper (QASP), NarrativeQA (NQA), QuALITY (QLTY), we used the test set from the ZeroSCROLLS leaderboard as the organizers have prepared the scores of GPT-3.5-turbo (4k) and Davinci-003 there.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Seq len & Setting & Avg. & QM & QASP & NQA & QLTY & MSQ & HQA & MFQA \\ \hline
4k & baseline (w/o ret) & 31.61 & 16.34 & 27.70 & 19.07 & 63.55 & 15.40 & 34.64 & 44.55 \\ & Dragon & 35.73 & 18.14 & 29.20 & 23.39 & 70.30 & 20.09 & 41.54 & 47.45 \\ & Contierer & **36.02** & 17.41 & 28.74 & 23.41 & 70.15 & 21.39 & 42.06 & 48.96 \\ & OpenAI-embedding & 35.79 & 17.76 & 28.85 & 23.57 & 70.70 & 19.92 & 41.76 & 47.99 \\ \hline
32k & baseline (w/o ret) & 37.36 & 15.37 & 31.88 & 23.59 & 73.80 & 19.07 & 49.49 & 48.35 \\ & Dragon & **39.60** & 18.34 & 31.27 & 24.53 & 69.55 & 26.72 & 53.89 & 52.91 \\ & Contierer & 38.85 & 17.60 & 31.56 & 23.88 & 69.00 & 26.61 & 49.65 & 53.66 \\ & OpenAI-embedding & 39.34 & 18.24 & 32.07 & 24.36 & 69.45 & 24.90 & 51.64 & 54.75 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparisons of adding top 5 retrieved chunks from different retrievers to the context under LLaMA2-70B. Public available retriever can be better than OpenAI-embedding.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Model & Avg-7 & Avg-4* & QM* & QASP* & NQA* & QLTY* & MSQ & HQA & MFQA \\ \hline Davinci003 (175B) & - & 40.8* & 16.9* & 52.7* & 24.6* & 69.0* & - & - & - \\ GPT-3.5-turbo (4k) & - & 39.2* & 15.6* & 49.3* & 25.1* & 66.6* & & & \\ GPT-3.5-turbo-16k & 42.8 & 42.4 & 17.6 & 50.5 & 28.8 & 72.6 & 26.9 & 51.6 & 52.3 \\ LLaMA2-70B-32k & 40.9 & 42.4 & 15.6 & 45.9 & 28.4 & 79.6 & 19.1 & 49.5 & 48.4 \\ LLaMA2-70B-32k-ret & **43.6** & **43.0** & 18.5 & 46.3 & 31.5 & 75.6 & 26.7 & 53.9 & 52.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of our best retrieval-augmented LLaMA2-70B-32k-ret with GPT-3.5-turbo-16k and Davinci-003 (175B parameters). For QMSun (QM), Qasper (QASP), NarrativeQA (NQA), QuALITY (QLTY), we used the test set from the ZeroSCROLLS leaderboard as the organizers have prepared the scores of GPT-3.5-turbo (4k) and Davinci-003 (highlighted with *). Avg-7 refers to the average score of all 7 datasets, and Avg-4* refers to the average of 4 datasets from ZeroSCROLLS.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Seq len & Setting & Avg. & QM & QASP & NQA & QLTY & MSQ & HQA & MFQA \\ \hline
4k & base & 31.61 & 16.34 & 27.70 & 19.07 & 63.55 & 15.40 & 34.64 & 44.55 \\ & top-5 & **35.73** & 18.14 & 29.20 & 23.39 & 70.30 & 20.09 & 41.54 & 47.45 \\ & top-10 & 34.62 & 16.54 & 28.67 & 24.38 & 68.70 & 19.00 & 42.18 & 42.84 \\ & top-20 & 34.61 & 16.52 & 28.67 & 24.38 & 68.70 & 19.00 & 42.18 & 42.84 \\ \hline
16k & base & 36.78 & 16.72 & 30.92 & 22.32 & 76.10 & 18.78 & 43.97 & 48.63 \\ & top-5 & 37.23 & 18.70 & 29.54 & 23.12 & 70.90 & 23.28 & 44.81 & 50.24 \\ & top-10 & **38.31** & 18.41 & 30.20 & 25.53 & 73.60 & 22.78 & 47.72 & 49.91 \\ & top-20 & 36.61 & 17.26 & 29.60 & 25.81 & 72.30 & 22.69 & 41.36 & 47.23 \\ \hline
32k & base & 37.36 & 15.37 & 31.88 & 23.59 & 73.80 & 19.07 & 49.49 & 48.35 \\ & top-5 & **39.60** & 18.34 & 31.27 & 24.53 & 69.55 & 26.72 & 53.89 & 52.91 \\ & top-10 & 38.98 & 17.71 & 30.34 & 25.94 & 70.45 & 22.80 & 55.73 & 49.88 \\ & top-20 & 38.38 & 16.36 & 30.42 & 24.42 & 69.60 & 24.51 & 54.67 & 48.65 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparisons of adding top-5/10/20 retrieved chunks to the context under 4k, 16k, and 32k input sequence lengths using LLaMA2-70B. More context does not always give better results.
### Ablation on Different Retrievers
To investigate the impacts of different retrievers on top of LLaMA2-70B, we compare Dragon, Contriever, and OpenAI embeddings on top of LLaMA2-70B-4k and LLaMA2-70B-32k. The results in Table 4 confirms that our finding, i.e., _retrieval can boost the performance of both short context and long context LLMs_, is consistent across different retrievers. Also, we observe that public available retrievers can do better than the commercially closed OpenAI embeddings.
### Increasing the number of retrieved chunks
To study the impact of adding more retrieved chunks to the context, we increase the number of retrieved chunks from 5 to 20 using Dragon retriever and the results can be found in Table 5. We observe that for different sequence lengths, the best averaged results are obtained either from top 5 or top 10. Even if 20 chunks can still fit into the 16K and 32K context window (as shown in Figure 1), adding more chunks up to 20 is not helpful and will sometime hurt the performance. We believe this is related to the "lost in the middle" phenomenon (Liu et al., 2023) or the model is getting distracted by irrelevant information and therefore needs further research.
## 5 Conclusion
In this work, we systematically study the retrieval-augmentation versus long context extension using the state-of-the-art LLMs after instruction tuning for various long context QA and query-based summarization tasks. After study, we have the following interesting findings: _i)_ Retrieval largely boosts the performance of both 4K short context LLM and 16K/32K long context LLMs. _ii)_ The 4K context LLMs with simple retrieval augmentation can perform comparable to 16K long context LLMs, while being more efficient at inference. _iii)_ After context window extension and retrieval-augmentation, the best model LLaMA2-70B-32k-ret can outperform GPT-3.5-turbo-16k and Davinci003 in terms of average score on a suit of downstream tasks with informative queries. Our study shed light on the promising direction of combining retrieval and long context techniques together to build better LLM.
|
2304.01975 | Mathematical Model for Transmission Dynamics of Tuberculosis in Burundi | Tuberculosis (TB) is among the main public health challenges in Burundi. The
literature lacks mathematical models for key parameter estimates of TB
transmission dynamics in Burundi. In this paper, the
supectible-exposed-infected-recovered (SEIR) model is used to investigate the
transmission dynamics of tuberculosis in Burundi. Using the next generation
method, we calculated the basic reproduction number, R0. The model is
demonstrated to have a disease-free equilibrium (DEF) that is locally and
globally asymptotically stable. When the corresponding reproduction threshold
quantity approaches unity, the model enters an endemic equilibrium (EE). That
means, the disease can be controlled through different interventions in
Burundi. A sensitivity analysis of the model parameters was also investigated.
It shows that the progression rate from latent to becoming infectious had the
highest positive sensitivity, which means that R0 increases and decreases
proportionally with an increase and a decrease of that progression rate. | Steve Sibomanaa, Kelly Joelle Gatore Sinigirira, Paterne Gahungu, David Niyukuri | 2023-04-04T17:32:01Z | http://arxiv.org/abs/2304.01975v1 | # Mathematical Model for Transmission Dynamics of Tuberculosis in Burundi
###### Abstract
Tuberculosis (TB) is among the main public health challenges in Burundi. The literature lacks mathematical models for key parameter estimates of TB transmission dynamics in Burundi. In this paper, the supectible-exposed-infected-recovered (SEIR) model is used to investigate the transmission dynamics of tuberculosis in Burundi. Using the next generation method, we calculated the basic reproduction number, \(R_{0}\). The model is demonstrated to have a disease-free equilibrium (DEF) that is locally and globally asymptotically stable. When the corresponding reproduction threshold quantity approaches unity, the model enters an endemic equilibrium (EE). That means, the disease can be controlled through different interventions in Burundi. A sensitivity analysis of the model parameters was also investigated. It shows that the progression rate from latent to becoming infectious had the highest positive sensitivity, which means that \(R_{0}\) increases and decreases proportionally with an increase and a decrease of that progression rate.
keywords: Tuberculosis, Reproduction number, disease free-equilibrium, endemic equilibrium, Lyapunov function, Burundi +
## 1 Introduction
Tuberculosis (TB) is an airborne disease caused by Mycobacterium tuberculosis (MTB)[1], and it is reported to be the second leading cause of morbidity and mortality in the world from a single infectious agent after the human immunodeficiency virus (HIV)[2]. The MTB spreads through inhaling droplets from the cough or sneeze of a person suffering from active tuberculosis. The bacteria enters the body causing an MTB infection affecting mainly the lungs, but it can also affect any other part of the body including the urinary tract, brain, lymph nodes, bones, joints and the ear. Person with lowered immunity such as those with HIV, diabetes, immune disorders, and stage renal disease, those on drugs that suppress immunity, young children, pregnant women among others are at a higher risk of contracting the disease[3; 4].
In Burundi, TB remains a public health problem, it is rampant in an endemo-epidemic mode and all layers of the population are concerned. In general, infection with tuberculosis is very likely to be asymptomatic for healthy people. The lifetime risk of developing clinically active TB after being infected is about 10% [5]. People who have latent TB infection are not clinically ill or capable of transmitting TB [5; 6]. The immunity of older people who have previously been infected may decrease, and they may then be at risk of developing active TB of either exogenous reinfection (that means acquisition of a new infection from another infectious individual) or an endogenous reactivation of latent bacilli (that means the reactivation of a pre-existing dormant bacillus infection)[7]. Latent and active tuberculosis can be treated with antibiotics. However, its treatment has a side effects (sometimes quite serious) and takes a long time.
Carriers of the tuberculosis bacillus who have not developed tuberculosis disease can be treated with only one Isoniazid, also known as isonicotinic acid hydrazide (INH). For active tuberculosis it is often used together with rifampicin, pyrazinamide, and either streptomycin or ethambutol[8]. Unfortunately, it should be taken religiously for 6-9 months. Treatment of people with active tuberculosis requires simultaneous use of three drugs for a period of at least 12 months. Lack of compliance with these drug treatments, do not only can cause a relapse, but the development of antibiotic resistant tuberculosis, one of the most serious public health problems facing today's
society.
Studying the spread of TB using statistics and mathematics models did not receive enough attention in Burundi. As a result, we only observed a very limited use of mathematical models in studying the dynamics of TB transmission in Burundi's human populations. In the broad scientific literature, communicable diseases such as measles, influenza, rubella, among others, have been studied by many mathematical models [9; 10; 11]. These diseases have a number of traits, such as the fact that they frequently create epidemics and that transmission rates are greatly influenced by age-dependent contact rates. The etiological agents of these communicable diseases are viruses of different families but all capable of generating similar symptoms. Waaler and Anderson were the first who modeled mathematically tuberculosis transmission dynamics[12].
In this work, a mathematical model for the transmission dynamics of TB in Burundi has been developed. We determine the existence and positivity of the system, and we provide the disease's equilibrium points and the reproduction number. The stability of the disease equilibrium is then determined. A sensitivity analysis is performed on the model parameters. Based on the TB data from Burundi, simulations and interpretations of the results will be carried out.
## 2 Methods
### Mathematical model
A mathematical model is be established using Susceptible-Exposed-Infected-Recovered (SEIR) compartmental approach as shown on Figure 1, where \(S(t)\), susceptible humans, \(E(t)\), exposed humans to tuberculosis, \(I(t)\), infected humans with active tuberculosis, \(R(t)\), recovered humans at time \(t\geq 0\). We consider that susceptible population keep increasing by the human birth at \(h\) rate. Thee loose of immunity is considered to be at rate \(\rho\), and there is no permanent immunity to tuberculosis. Humans can contract MTB tuberculosis through contact with individuals who are infected with the disease. Therefore, they enter the exposed (latent) with \(\beta\) rate. A proportion of the exposed class develop active tuberculosis, thus, moving into the infectious class with \(\varepsilon\) rate of progression. If treatment is administered promptly, those who recover from the disease will move to the recovered class at \(\gamma\) rate. Each human class decreases also by natural mortality at \(\mu\) rate, except the infectious class which adds the mortality caused by the MTB tuberculosis
at \(\tau\) rate. \(N(t)\) is used to denote the total population at time \(t\) such that \(N(t)=S(t)+E(t)+I(t)+R(t)\). All parameters of the model are assumed to be non-negative and the total human is assumed to be constant.
From Figure 1, we can write the following system of ordinary differential equations:
\[\left\{\begin{array}{lll}\frac{dS(t)}{dt}&=&hN(t)-\mu S(t)-\beta S(t)I(t)+ \rho R(t),\\ \\ \frac{dE(t)}{dt}&=&\beta S(t)I(t)-(\mu+\varepsilon)E(t),\\ \\ \frac{dI(t)}{dt}&=&\varepsilon E(t)-(\mu+\tau+\gamma)I(t),\\ \\ \frac{dR(t)}{dt}&=&\gamma I(t)-(\mu+\rho)R(t).\end{array}\right. \tag{1}\]
The description of parameters and variables is the following
**Variables**:
\(S(t)\): susceptible humans,
\(E(t)\): exposed humans to tuberculosis,
\(I(t)\): infected humans with active tuberculosis,
\(R(t)\): recovered humans,
Figure 1: Diagram of the Susceptible-Exposed-Infected-Recovered (SEIR) Model
\(N(t)\): total population.
**Parameters**: \(h\): human birth rate
\(\beta\): rate at which the susceptibles become exposed to MTB
\(\rho\): progression rate from recovered to susceptible class
\(\varepsilon\): progression rate from latent to infectious class
\(\gamma\): progression rate from recovered to susceptible class
\(\mu\): natural human death rate
\(\tau\): human death rate caused by the MTB tuberculosis
### Existence and Positivity of Solutions
The system 1 is epidemiologically and mathematically well-posed in the feasible region \(\Gamma\) given by
\[\Gamma=\{(S,L,I,R)\in\mathbb{R}_{+}^{4}:\ 0<\mu\leq h\}. \tag{2}\]
**Proposition 2.1**.: _The system (1) always admits positive solutions for all positive initial conditions and the biological region \(\Gamma\in\mathbb{R}_{+}^{4}\) is positively invariant and globally attractive for the same system._
_Proof_:
From the system 1, we have
\[\left\{\begin{array}{lll}\frac{dS(t)}{dt}&\geq&-(\mu+\beta I(t))S(t),\\ \frac{dE(t)}{dt}&\geq&-(\mu+\varepsilon)E(t),\\ \frac{dI(t)}{dt}&\geq&-(\mu+\tau+\gamma)I(t),\\ \frac{dR(t)}{dt}&\geq&-(\mu+\rho)R(t).\end{array}\right. \tag{3}\]
Integrating each inequality of the system 3 yields
\[\left\{\begin{array}{lll}S(t)&\geq&S(0)\exp\{-(\mu t+\beta\int_{0}^{n}I(n) dn)\},\\ E(t)&\geq&E(0)\exp\{-(\mu+\varepsilon)t\},\\ I(t)&\geq&I(0)\exp\{-(\mu+\tau+\gamma)t\},\\ R(t)&\geq&R(0)\exp\{-(\mu+\rho)t\}.\end{array}\right. \tag{4}\]
where \(S(0),E(0),I(0),R(0)\) are all positive initial conditions. From the system (4), all state variables [that means \((S(t),E(t),I(t),R(t))\) ] are all positive for all \(t\geq 0\). Thus, the solutions of model 1 remain positive in \(\Gamma\) for all time \(t\geq 0\).
Add member to member the equations of the system (1), we notice that the total of the human population satisfies the following relation:
\[\dot{N}(t)=hN(t)-\mu N(t)-\tau I(t). \tag{5}\]
We have \(\dot{N}(t)\leq hN(t)-\mu N(t)\) according the equation 5. Then, \(\dot{N}(t)\leq 0\) if and only if \(h\leq\mu\). So, \(N(t)\leq N(0)\exp[(h-\mu)t]\). We conclude that the region \(\Gamma\) is positively invariant. Moreover, if \(h<\mu\), we see that the solution of the system 1 enter in \(\Gamma\) in finite time, that means \(N(t)\) asymptotically approaches \(h\).
Therefore, all solutions in \(\mathbb{R}^{4}_{+}\) eventually enter \(\Gamma\) that is the biological region \(\Gamma\) is globally attractive for the same system. So the problem is then mathematically and epidemiologically well posed. Hence, every solution of the model 1 with initial conditions in \(\Gamma\) remains in \(\Gamma\) for all \(t\geq 0\).
### Disease Equilibria points
#### 2.3.1 Disease Free Equilibrium (DFE)
A disease free equilibrium point is a solution of the system (1) in holding that there is no disease in population. In this case \(E=I=R=0\). Therefore in our case the DFE, \(E_{0}\), is expressed as
\[E_{0}=\left(\frac{Nh}{\mu},0,0,0\right). \tag{6}\]
#### 2.3.2 Existence of Endemic Equilibrium (EE)
Using the definition of an equilibrium point and doing the substitutions the endemic equilibrium, \(E_{1}\), is expressed as
\[E_{1}=(S^{*},E^{*},I^{*},R^{*}). \tag{7}\]
where
\[\left\{\begin{array}{lcl}S^{*}&=&\frac{(\mu+\varepsilon)(\mu+\tau+\gamma)}{ \beta\varepsilon},\\ \\ E^{*}&=&\frac{(\mu+\rho)(\mu+\tau+\gamma)[hN\beta\varepsilon-\mu(\mu+ \varepsilon)(\mu+\tau+\gamma)]}{\beta\varepsilon[(\mu+\rho)(\mu+\varepsilon)( \mu+\tau+\gamma)-\gamma\varepsilon\rho]},\\ \\ I^{*}&=&\frac{(\mu+\rho)[hN\beta\varepsilon-\mu(\mu+\varepsilon)(\mu+\tau+ \gamma)]}{\beta[(\mu+\rho)(\mu+\varepsilon)(\mu+\tau+\gamma)-\gamma\varepsilon \rho]},\\ \\ R^{*}&=&\frac{\gamma[hN\varepsilon\beta-\mu(\mu+\varepsilon)(\mu+\tau+ \gamma)]}{\beta[(\mu+\rho)(\mu+\varepsilon)(\mu+\tau+\gamma)-\gamma\varepsilon \rho]}.\end{array}\right. \tag{8}\]
Therefore, the endemic equilibrium (EE), denoted \(E_{1}\), given by equation (7) exists whenever the associated reproduction threshold quantity, \(R_{0}\), exceeds unity. In other hand, no endemic equilibrium.
### Basic Reproduction Number
The reproduction number, \(R_{0}\), which measures the spread of infections in a population, was computed by next generation matrix approach[13; 14]. Mathematically, \(R_{0}\), is a treshold for stability of a disease-free equilibrium (DFE) and is related to the peak and final size of an epidemic. In other hand, \(R_{0}\) is defined as the average number of new cases an infection caused by one typical infected individual, in a population consisting of susceptibles only [14].
Using this approach, the basic reproduction number, \(R_{0}\), is given by \(R_{0}=\rho(-FV^{-1})\), where
\[F = \begin{pmatrix}0&\frac{\beta hN}{\mu}\\ \\ 0&0\end{pmatrix} \tag{9}\]
and
\[V^{-1} = \begin{pmatrix}\frac{-1}{\mu+\varepsilon}&0\\ \\ \frac{-\varepsilon}{(\mu+\varepsilon)(\gamma+\mu+\tau)}&\frac{-1}{\gamma+\mu+ \tau}\end{pmatrix}\,. \tag{10}\]
Which means that \(R_{0}\) is given by dominant eigenvalues of \(-FV^{-1}\), where \(F\) is the transmission part, describing the production of new infections, and
the \(V^{-1}\) is the inverted matrix of \(V\) where \(V\) is the transition part, describing changes in state. Thus,
\[R_{0}=\frac{\beta Nh\varepsilon}{\mu(\mu+\varepsilon)(\gamma+\mu+\tau)}. \tag{11}\]
### Stability Analysis of disease equilibria
#### 2.5.1 Stability of DFE
**Theorem 2.1**.: _The disease free equilibrium \(E_{0}\) of the model 1, given by equation 6, is locally asymptotically stable (LAS) if \(R_{0}\leq 1\) and unstable if \(R_{0}>1\)._
_Proof_: This theorem will be proved based on the notions of the Jacobian matrix. Let then be a point \(M=(S,E,I,R)\) and \(J(M)\) its Jacobian. We have:
\[J(M)=\begin{pmatrix}-(\mu+\beta I)&0&-\beta S&\rho\\ \beta I&-(\mu+\varepsilon)&\beta S&0\\ 0&\varepsilon&-(\mu+\tau+\gamma)&0\\ 0&0&\gamma&-(\mu+\rho)\end{pmatrix}. \tag{12}\]
The Jacobian of DFE, \(E_{0}\), expressed in equation 6. Finding the eigenvalues, \(z\), of \(J(E_{0})\) amounts to solving the equation
\[P(z)=0. \tag{13}\]
where \(P(z)\) is the characteristic polynomial of \(J(E_{0})\), that is
\(P(z)=|J(E_{0})-z\mathbb{I}_{4}|\).
The equation 13 becomes:
\[(\mu+z)=0, \tag{14}\]
or
\[(\mu+\rho+z)=0 \tag{15}\]
or
\[z^{2}+(2\mu+\varepsilon+\tau+\gamma)z+(\mu+\varepsilon)-\frac{h\beta\varepsilon N }{\mu}=0 \tag{16}\]
Then, we have:
\[z_{1}=-\mu, \tag{17}\]
or
\[z_{2}=-(\mu+\rho), \tag{18}\]
or
\[z^{2}+(2\mu+\varepsilon+\tau+\gamma)z+(\mu+\varepsilon)(\mu+\tau+\gamma)- \frac{h\beta\varepsilon N}{\mu}=0, \tag{19}\]
with Descartes' rule of signs[15], if \(R_{0}\leq 1\), all the coefficients of the polyom characterizing the left side of the equation 19 are strictly positive, then it does not have a positive root. From 17 18 19, if \(R_{0}\leq 1\), \(J(E_{0})\) has all its eigenvalues with strictly negative real part, hence \(E_{0}\) is locally asymptotically stable (LAS) by the Poincarre-Lyapunov theorem[15, 16].
Otherwise, with the same rule, if \(R_{0}>1\), there is a variation of coefficients of \(P(z)\), then \(P(z)\) has at least one positive root. So, \(J(E_{0})\) has at least one eigenvalue with a strictly positive real part, hence then \(E_{0}\) is unstable by the Poincarre-Lyapunov theorem. \(\Box\)
The epidemiological implication of theorem 2.1 is that Tuberculosis spread can be effectively controlled in the community when \(R_{0}\leq 1\) that means if the initial sizes of the populations of the model are in the basin of attraction of the disease free equilibrium \(E_{0}\). To ensure that elimination of TB is independent of the initial sizes of the populations, it is necessary to show that the DFE is globally asymptotically stable[17]. It is shown below that tuberculosis will be eliminated from the community if the epidemiological threshold can be reduced to a value below unity.
**Theorem 2.2**.: _The disease free equilibrium \(E_{0}\) of the model 1, given by equation 6, is globally asymptotically stable (GAS) in \(\Gamma\) whenever \(R_{0}\leq 1\) and unstable if \(R_{0}>1\)._
_Proof_: Consider the following Lyapunov candidate function for \(E_{0}\):
\[L=\varepsilon E+(\mu+\varepsilon)I+AR\ \ \mbox{with}\ A=\frac{\varepsilon\beta N (h-\mu)}{\gamma\mu}. \tag{20}\]
The first derivative of \(L\) along the solutions of model 1 is
\[\dot{L} = \varepsilon[\beta SI-(\mu+\varepsilon)E]+(\mu+\varepsilon)[ \varepsilon E-(\mu+\tau+\gamma)I]+A[\gamma I-(\mu+\rho)R] \tag{21}\] \[\leq [\varepsilon\beta S-(\mu+\varepsilon)(\mu+\tau+\gamma)+A\gamma]I\] \[\leq (\mu+\varepsilon)(\mu+\tau+\gamma)(R_{0}-1)I.\]
Since all the model parameters are non negative, it follows that \(\dot{L}\leq 0\) for \(R_{0}\leq 1\). For \(R_{0}=1\), \(\dot{L}=0\) if and only if \(E=I=R=0\). As \(t\longrightarrow\infty\), substituting these values in model 1, we have that \(S\longrightarrow\frac{Nh}{\mu},\ E\longrightarrow 0,\ I\longrightarrow 0,\ \mbox{and}\ R \longrightarrow 0\). Hence, \(L\) is a Lyapunov candidate function on \(\Gamma\), and the largest compact invariant set in \(\{(S,E,I,R)\in\Gamma:\dot{L}=0\}\) is the singleton \(\{E_{0}\}\). Therefore, by LaSalle's Invariance Principle [17, 18], every solution of model 1, with initial conditions in \(\Gamma\), approaches \(E_{0}\) as \(t\longrightarrow\infty\), whenever \(R_{0}\leq 1\) that is the disease free equilibrium \(E_{0}\) of the model 1, given by equation 6, is globally asymptotically stable (GAS) in \(\Gamma\) whenever \(R_{0}\leq 1\) and unstable if \(R_{0}>1\). \(\Box\)
#### 2.5.2 Stability of EE
**Theorem 2.3**.: _The endemic equilibrium \(E_{1}\) of the model 1, given by equation 7, is locally asymptotically stable (LAS) if \(R_{0}>1\)._
_Proof_: From \(N=S+E+I+R\), we deduce that \(R=N-S-E-I\). By reducing dimension of system 1 while keeping the variables \(S,E,I\), we have:
\[\left\{\begin{array}{rcl}\frac{dS(t)}{dt}&=&hN(t)-\mu S(t)-\beta S(t)I(t)+ \rho(N-S-E-I),\\ \\ \frac{dE(t)}{dt}&=&\beta S(t)I(t)-(\mu+\varepsilon)E(t),\\ \\ \frac{dI(t)}{dt}&=&\varepsilon E(t)-(\mu+\tau+\gamma)I(t).\end{array}\right.. \tag{22}\]
Let \(W=(S,E,I)\) be a point and \(J(W)\) its Jacobian. We have:
\[J(W)=\begin{pmatrix}-(\mu+\rho+\beta I)&-\rho&-(\rho+\beta S)\\ \beta I&-(\mu+\varepsilon)&\beta S\\ 0&\varepsilon&-(\mu+\tau+\gamma)\end{pmatrix}. \tag{23}\]
In this form, the reduction of endemic equilibrium point EE, \(E_{1}\), becomes \(E_{1}=(S^{*},E^{*},I^{*})\). Let us look for the eigenvalues of \(J(E_{1})\), this amounts to find the roots of the characteristic polynomial \(P(z)\) that means resolve \(P(z)=0\). By factoring determinant above, we have
\[P(z)=A(z)B(z)C(z) \tag{24}\]
with
\[A(z)=-[(\mu+\rho+\beta I^{*})+z], \tag{25}\]
\[B(z)=z^{2}+(2\mu+\varepsilon+\rho+\beta I^{*})z+(\mu+\rho+\beta I^{*})(\mu+ \varepsilon)+\rho\beta I^{*}, \tag{26}\]
and
\[C(z) = z^{4}+(3\mu+2\beta I^{*}+2\rho+\varepsilon+\tau+\gamma)z^{3}\] \[+[(2\mu+\rho+\beta I^{*}+\tau+\gamma)(2\mu+\varepsilon+\rho+\beta I ^{*})\] \[+(2\mu+\rho+\beta I^{*})(\mu+\varepsilon)+(\mu+\rho+\beta I^{*})( \mu+\tau+\gamma)+\rho\beta I^{*}]z^{2}\] \[+[(\mu+\rho+\beta I^{*})(\mu+\tau+\gamma)(2\mu+\varepsilon+\rho+ \beta I^{*})+\varepsilon\beta S^{*}]z\] \[+[(\mu+\rho+\beta I^{*})(\mu+\varepsilon)+\rho+\beta I^{*}][2\mu+ \rho+\beta I^{*}\tau+\gamma\] \[+(\mu+\rho+\beta I^{*})(\mu+\rho+\beta I^{*})(\mu+\tau+\gamma)]\] \[+\varepsilon\beta S^{*}(\mu+\rho+\beta I^{*})+\rho\beta I^{*}( \rho+\beta S^{*})\]
Resolve \(P(z)=0\Longrightarrow A(z)=0,\ \ \mbox{or}\ \ B(z)=0,\ \ \mbox{or}\ \ C(z)=0\). Thus, with Descartes' rule of signs[15], if \(R_{0}>1\), all the coefficients of the polyom characterizing the right side of the equations 26, 27 are strictly positive, then they do not have a positive root. From 25 26 27, if \(R_{0}>1\), \(J(E_{1})\) has all its eigenvalues with strictly negative real part, hence \(E_{1}\) is locally asymptotically stable (LAS) by the Poincarre-Lyapunov theorem[15, 16]. Therefore, this is the end of the theorem 2.3. \(\Box\)
This implies that the tuberculosis persist and invade the population if \(R_{0}>1\).
## 3 Numerical simulation
In this section, we are going to demonstrate the behavior of the proposed TB infection model through numerical simulations while comparing the model with actual tuberculosis data in Burundi. Data from Burundi were collected on quarterly (resp. annually) from the Ministry of Public Health between 2011 and 2019 (resp. between 1997 and 2020). As shown in Table 1, the parameter values for simulation are either from literature or reasonably chosen estimates. We used Python to run simulations and the Runge-Kutta method was used to approximate solutions of the SEIR-system.
### Fitting the model to tuberculosis data in Burundi
In this paragraph, we use systems 1 and parameters listed in the table (1) to fit the tuberculosis model to the annual data of population which are diagnostics of tuberculosis in Burundi from 1985 to 2020. So, \(S_{0}=8,895,836,\ E_{0}=375,\ I_{0}=1789\) and \(R_{0}=60000\) are assumed initial conditions of states
\begin{table}
\begin{tabular}{c|l} \hline
**Parameter** & **Parameter Value and references** \\ \hline h & Human birth rate is 38.377 per year per 1000 inhabitants [16] \\ \hline \(\beta\) & 0.9 Probability of being infected for a susceptible meted by an infectious [19] \\ \hline \(\rho\) & 0.897 Rate of progression from recovered to susceptible state _Estimated_ \\ \hline \(\gamma\) & 0.525 Rate of progression from infectious to recovered state _Estimated_ \\ \hline \(\mu\) & Human (natural) mortality rate is 7.766 per year per 1000 inhabitants [16] \\ \hline \(\varepsilon\) & 0.25 Rate of progression from exposed to infectious state [20] \\ \hline \(\tau\) & 21.0 (per 100,000 people) in 2019 human tuberculosis-induced death rate [21] \\ \hline \end{tabular}
\end{table}
Table 1: Parameters values
The yearly data in Figure (2) follows the trend more closely. In this case, we have obtained a good fit. Additionally, the yearly increase in TB cases could be attributed to poor interventions implementation.
### Simulation of TB transmission dynamics
Figure (3) demonstrates that if \(R_{0}\) is more than 1, each infectious person spreads the disease to more than one new person throughout the time of contagion, which results in the spread of the disease throughout the population and the emergence of an epidemic
and Figure (4) shows that if \(R_{0}\) less than 1 suggests that each person infects, on average, less than one new person, resulting in disease extinction.
Figure 3: Human population dynamics in relation to infectious, and recovered humans when \(R_{0}>1\)
Figure 2: Fitting the model to tuberculosis data in Burundi
## 4 Sensitivity Analysis
In order to determine the relative importance of model parameters on the disease infection, a sensitivity of parameters of the model system 1 is carried out. Sensitivity indices allow us to measure the relative change in a state variable when a parameter changes [22]. The numerical calculation of the sensitivity indices also allows the determination of parameters which have a reasonable impact on the basic reproduction number \(R_{0}\) and which of the parameters is most sensitive which can help in the eradication of the disease in the population [23].
The normalized forward sensitivity index of a variable to a parameter is the ratio of the relative change in the variable to the relative change in the parameter. When the variable is a differentiable function of the parameter, the sensitivity index may be alternatively defined using partial derivatives [24; 22; 23].
**Definition 4.1**.: _The normalized forward sensitivity index of a variable, \(u(p)\), that depends differentiably on a parameter, \(p\), is defined as:_
\[K_{p}^{u}=\frac{\partial u}{\partial p}\times\frac{p}{u},\quad\text{with}\ \ u\neq 0 \tag{28}\]
Therefore, we derive an analytical expression for the sensitivity of \(R_{0}\) with \(R_{0}\) given by the expression 11. Thus,
\[K_{p_{i}}^{R_{0}}=\frac{\partial R_{0}}{\partial p_{i}}\times\frac{p_{i}}{R_{0 }},\quad\text{with}\ \ p_{i}:i=1,\cdots,n \tag{29}\]
Figure 4: Human population dynamics in relation to infectious, and recovered humans when \(R_{0}<1\)
\(p_{i}\) denotes each parameter involved in \(R_{0}\).
According to Table 1, and the equation 29, we plot sensitivity index of each parameter with respect to the \(R_{0}\) given by equation 11. Sensitivity indices on \(R_{0}\) are showed on the the following bar chart
All parameters of the model are assumed to be non-negative, let the total population be 100,000. Hence most sensitive parameter is being the highest positive index. On Figure 5 we can list \(\beta,\ h\) and \(\varepsilon\). This implies that increasing these parameters causes \(R_{0}\) to increase by 100%. Thus, since \(R_{0}\) continues to be higher, the epidemic of disease infection tends to occur. On the other hand, the parameters whose sensitivity is negative (\(\gamma,\ \tau\) and \(\mu\)) have a highest negative impact on \(R_{0}\). For the following, a descriptive analysis as well as the linear adjustment of our model will be carried out.
## 5 Conclusion
In this paper, a deterministic mathematical model for the transmission dynamics of TB in Burundi has been carried out to study the stability of both disease-free and endemic equilibrium point. Using matrix generation approach, the basic reproduction number \(R_{0}\) was computed. Therefore, the disease-free equilibrium of the model obtained is both locally and globally stable for \(R_{0}\leq 1\). It is also shown that the endemic equilibrium solution of the model is globally asymptotically stable if \(R_{0}>1\). Sensitivity Analysis has showed that \(\beta,\ h\) and \(\varepsilon\) are being the highest positive index.
Figure 5: Bar chart for sensitivity
Using data from Ministry of Public Health and the Fight against AIDS, parameters of the model have been identified and the model is shown to describe TB dynamic in Burundi from 1997 to 2019 (annual data) and from 2011 to 2019 (quarterly data).
The results of this research support the idea that in sub-Saharan Africa people in general and in Burundi in particularly should be strongly encouraged to seek a diagnosis of TB, and the success rate the treatment must be correlated with the population of infectious diagnosed. Then, the population of diagnosed infectious diseases and the number of TB-related deaths will decrease.
|
2308.01478 | Large Interferometer For Exoplanets (LIFE): XI. Phase-space synthesis
decomposition for planet detection and characterization | A mid-infrared nulling-space interferometer is a promising way to
characterize thermal light from habitable planet candidates around Sun-like
stars. However, one of the main challenges for achieving this ambitious goal is
a high-precision stability of the optical path difference (OPD) and amplitude
over a few days for planet detection and up to a few weeks for in-depth
characterization. Here we propose a new method called phase-space synthesis
decomposition (PSSD) to shorten the stability requirement to minutes,
significantly relaxing the technological challenges of the mission. Focusing on
what exactly modulates the planet signal in the presence of the stellar leak
and systematic error, PSSD prioritizes the modulation of the signals along the
wavelength domain rather than baseline rotation. Modulation along the
wavelength domain allows us to extract source positions in parallel to the
baseline vector for each exposure. The sum of the one-dimensional data converts
into two-dimensional information. Based on the reconstructed image, we
construct a continuous equation and extract the spectra through the singular
value decomposition (SVD) while efficiently separating them from a long-term
systematic stellar leak. We performed numerical simulations to investigate the
feasibility of PSSD for the LIFE mission concept. We confirm that multiple
terrestrial planets in the habitable zone around a Sun-like star at 10 pc can
be detected and characterized despite high levels and long durations of
systematic noise. We also find that PSSD is more robust against a sparse
sampling of the array rotation compared to purely rotation-based signal
extraction. Using PSSD as signal extraction method significantly relaxes the
technical requirements on signal stability and further increases the
feasibility of the LIFE mission. | Taro Matsuo, Felix Dannert, Romain Laugier, Sascha P. Quanz, Andjelka B. Kovacevic, LIFE collaboration | 2023-08-03T00:07:59Z | http://arxiv.org/abs/2308.01478v1 | # Large Interferometer For Exoplanets (LIFE):
###### Abstract
Context:A mid-infrared nulling-space interferometer is a promising way to characterize thermal light from habitable planet candidates around Sun-like stars. However, one of the main challenges for achieving this ambitious goal is a high-precision stability of the optical path difference (OPD) and amplitude over a few days for planet detection and up to a few weeks for in-depth characterization (depending on mission parameters such as aperture size, number of apertures and total instrument throughput).
Aims:Here we propose a new method called phase-space synthesis decomposition (PSSD) to shorten the stability requirement to minutes, significantly relaxing the technological challenges of the mission.
Methods:Focusing on what exactly modulates the planet signal in the presence of the stellar leak and systematic error, PSSD prioritizes the modulation of the signals along the wavelength domain rather than baseline rotation. Modulation along the wavelength domain allows us to extract source positions in parallel to the baseline vector for each exposure. The sum of the one-dimensional data converts into two-dimensional information. Based on the reconstructed image, we construct a continuous equation and extract the spectra through the singular value decomposition (SVD) while efficiently separating them from a long-term systematic stellar leak.
Results:We performed numerical simulations to investigate the feasibility of PSSD for the Large Interferometer For Exoplanets (LIFE) mission concept. We confirm that multiple terrestrial planets in the habitable zone around a Sun-like star at 10 \(pc\) can be detected and characterized despite high levels and long durations of systematic noise. We also find that PSSD is more robust against a sparse sampling of the array rotation compared to purely rotation-based signal extraction. Using PSSD as signal extraction method significantly relaxes the technical requirements on signal stability and further increases the feasibility of the LIFE mission.
Conclusions:
## 1 Introduction
Since the Michelson stellar interferometer was mounted on the Hooker telescope and successfully measured the diameter of Betelgeuse in 1920 (Michelson & Pease 1921), ground-based interferometry has been widely used for optical, infrared, and radio astronomy (Beckers et al. 1990; Colavita & Wizniowich 2000; ten Brummelaar et al. 2005; ALMA Partnership et al. 2015). An image of the sky was also reconstructed based on the Van Cittert-Zernike theorem (Born & Wolf 1999). The Fourier transform of the filled U-V plane provides a two-dimensional image by rotating the baseline and changing its length.
Bracewell (1978) introduced the concept of nulling interferometry to search for exoplanet around nearby stars by introducing a \(\pi\) phase shift to one of the beams of a 2-beam interferometer. When the observed sky consists of a host star and multiple planets, this concept generates a \(\sin^{2}(\frac{\pi}{4}\mathbf{B}\cdot\mathbf{\theta})\) fringe pattern that can null the host star at the centre of the field-of-view and transmit light from an off-axis point source, such as a planet, where \(\lambda\) is the observing wavelength, \(\mathbf{B}\) is the baseline vector, and \(\mathbf{\theta}\) is the position vector on the sky. Rotating the baseline of the interferometer also modulates the signal of the off-axis source as a function of time, which can be leveraged for signal extraction purposes. Followed by the proposal of mid-infrared nulling interferometry, Angel et al. (1986) noticed that the mid-infrared wavelength range is useful for the characterization of temperate Earth-like planets. This is because of the relatively low contrast between the planet and its host star compared to that observed in the visible wavelength range. In addition, \(CH_{4}\) and \(O_{3}\) are atmospheric biosignatures that have strong absorption bands in the same wavelength range (e.g., Des Marais et al. 2002; Fujii et al. 2018). A combination of these two studies led to the construction of a concept for remotely measuring the activity of primitive life on distant planets through detecting a variety of \(CH_{4}\) and \(O_{3}\), named _Darwin_ (Leger et al. 1996). Darwin was an ESA-led concept and similar parallel activities were going on on the US side
in the context of Terrestrial Planet Finder-Interferometer (TPF-I; Lawson et al., 2008).
Angel & Woolf (1997) used a cross-correlation technique to efficiently find the modulated signal while rotating the baseline, leveraging Bracewell's idea. A nuller consisting of four apertures was also introduced to obtain a fourth-order null of the host star. Furthermore, Mennesson & Mariotti (1997) proposed five collectors to suppress modulation of the exoczodical light during baseline rotation, keeping the fourth-order null. Instead, Velusamy et al. (2003) mentioned the advantage of a dual Bracewell interferometer consisting of two equivalent second-order nullers, which overcomes the ambiguity of the planet positions with a phase chop. We note that a phase shifter (\(\pi/2\) for nulling interferometers) is introduced to one of the two beams, and the two states are formed by inserting or removing the phase shifter. In addition, the subtraction of the two chopped states can separate symmetric components, including stellar leakage and background light, from the off-axis point sources. Finally, both the TPF-I and Darwin mission concepts favored a dual Bracewell interferometer (Cockell et al., 2009). Although TPF-I and Darwin were anticipated to detect and characterize the thermal emissions from Earth-like planets for the first time, they were postponed indefinitely due to the technical difficulties.
However, stellar leak is more sensitive to the optical path difference (OPD) and low-order aberrations in the second-order null than in the fourth-order null (e.g., Hansen et al., 2022). Lay (2004) quantified the systematic errors generated from the fluctuation of the null depth, which could obscure the modulated planet signal over the baseline rotation. The ideal phase chop technique can successfully remove most of the systematic noise errors, and only the first-order phase error and the cross-term of phase and amplitude errors remain in the demodulated signal. To identify an Earth-like planet around a Sun-like star at 10 \(\mu m\), the OPD and amplitude perturbations need to be stabilized to 1.5 \(nm\) and 0.1%, respectively (Lay, 2004). Because observing such a planet with a signal-to-noise ratio of seven requires an integration time of a few days, according to preliminary analyses for the LIFE mission presented in Dannert et al. (2022), the 1.5-\(nm\) OPD stability requirement holds for the same period. This imposes strict requirements on the formation flight and the optical beam transport and combination system. Lay (2006) proposed to stretch the aspect ratio of the rectangular four collector array of the Double Bracewell interferometer to remove instrumental noise induced by the systematic effects from the data to mitigate the requirements by a factor of 10. The method utilized different behaviors between the planet and instability noise achieved by stretching the ratio between the nulling and imaging baseline to 1:6. While this relaxes the requirements on the null stability, the streching of the baselines requires a more fuel consumption compared to baseline rotation.
Instead of using the modulated signals of off-axis objects during baseline rotation, Matsuo et al. (2011) proposed a method for estimating the positions of off-axis objects and obtaining their spectra from a few baselines, focusing on the modulation of the signal along the wavelength domain. This method requires only relative stability among the wavelength channel across the observed spectrum instead of stability of the null depth during the baseline rotation. In addition, when the number of baselines is larger than that of the detectable objects in the field of view of the interferometer, one can effectively separate planet signals from long-term fluctuations caused by systematic effects. The extended method optimizes for continuously rotating the baseline instead of fixing baselines, which could further mitigate the stability requirements of a space-based nulling interferometer. The present study is complementary to developments of the formation flying interferometry (e.g., Hansen et al., 2022; Matsuo et al., 2022) and ground-based nulling experiments (e.g., Ertel et al., 2020; Ranganathan et al., 2022). These efforts provide a support for the Large Interferometer for Exoplanets (LIFE), which represents a science theme that was recognized as one out of three potential science themes for a future L-class mission in the Voyage 2050 of the European Space Agency. Based on the heritage of Darwin and TPF-I, but leveraging most recent scientific and technological developments, LIFE will directly detect and characterize the thermal light from habitable planet candidates. LIFE could detect 25 - 45 terrestrial planets in the habitable zone around nearby F-, G-, K-, and M-type stars under conservative assumptions (Quanz et al., 2022; Kammerer et al., 2022).
In the following we propose the phase-space synthesis decomposition (PSSD) method for extracting the planet signal. This method could mitigate the rigorous requirements imposed on the nulling space interferometer, which is complementary to ongoing technological demonstrations for LIFE. Section 2 provides a brief overview of PSSD and its mathematical explanation. We perform a numerical simulation to investigate the feasibility of the method using the LIFE simulator (Dannert et al., 2022) in Section 3. Section 4 discusses the limitation of PSSD and its advantages and disadvantages. We conclude with our main findings in Section 5.
## 2 Concept
This section provides an overview of PSSD and then introduces its procedures from planet detection to spectral characterization. The analytical equations constructed for PSSD explain the two processes, planet detection and characterization, in detail.
### Overview
PSSD is divided into two processes: (1) search for the planet signal, and (2) measurement of the planet spectrum. Both steps require a continuous baseline rotation and the same operation as the previous method that extracts the modulated signal through the cross-correlation or the maximum likelihood of the data obtained while rotating the baseline (e.g., Angel & Woolf, 1997; Dannert et al., 2022). However, the previous method and PSSD differ in how the planetary positions are reconstructed. The previous process transforms interferometric signals collected by spinning the baseline into the planet position by fitting the modulation in both the azimuth and wavelength domains simultaneously. However, focusing on the fact that the spectrum of a G-type star is smoothly distributed at mid-infrared wavelengths (e.g., Husser et al., 2013), a one-dimensional image in parallel to baseline is first formed by correlation of the interferometric signal only along the wavelength domain, which basically has the same characteristics as Fourier transform of the signal in the same direction (Matsuo et al., 2011). Because the wavelength dependence of the stellar leak is largely different from that of the planet signal, the partial correlation only along the wavelength domain can decompose the stellar leak and planet signal. After spinning the baseline, summing over one-dimensional images, instead of a cross-correlation of the signal in the azimuth domain, transforms a set of one-dimensional positional information into two-dimensional positional information. Thanks to the partial correlation only in the wavelength domain, an impact of a long-term systematic error on image reconstruction can be avoided.
PSSD receives both the advantages of the two previous methods, the cross-correlation method (Angel and Woolf, 1997) and Fourier transform of the signal along the wavelength domain (Matsuo et al., 2011). While the corss-correlation method increases the signal-to-noise ratio as much as possible, the latter efficiently decomposes the stellar leak and planet signal. PSSD combines the two methods by employing a local cross-correlation of the signal only along the wavelength domain, instead of the full cross-correlation in both the wavelength- and time-domains. Generally, local (or segmented) cross-correlation is used when cross-correlating data in smaller segments. We can directly compare small sections of two arrays of data by cross-correlating corresponding segments, allowing for a more localised analysis. This method is very useful when analyzing complex astronomical phenomena with fluctuations or patterns in various portions of compared signals (e.g., Kovacevic et al., 2018).
Thanks to the combination of the two previous methods, PSSD provides three advantages in terms of planet detection. First, PSSD could shorten the required stability duration from a few days to a few minutes. Second, PSSD could also mitigate the impact of the limited number of baselines on search for the planet signal. Third, PSSD could have robustness against a larger OPD fluctuation. Utilizing the advantages of the planet detection process, PSSD also develops a method for extracting the planet spectrum embedded in the stellar leak.
Regarding the first advantage, the required stability duration could be shortened from a few days to a few minutes because we do not use the correlation of the planet signals collected while rotating the baseline. This is equivalent to the period for obtaining the two-phase chop states. We note that the period of switching between the two-phase chop states is determined such that the slow change of the background can be fully sampled (Absil et al., 2003). PSSD only requires relative stability along the wavelength (i.e. among the spectral data), rather than the stability of signals received while turning the baseline. The continuous and wide wavelength range obtained from space, such as 4 to 18.5 \(\mu m\) for the LIFE observatory, realizes this alternative approach. We note that what type of object (e.g. Jovian planet or terrestrial planet) orbits the host star is unclear in the planet detection phase because the light is integrated over the entire wavelength range (4 to 18 \(\mu m\) for the LIFE mission) in this phase. Regarding the second advantage, because PSSD reconstructs a one-dimensional image from one imaging baseline, two-dimensional positionalal information can be extracted from fewer baselines. In other words, PSSD has more resistance against a limited number of data collected during baseline rotation than the previous cross-correlation technique. In terms of the last advantage, a fluctuation of OPD during baseline rotation does not correlate with a modulation of the planet signal along the wavelength domain. Instead, the OPD error contributes to the observing data as a noise because the stellar leak is inversely proportional to approximately the fourth power of wavelength. PSSD can detect the planet light unless the modulation of the planet signal is embedded in the stellar leak. Thus, PSSD is more robust against a large OPD error in terms of planet detection.
Next, the planetary spectra are derived based on the positional information of the planets. Because we calculate the modulation of planet light while rotating the baseline based on the information of the estimated planet position, the planet light for each spectral channel can be extracted by fitting the data through the singular value decomposition (SVD) method. However, because the modulation of the planet signal during baseline rotation is used for the reconstruction of the planet spectrum, the reconstructed spectrum is more affected by a long-term systematic error compared to planet detection. As a result, the large contrast between the stellar leak and a temperate planet at short wavelengths prevents us from precisely reconstructing the planet spectrum in the same wavelength range (Section 4.1). Before applying the data to the SVD method, the stellar leak has to be subtracted from the data if the stellar leak is much brighter than the planet light. On the other hand, it is difficult to measure the OPD change because the number of available photons is very limited at the nulled output.
Here we find that the stellar leak induced by the systematic OPD error could be measured from the data at short wavelengths. Because warm and temperate planets are much fainter than the stellar leak in the short wavelength range, only the stellar leak mainly contributes to the data. Since there is no a strong chromatic aberration in the optical system, thanks to reflectors constructing the optical system of LIFE, the stellar leak may be able to be expressed as a function of wavelength. For example, when the stellar leak is induced by the OPD error, the stellar leak is inversely proportional to approximately the fourth-power of wavelength. Since there are a large volume of the data collected during baseline rotation, we could estimate the wavelength dependence of the stellar leak using a simple model, such as an exponential function. We note that modeling the wavelength dependence of the stellar leak is already performed in the data reduction pipeline of GRAVITY (e.g., GRAVITY Collaboration et al., 2020). Once the wavelength dependence is derived, the modeled stellar leak could be extrapolated to a longer wavelength range. The modeled stellar leak is subtracted from the data and is not, in principle, contaminated in the reconstructed spectra. Because the stellar leak is much weaker at the wavelengths longer than 10 \(\mu m\), the planet spectrum is less affected even if there exists a chromatic aberration.
We note, however, that if a Jovian planet close to its host star exists in the observing object, the planet is brighter than the stellar leak even a short wavelengths. Subtracting the bright planet from the data is required for modeling the wavelength dependence of the stellar leak at short wavelengths. Thanks to the long imaging baseline, an inner planet can be spatially resolved from the host star. Because both the position and spectrum of the inner planet are obtained through PSSD, we can estimate how the planet signal is modulated during baseline rotation and subtract it from the data.
We also need to emphasize that PSSD is validated only for objects having smoothed spectra such as a Planck function. If the spectra have sine components, the positions of the objects are shifted from the true positions. The reconstructed spectra are systematically affected by the wrongly estimated positions. Since the low-dispersion spectra of the atmospheres in exoplanets are close to a Planck function, the systematic shifts are not considered in this study. We note that the reflected light of Europa from the Sun in optical does not affect the position on the reconstructed image in spite of the spectrum including a number of lines (Matsuo et al., 2022).
Here we overview the concrete data reduction of this method. The process of the planet detection consists of the following five steps.
1. Subtract the two chop states for each baseline to obtain the sine component of the complex visibility.
2. Extract the modulated signals of the off-axis point sources along the wavelength (see Section 2.2).
3. Repeat processes [1] and [2] during rotation of the baseline. This process is done for each set of the two-phase chop states.
4. Transform a set of reconstructed one-dimensional images into a two-dimensional image (i.e. phase-space synthesis).
5. Search for planet light in the reconstructed two-dimensional image and measure the planet position if it exists.
The characterization process is as follows:
1. Perform procedure [1] with longer integration time.
2. Model the wavelength dependence of the stellar leak from the collected data at short wavelengths (e.g., 4 to \(6~{}\mu m\)) if the stellar leak is much brighter than the planet light due to a large OPD error.
3. Subtract the stellar leak from the collected data after extrapolating the stellar leak model constrcuted in process [7] to the long wavelength range.
4. Construct a matrix equation of the following form from the set of collected data: \(O=RI\), where \(O\) is the observable vector, \(R\) is the response function, and \(I\) is the vector of the input sky.
5. Solve the matrix equation using the SVD method to extract the planet spectrum (i.e. phase-space decomposition).
The phase information is summed for planet detection through processes [1] to [5]. In contrast, the phase information is decomposed in the planet characterization phase through procedures [6] to [10]. If the stellar leak is not much brighter than the planet light at short wavelengths, processes [7] and [8] can be skipped. As shown in Section 3.2, when the systematic OPD RMS error is \(0.75~{}nm\), corresponding to the standard requirement of LIFE (Dannert et al. 2022), the planet spectra can be precisely extracted without processes [7] and [8]. Although processes [1] and [6] are equal in data reduction, the required integration time is different. Because the planet light is integrated over the entire wavelength range in the reconstructed image through processes [1] - [5], the required integration time for planet detection is much shorter than that for planet spectrum obtained through processes [6] - [10].
We explain the planet detection and characterization in Sections 2.2 and 2.3, respectively, constructing equations for PSSD.
### Search for planet signal (phase-space synthesis)
When a dual-Bracewell nulling interferometer with a \(\frac{\pi}{2}\) phase chop observes the sky, the observed two-chop states in the unit of photoelectrons are as follows (e.g., Beichman & Velusamy 1999; Matsuo et al. 2011)
\[O_{\pm}(\lambda) = \frac{1}{2}\int\int d^{2}\theta I(\lambda,\mathbf{\theta})\sin^{2} \left(\frac{\pi}{\lambda}\mathbf{b}\cdot\mathbf{\theta}+\delta l_{n}\right)\] \[\times \left\{1\pm\sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta} +\delta l_{i}\right)\right\},\]
where \(\mathbf{\theta}\) is the position vector in the sky, \(\mathbf{b}\) and \(\mathbf{B}\) are the nulling and imaging baseline vectors, \(I(\lambda,\mathbf{\theta})\) is the signal without the effect of sky transmission caused by the dual-Bracewell nulling interferometer at wavelength of \(\lambda\), and \(\delta l_{i}\) and \(\delta l_{n}\) are the optical path differences of the imaging and nulling baselines, respectively. The (\(+\)) and (\(-\)) notations indicate the two chop states. In Equation 1, we assumed that the two nulling baselines for the dual-Bracewell nulling interferometer have the same optical path difference error, \(\delta l_{n}\), for simplicity.
The planetary system is the sum of the host star \(I_{\lambda}(\lambda,\mathbf{\theta})\), multiple planets \(N_{p}\), \(\sum_{k}^{n}I_{p,k}\), local zodiacal light, \(I_{E_{\lambda}}(\lambda)\), and the exozodiacal light \(I_{E_{\lambda}}(\lambda,\mathbf{\theta})\). The spectrally resolved signal for the planetary system is written as
\[O_{\pm}(\lambda) = \frac{1}{2}\int\int_{\Omega_{\pm}}d^{2}\theta I_{\pm}(\lambda, \mathbf{\theta})\sin^{2}\left(\frac{\pi}{\lambda}\mathbf{b}\cdot\mathbf{\theta}+\delta l _{n}\right)\] \[\times \left\{1\pm\sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta} +\delta l_{i}\right)\right\}\] \[+ \frac{1}{2}\sum_{k}^{N_{p}}I_{p,k}\left(\lambda,\mathbf{\theta}_{p,k} \right)\Omega_{p,k}\sin^{2}\left(\frac{\pi}{\lambda}\mathbf{b}\cdot\mathbf{\theta} +\delta l_{n}\right)\] \[\times \left\{1\pm\sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta} +\delta l_{i}\right)\right\}\] \[+ \frac{1}{2}\int\int_{\Omega_{\pm}}d^{2}\theta\left(I_{E_{\lambda }}(\lambda)+I_{\rm e}(\lambda,\mathbf{\theta})\right)\sin^{2}\left(\frac{\pi}{ \lambda}\mathbf{b}\cdot\mathbf{\theta}+\delta l_{n}\right)\] \[\times \left\{1\pm\sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta} +\delta l_{i}\right)\right\},\]
where \(\Omega_{\pm}\) and \(\Omega_{p,k}\) are the solid angles of the host star and the \(k\)-th planet, respectively, \(\Omega_{fov}\) is the field of view of the interferometer, and \(\theta_{p,k}\) is the position vector of the \(k\)-th planet. As shown above in step [1], the demolatuated signal is given by
\[O(\lambda) = O_{+}(\lambda)-O_{-}(\lambda)\] \[= \frac{1}{2}\int\int_{\Omega_{\pm}}d^{2}\theta I_{\pm}(\lambda, \mathbf{\theta})\sin^{2}\left(\frac{\pi}{\lambda}\mathbf{b}\cdot\mathbf{\theta}+\delta l _{n}\right)\] \[\times \sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta}+\delta l _{i}\right)\] \[+ \frac{1}{2}\sum_{k}^{N_{p}}I_{p,k}\left(\lambda,\mathbf{\theta}_{p,k} \right)\Omega_{p,k}\sin^{2}\left(\frac{\pi}{\lambda}\mathbf{b}\cdot\mathbf{\theta} _{p,k}+\delta l_{n}\right)\] \[\times \sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta}+\delta l_{i }\right)\] \[+ \frac{1}{2}\int\int_{\Omega_{fov}}d^{2}\theta I_{\rm ez}( \lambda,\mathbf{\theta})\sin^{2}\left(\frac{\pi}{\lambda}\mathbf{b}\cdot\mathbf{\theta} +\delta l_{n}\right)\] \[\times \sin\left(\frac{2\pi}{\lambda}\mathbf{B}\cdot\mathbf{\theta}+\delta l_{i }\right),\]
where the local zodiacal light was assumed to be removed from the demodulated signal because of its symmetrical structure. If the host star is perfectly positioned at the center of the field of view (FOV), the stellar leak disappears in Eq. 3 and contributes only as shot noise. Now we move to step [2].
There are two approaches for step [2]: the cross-correlation method (Angel & Woolf 1997) and the Fourier transform (Matsuo et al. 2011). While the former focuses on the modulated signal while rotating the baseline, the latter uses the modulated one along the wavelength domain for each baseline. Here, we combine the advantages of the two methods. We use the correlation method (Angel & Woolf 1997) to extract the signal correlated to the modulation of the planet along the wavelength and derive the planet position for each baseline. After rotation of the baseline, the positions of the planets are obtained. We employ a rectangular array configuration with a baseline ratio of 6:1 based on the baseline of the LIFE mission concept (see Figure 1). The parameters of the configuration are the same as those used for the numerical simulations in Section 3. The configuration is optimized to maximize the throughput of the habitable zone around a G-type star at 10 \(pc\).
Given that the position vector of the correlated signal is \(\mathbf{\alpha}_{corr}\) (i.e. the two-dimensional position of the signal), the positional
information reconstructed from the \(j\)-th imaging baseline vector, \(\mathbf{B}_{j}\), is
\[M_{corr,j}(\alpha_{j})=\Sigma_{i}^{N_{j}}O(\lambda_{i})\sin\left(\frac{2\pi}{ \lambda_{i}}\mathbf{B}_{j}\cdot\mathbf{\alpha}_{corr}\right)\sin^{2}\left(\frac{\pi}{ \lambda_{i}}\mathbf{b}_{j}\cdot\mathbf{\alpha}_{corr}\right), \tag{4}\]
where \(\lambda_{i}\) is the \(i\)-th spectral element, and \(N_{i}\) is the number of elements. \(M_{corr,j}\) tells us about the position of the correlated signal, \(\alpha_{j}\), projected to the baseline vector, \(\mathbf{B}_{j}\). As shown in Equation 3, \(O(\lambda_{i})\) is equal to the sum of the stellar leak, planet signals, and background components. Each component has a different spectrum energy distribution (see Figure 3(b)).
In order to explain how PSSD works, we perform a simulation under a simple condition that an Earth-like planet is positioned at 1 \(AU\) parallel to the \(x\)-axis (panel (a) of Figure 2). When the azimuths of the imaging baseline are 0 and 45\({}^{\circ}\) (panel Fig. 2 (b)), two one-dimensional images parallel to the imaging baseline vector are generated (panels (c) and (d) of Figure 2). When the azimuth of the imaging baseline is 0\({}^{\circ}\), the planet light is nulled for the rectangular array because the nulling baseline is parallel to the \(x\)-axis (panel Fig. 2(c)). In fact, the peak value of the planet is almost 0. In contrast, for the azimuth angle of 45\({}^{\circ}\), the planet light is extracted at 1 \(AU\) of the \(x\) axis (panel Fig. 2 (d)). The peak value is a much larger than that of the nulled planet. The data obtained by the rectangular array also has information on the planet position in the direction perpendicular to the imaging baseline, thanks to the nulling baseline. The planet position along the nulling baseline is weakly constrained for each angle of the imaging baseline. Thus, both the imaging baseline and the nulling baseline can be utilized for planet detection.
Because the stellar leak caused by the OPD error is inversely proportional to approximately \(\lambda^{-4}\), the first term of \(O(\lambda_{i})\) in Equation 3 does not correlate with \(\sin\left(\frac{2\pi}{4}\mathbf{B}_{j}\cdot\mathbf{\alpha}_{corr}\right)\sin^{2} \left(\frac{\pi}{\lambda_{i}}\mathbf{b}_{j}\cdot\mathbf{\alpha}_{corr}\right)\) along the wavelength domain. The OPD error does not have less influence on the reconstruction of one-dimensional images, compared to the previous method that extracts the modulated signal through the cross-correlation of the data collected while rotating the baseline. In other words, PSSD has robustness against a large OPD fluctuation, which is the third advantage of PSSD, as discussed in Section 2.1.
The two-dimensional positional information is obtained by summing the one-dimensional images collected while rotating the baseline:
\[M_{corr}(\mathbf{\alpha}_{corr})=\Sigma_{j}^{N_{j}}M_{corr,j}(\alpha_{j}), \tag{5}\]
where \(N_{j}\) is the number of collected baselines. Panel (e) shows the two-dimensional image reconstructed through Equation 5. The pixel value at the planet position on the two-dimensional image corresponds to the sum of the pixel values at the same position on the one-dimensional images. Because PSSD focuses on the signal modulated by the wavelength for each baseline instead of one modulated by the rotation of the baseline, PSSD is less affected by the long-term systematic noise than the purely rotation-based signal extraction, which is the first advantage of PSSD discussed in Section 2.1. The required stability duration could be shortened to a few minutes, corresponding to the period for obtaining the two-phase chop states.
PSSD is not also less impacted by a sparse U-V sampling. We discuss how the limited number of collected baselines affects the reconstructed two-dimensional image in Section 4.2.
We also note the relationship between the correlation and Fourier transform methods. The Fourier transformation of the demodulated signal along the wavelength gives a one-dimensional image of the sky in parallel to the \(j\)-th imaging baseline vector:
\[M_{FT,j}(\alpha_{FT,j})=\int d\left(\frac{1}{\lambda}\right)O(\lambda)\sin \left(\frac{2\pi}{\lambda}|\mathbf{B}_{j}\alpha_{FT,j}\right)\sin^{2}\left(\frac{ \pi}{\lambda}|\mathbf{b}_{j}\alpha_{FT,j}\right), \tag{6}\]
where \(\alpha_{FT}\) is the one-dimensional coordinate system in parallel to the \(j\)-th imaging baseline vector. The origin of the coordinate system is the center of the field of view. Comparing Equation 6 with Equation 5, we found that both approaches are analytically equal if the sky consists of multiple point sources. A continous source can be reconstructed only through Fourier transform of the interferometric signal (i.e. complex visibility). However, focusing on the fact that the transmission pattern of the sky induced by the nulling baseline can be utilized to increase the signal-to-noise ratio of the planet detection, the Fourier transform method requires the condition that the imaging and nulling baselines are aligned for better planet detection. In contrast, the correlation method can be applied to any telescope configuration. Thus, the correlation method would be more utilized for planet detection, compared to the Fourier transform method (Matsuo et al. 2011).
### Extraction of planet spectrum (phase-space decomposition)
Once the planet light is successfully detected in the reconstructed two-dimensional image, we can estimate the planet spectrum. We calculate how the planet light is modulated while rotating the baseline based on the two-dimensional planet positional information. The planet signal for each spectral channel can be extracted from the data collected during baseline rotation. However, because the long-term fluctuation of OPD correlates with the modulation of the planet signal during baseline rotation, the reconstruction of the planet spectrum is more easily affected by a long-term OPD error. In other words, the characterization of the planet light is more challenging than planet detection. Therefore, if the stellar leak is much brighter than planet signal, the bright stellar leak has to be subtracted before extracting the planet signal from the data.
Here, as introduced in Section 2.1, how the stellar leak changes while rotating the baseline could be measured from the data at short wavelengths. This is because the signals of warm and temperate planets except for a hot Jupiter are negligible compared to the stellar leak at short wavelengths. If we confirm from the reconstructed image that the stellar leak mainly contributes to the data at short wavelengths, the wavelength dependence of the stellar leak can be modeled in the same wavelength range. After the stellar leak model is extrapolated to the longer wavelength range, the stellar leak is subtracted from the demodulated signal shown in Equation 3.
There are mainly two systematic error terms, the first-order phase error and the cross-term of phase and ampli
Figure 1: Rectangular array used for this study. Each filled circle represents a \(2m\)-diameter telescope. The ratio of the imaging to nulling baselines is 6:1.
tude, in the demodulated signal. Because the demodulated signal of the stellar leak in Equation 3 is characterized by \(\sin^{2}\left(\frac{\pi}{4}\mathbf{b}\cdot\mathbf{\theta}+\delta l_{n}\right)\sin\left( \frac{2\pi}{4}\mathbf{B}\cdot\mathbf{\theta}+\delta l_{l}\right)\), the wavelength dependence of the stellar leak could be simply expressed by a power law of wavelength under the condition that the systematic errors are much smaller than wavelength:
\[I_{\rm{test}}(\lambda,t)=\left\{a\left(\frac{\lambda}{\lambda_{0}}\right)^{ \alpha}+b\left(\frac{\lambda}{\lambda_{0}}\right)^{\beta}\right\}I_{\rm{test} }(\lambda_{0},t), \tag{7}\]
where \(\lambda_{0}\) is the reference wavelength, and \(I_{\rm{test}}(\lambda,t)\) is the stellar leak model in the demodulated signal at the wavelength \(\lambda\) as a function of time, \(t\). We note, however, that different wavelength dependencies may exist in the long wavelength range because a coating dispersion error or a pupil shear error could impact at long wavelengths. Because the coating dispersion and pupil shear errors drastically decreases in the longer wavelength regime, the subtraction of the estimated stellar leak model from the demodulated signal would less impact the reconstructed planet spectrum at long wavelengths. We also note that thanks to the bright stellar leak at short wavelengths, the systematic aberration could be modeled from the intensity and wavelength dependence of the stellar leak, which is a similar work as measuring coronagraphic low-order aberrations (Guyon et al. 2009).
After subtracting the stellar leak from the demodulated signal shown in Equation 3, the planet spectra are reconstructed from the residual data through the SVD method. The matrix equation for the \(i\)-th spectral element can be written as
\[O_{i}=R_{i}I_{i}, \tag{8}\]
where \(O\) is the vector composed of the observed data, \(R\) is the matrix of the response function of objects (i.e. sky transmission of object), and \(I\) is the vector of the input sky. In order to reconstruct the spectra of the planets, we solve the matrix equation for each spectral channel.
After the observed data is subtracted from the averaged value over the baseline rotation, the \(O\) vector of the \(i\)-th spectral element is
\[O_{i}=\left(\begin{array}{c}O_{i,1}\\ O_{i,2}\\ \vdots\\ O_{i,N_{j}}\end{array}\right), \tag{9}\]
where \(O_{i,j}\) is the observation data of the \(j\)-th azimuth angle for the \(i\)-th spectral element. The number of the elements for the \(O\) vector is \(N_{j}\), corresponding to the number of the collected data during baseline rotation. The response matrix for the \(i\)-th spectral element, \(R_{i}\), is written as
\[R_{i}=\left(\begin{array}{ccc}R_{1,1}&\ldots&R_{1,N_{p}}\\ \vdots&&\vdots\\ R_{N_{j},1}&\ldots&R_{N_{j},N_{p}}\end{array}\right), \tag{10}\]
where we assumed that the continuum component is removed from the observed vector, \(O_{i}\), by the subtraction of the two chop states. The \(R\) matrix is a \(N_{j}\times N_{p}\) matrix. Each component is as follows:
\[R_{1,1} = \sin^{2}\left(\frac{\pi}{\lambda_{i}}\mathbf{b}_{1}\cdot\mathbf{\theta}_ {p,1}\right)\sin\left(\frac{2\pi}{\lambda_{i}}\mathbf{B}_{1}\cdot\mathbf{\theta}_{p,1}\right) \tag{11}\] \[R_{1,N_{p}} = \sin^{2}\left(\frac{\pi}{\lambda_{i}}\mathbf{b}_{1}\cdot\mathbf{\theta}_ {p,N_{p}}\right)\sin\left(\frac{2\pi}{\lambda_{i}}\mathbf{B}_{1}\cdot\mathbf{\theta}_ {p,N_{p}}\right)\] \[R_{N_{j},1} = \sin^{2}\left(\frac{\pi}{\lambda_{i}}\mathbf{b}_{N_{j}}\cdot\mathbf{ \theta}_{p,1}\right)\sin\left(\frac{2\pi}{\lambda_{i}}\mathbf{B}_{N_{j}}\cdot\mathbf{ \theta}_{p,1}\right)\]
The vector of the input sky for the \(i\)-th spectral element is
\[I_{i}=\left(\begin{array}{c}I_{p,1}(\lambda_{i},\mathbf{\theta}_{p,1})\\ I_{p,2}(\lambda_{i},\mathbf{\theta}_{p,2})\\ \vdots\\ I_{p,N_{p}}(\lambda_{i},\mathbf{\theta}_{p,N_{p}})\end{array}\right). \tag{12}\]
The number of elements for the \(I\) vector is \(N_{p}\). When the number of the observed data is much larger than that of the elements for the input matrix, the \(N_{p}\) planet signals for each spectral element can be decomposed by the SVD method.
Finally, we need to emphasize that there are several ways to decompose the planet signals and unknown stellar leaks under the condition that the planets are successfully detected. This
Figure 2: Procedure of image reconstruction. Panel (a) shows the planet position for this simulation. The planet is positioned at \(1\,AU\) in parallel to the \(x\) axis, where \((x,y)\) is the coordinate system of the object plane in the unit of \(AU\).
decomposition could be solved using modified orthogonal projections, or kernels, such as used in (Laugier et al., 2020). They preserve important properties of the covariance matrix of errors, and this decomposition therefore is well suited for further data whitening approaches.
We could also combine PSSD with wavelet-based signal reconstruction methods (e.g., del Ser et al., 2018), either by suppressing wavelet coefficients at different levels, corresponding to unwanted signals, or convolving them with specially design convolution kernels. It is expected that while the low-frequency signal is mainly caused by the systematic OPD residual, the mid- to high-frequency signals are caused by off-axis point sources and stochastic noises. Knowing the models of the systematic errors and stochastic noises makes it possible to suppress sections of the wavelet decomposition connected to these signals and then, using the inverse wavelet transform, rebuild the planet signal. Reconstruction of the signal will depend on the cadence and its level with respect to noise. With advanced techniques we could expect signal recovery even its contribution is up to \(\sim 10-30\%\).
## 3 Simulations
We performed numerical simulations to check the feasibility of PSSD under the LIFE baseline scenario. First, we briefly explain the simulation setup regarding the target system and instrument. Next, we show the results generated by PSSD under the ideal condition where only the astronomical noise contributes to the data as shot noise. Finally, we include a long-term systematic OPD error in the simulations and show its impact on PSSD.
### Setup
The distance of the considered target is 10 \(pc\). The target system consists of three Earth-sized (\(R_{\oplus}\)) planets, a Sun-like star with a Sun radius (\(R_{\odot}\)) and an effective temperature of 5778 \(K\), and an exozodiacal dust disk. The semi-major axes of the three planets are 1, 0.73, and 1.5 \(AU\), which are the same as those of Earth, Venus, and Mars. Given that the effective temperatures of the three planets are simply proportional to the inverse square root of the semi-major axis, the temperatures of planets P1, P2, and P3 were set to 285, 330, and 232 \(K\), respectively. All of the target objects were assumed to emit blackbody radiation. The orbital phases of the three planets were set to 0, -45, and 90\({}^{\circ}\). The phase angle of 0\({}^{\circ}\) points along the positive \(x\)-axis, where \((x,y)\) is the coordinate system of the sky in the unit of \(AU\). The arrangement of the three planets is shown in Figure 3(a). The exozodiacal light is equal to three times that of the solar system (Ertel et al., 2020). The surface brightness of the exozodiacal light is generated based on the previous model (Kennedy et al., 2015), which is applied to the software tool, LIFEsim (Dannert et al., 2022). Table 1 compiles all the parameters of the target system.
We employ a dual-Bracewell nulling interferometer with imaging and nulling baselines of 87.3 \(m\) and 14.55 \(m\) (see Figure 1) so that the maximum of transmission is achieved for the center of the habitable zone around a Sun-like star at 10 \(pc\) at a wavelength of 15 \(\mu m\)(Quanz et al., 2022). The diameter of each telescope is 2 \(m\), and the imaging and nulling baselines are perpendicular to each other. Although the observing wavelength ranges from 4 to 18.5 \(\mu m\), the same as the LIFE baseline, we limited the wavelength range to larger than 8 \(\mu m\) in the planet detection phase. The reason is that the bright stellar leak is more than 100 times brighter than the light of the planets we consider at short wavelengths and deteriorates the performance of PSSD. We note that the shorter wavelengths are effective in looking for inner planets because of the combination of higher spatial resolution and brighter planets in that wavelength range. PSSD needs to optimize the wavelength range used for planet detection based on what type of planets we find.
We set the resolving power of the spectrum to 50 for both planet detection and their characterization. The minimum resolving power for planet detection is determined by the required field of view. When the resolving power is 50, the field of view is 1.14 arcsecond at 10 \(\mu m\). And for the characterization a spectral resolution of 30-50 was suggested in Konrad et al. (2022) in order to detect the various molecules in an Earth-twin atmosphere. The total throughput was set to 0.035, given that the instrument throughput and quantum efficiency are 0.05 and 0.7, respectively. The integration time was set to 55 \(h\)\(for\) planet detection and 75 days for planet characterization, respectively.
We do not assume a continuous rotation but a discrete rotation in steps of one degree, where the spacecraft come to a halt before rotating again by one degree, because of computational cost. We note that the continuous rotation provides a better reconstruction thanks to the continuous U-V coverage compared to the discrete rotation. Assuming that the baseline rotates by 360\({}^{\circ}\) at a one-degree interval, the integration time for each baseline is 550 and 180,000 seconds for planet detection and characterization phases, respectively. In addition to the ideal observing case, we studied the feasibility of PSSD under systematic OPD error. Although there are mainly the first-order phase and the phase-amplitude cross-term in the demodulated signal (Lay, 2004), only the former was considered in this simulation. We note, however, that this simulation on the feasibility of PSSD is not largely impacted by the phase-amplitude cross-term because the two systematic components have a similar frequency dependence of \(\frac{1}{f}\), called "pink noise", where \(f\) is the frequency. The root-mean-square (RMS) of the OPD error was set to 0.75 \(nm\), corresponding to the LIFE baseline scenario for the case of only phase error (Dannert et al., 2022). The baseline value is larger than that for the case of both phase and amplitude errors. We also consider 5, 10, and 15 times the baseline values, 3.8, 7.5, and 11.3 \(nm\) RMS errors, to investigate the limitation of PSSD in Section 4. Table 2 compiles all the instrumental parameters.
Figure 3(b) shows the astronomical signals obtained by the Bracewell nulling interferometer under the above-observed conditions. The stellar leak and background, such as the local zodiacal and exozodiacal light, cover the modulations of the three planets. When there is no OPD error, all astronomical signals contribute to the data as the shot noise. We perform the numerical simulations under the ideal condition in Section 3.2.1 and then consider the fluctuation of the stellar leak due to the systematic OPD error in Section 3.2.2.
### Results
#### 3.2.1 Ideal condition
We first show the feasibility of PSSD under the ideal condition, in which only shot noise exists due to astronomical sources. After collecting data while rotating the baseline by 360\({}^{\circ}\) in steps of 1 degree, we generated a two-dimensional image through processes [1] to [4]. Figure 4(a) shows the reconstructed two-dimensional images for an integration time of 55 \(h\). The signal-to-noise ratios for the detection of planets under the ideal condition are compiled in Case 1 of Table 3. We successfully detected signals of planets P1 and P2 with signal-to-noise ratios of 10.8 and 14.6, respectively. The higher temperature of planet P2 allows us to obtain higher signal-to-noise ratio compared to that
of planet P1. In contrast, the signal-to-noise ratio of planet P3 is only 3.6 because of its lower temperature. We require a longer integration time to achieve the signal-to-noise ratio of five for planet P3.
The signal-to-noise ratio is defined as the ratio of the planet signal to the starrdard deviation at the same angular distance as its planet. In order to calculate the standard deviation, a two-dimensional image without the planet signals is generated and divided into annular rings. The noise floor is calculated as the standard deviation for each annular ring. We note that the absolute value of the signal-to-noise ratio cannot be directly compared with that calculated by Dannert et al. (2022) because PSSD reconstructs both the signal and noise in a different way.
Next, we reconstructed spectra of the three planets through processes of [6], [9] and [10], assuming that the planet positions are correctly obtained. Figures 4(b), (c), and (d) show the reconstructed spectra of the three planets. The spectra of planet P1 and planet P2 are consistent with the input spectra (solid gray lines). We also derived for each data-point the average and standard deviation by performing the numerical simulations 100 times. The signal-to-noise ratio of the spectrum for planet P1 agrees with that of the previous study (Konrad et al., 2022). LIFE could detect the methane and ozone absorption bands at 7.6 and 9.6 \(\mu m\) with signal-to-noise ratios of approximately 5 and 15, respectively. This combination of simultaneously detected absorption features is thought to be a good indicator of a non-equilibrium atmosphere caused by biological activity on the planet (Kasting et al., 2014).
On the other hand, the signal-to-noise ratio worsens at shorter wavelengths than 7.5 \(\mu m\). This is because the stellar leak drastically increases due to the narrower null pattern on the sky in the shorter wavelength range. The planet signals rapidly decrease at the same time (see Figure 3(b)). In addition, PSSD also obtained the entire spectrum of planet P3 but at a signal-to-noise ratio lower than three except for wavelengths longer than 10 \(\mu m\) due to its faintness compared to the other planets. Table 4 compiles the signal-to-noise ratios of the reconstructed spectra at wavelengths of 5, 7.5, 10, and 15 \(\mu m\).
Although planet P3 was not detected for an integration time of 55 \(h\), the signal of planet P3 could be obtained while integrating the data in the characterization phase, and its position would be well determined. Planets P1 and P2 were also detected with higher signal-to-noise ratios compared to those in the planet detection phase, which can reduce the systematic errors of the reconstructed spectra due to the estimation errors of the planet positions.
Thus, we confirmed that PSSD could detect the planet signals and characterize their atmospheres under the ideal condition, in which the data is affected only by the shot noise due to the stellar leak, background, and planets.
#### 3.2.2 Systematic error
We consider instrumental noise and investigate its negative impact on planet detection and characterization. In order to evaluate it, we included an OPD error in the numerical simulations as the instrumental noise. We note, however, that the phase-amplitude cross-term also contributes to the long-term stellar fluctuation in the demodulated signal under the existence of the amplitude error (Lay, 2004). Because both systematic components have the same spectrum in terms of the time domain (e.g., Dannert et al., 2022), the phase-amplitude cross-term would not significantly impact the results. We assumed that the systematic noises have a dependency of \(\frac{1}{f}\), where \(f\) shows the frequency. According to Dannert et al. (2022), when the OPD RMS error is larger than 0.75 \(nm\), the instrumental noise is dominant over the statistical noise (i.e. fundamental noise) from the astronomical objects at the shortest wavelength. Because the systematic OPD error is much smaller than the observing wavelength, the amount of stellar leak is proportional to the OPD error for each spectral element. The OPD error impacts the null depth and leaves the stellar leak in the subtraction of the two chop states (Equation 3). While the former contributes to the data as the Poisson noise, the latter affects the planet signal during baseline rotation, which prevents us from reconstructing the planet spectrum. In our simulations, we added the same OPD error to the imaging and nulling baselines to reduce the calculation cost.
Figure 5(a) compares the spectra of the three planets with the stellar leak left after subtracting the two chop states in the
Figure 4: Image and spectrum reconstruction under photon-noise limited condition. (a) Reconstructed two-dimensional image. The white arrows denote positions of planet P1, planet P2, planet P3, respectively. The integration time was set to 55 \(h\). The unit of the color bar is the number of photoelectrons. Reconstructed spectra of three planets (b) P1, (c) P2, and (d) P3 for an integration time of 75 days. The grey line and grey vertical bar of each panel show the input model and the standard deviation of each data point derived through 100 numerical simulations, respectively.
Figure 3: Target planetary system. (Left) Configuration of three planets. (Right) Signals of planet P1 (black), planet P2 (blue), planet P3 (green), the nulled host star (red), local zodiacal light (brown), and exozodiacal light (purple) with a resolving power of 50 per a unit of time. Target system and instrument parameters are compiled in Tables 1 and 2.
entire wavelength range (i.e. 4 - 18.5 \(\mu m\)). The stellar leak drastically increases at the shorter wavelengths because of the shallower null depth and the planet signal drops instead. It is more challenging to perform planet detection and characterization at shorter wavelengths than longer ones. Panels (b), (c), and (d) of Figure 5 compare the modulated signal of planet P1 with the stellar leak at 4, 8, and 12 \(\mu m\) as a function of the azimuth of the imaging baseline. Although the systematic OPD RMS error of 0.75 \(nm\) does not affect the planet signal at 12 \(\mu m\), the stellar leak covers the planet signal at 4 \(\mu m\), which is consistent with Figure 5(a).
Figure 6(a) shows a reconstructed two-dimensional image under the condition that the systematic OPD RMS error is 0.75 \(nm\). Thanks to robustness of PSSD against a long-term OPD error, we obtained the same signal-to-noise ratios for detecting the three planets as those for the ideal state.
Finally, we reconstructed the planet spectra over the entire wavelength based on processes of [6], [9], and [10] without subtraction of the stellar leak from the demodulated signal in processes [7] and [8]. Panels (b), (c), and (d) of Figure 6 compare the reconstructed spectra of the three planets with the input models. The reconstructed spectra are consistent with the models over the entire wavelength range, except for the shorter range. Comparing the reconstructed spectra with the ideal case, we found that the shot noise limits the performance of PSSD in the wavelength range longer than 7.5 \(\mu m\). In addition, even though the systematic stellar leak is a few times brighter than the planet signals (Figure 5(a)), the signal-to-noise ratios are almost the same as those for the reconstructed spectra under the ideal condition (Case 2 of Table 4). This is because the SVD method efficiently separates the mid- to high-frequency components induced by planets from the low-frequency component due to the systematic stellar leak. Thus, the SVD method could reconstruct the spectra of the three planets from the modulations of the planet signals during baseline rotation while efficiently separating them from the long-term systematic stellar leak.
## 4 Discussion
We have confirmed thus far that PSSD detects planet light and extracts planet spectra under the existence of OPD fluctuation, which follows \(\frac{1}{7}\). In this section we investigate how much noise amplitude PSSD can endure (Section 4.1) and compare PSSD with the previous method (Section 4.2).
### Robustness against a large OPD error
We set the systematic OPD RMS error to 3.8, 7.5, and 11.3 \(nm\), which are equal to the baseline values multiplied by factors of 5, 10, and 15, respectively. Figures 7(b), (c), and (d) show the reconstructed images for these three different OPD errors. As the OPD error increases, systematic patterns are brighter in the centre region of each image. In contrast, the noise floor is limited by the shot noise at semi-major axes larger than 1.0 \(AU\) (panel Fig. 7(d)). As discussed in Section 3.2.1, the noise floor was calculated for a reconstructed two-dimensional image without the planet signals.
We derived the signal-to-noise ratios of the three planets for each OPD RMS error. Although the signal-to-noise ratio is gradually worsened as the OPD error increases, the signal-to-noise ratios of the planets are higher than 5 except for planet 3 (Cases 2 - 4 of Table 3). Thus, PSSD could successfully detect the inner two planets even under systematic OPD errors 15 times larger than the LIFE baseline requirement. In addition, because the shot noise limits the detection of planet 3, PSSD could also detect planet 3 with a longer integration time.
Next, we reconstructed the spectra of the three planets for an OPD RMS error of 7.5 \(nm\) by solving the matrix equation with the SVD method (process [9]; see left panels of Figure 8). However, the OPD RMS error of 7.5 \(nm\) deforms the spectra of planet P1 and planet P2 at the shorter wavelengths than 10 \(\mu m\) and the spectrum of planet P2 over the entire wavelength range. This is because the planet spectra are reconstructed from the long-term modulation of the signal while rotating the data, which correlates with the systematic OPD error. Thus, characterizing the planet atmosphere is much more affected by the long-term OPD error, compared to planet detection.
Figure 5: Systematic stellar noise. Panel (a) shows comparison of signals of the three planets P1 (black), P2 (blue), and P3 (green) with the nulled stellar leaks left in the subtraction of the two chop states over the entire wavelength range. The red, brown, gray, and light gray lines represent the stellar leaks for systematic OPD RMS errors of 0.75, 3.8, 7.5, and 11.3 \(nm\). Panels (b), (c), and (d) show fluctuation of the stellar leak due to an OPD RMS error of 0.75 \(nm\) (red) and the demodulated signal of planet P1 (black) at wavelengths of 4, 8, and 12 \(\mu m\), respectively. The integration time of each data point is 550 \(s\).
Figure 6: Same as Figure 4 except for a systematic OPD RMS error of 0.75 \(nm\).
Here, we utilize the advantages of PSSD in terms of planet detection. Because a two-dimensional image can be reconstructed even for a large OPD error, we can investigate what kind of objects orbits the host star. Once we confirm that the stellar leak is dominant over the planet signals at short wavelengths, we could measure the long-term fluctuation of the stellar leak from the data at short wavelengths (process [7]) and subtract the stellar leak from the demodulated signal (process [8]). There are several steps to reconstruct the planet spectrum. First, we apply a low pass filter to the demodulated signal to decrease the impact of statistical noise on the data. Second, the wavelength dependence of the stellar leak is estimated from a large number of the data points collected during baseline rotation. Finally, based on the estimated wavelength dependence, we extrapolate the stellar leak model to the longer wavelength range and subtract it from the demodulated signal shown in Equation 3 over the entire wavelength. The planet spectra are reconstructed through applying the subtracted data to the SVD process. Panels (b), (d), and (f) of Figures 8 show the reconstructed spectra of the three planets through the above process for a large OPD RMS error of 7.5 \(nm\), corresponding to ten times larger than the baseline requirement of LIFE. In this simulation, the wavelength range applied to estimation of the wavelength dependence was 4 - 5.5 \(\mu m\), in which the stellar leak is more than one-hundred times larger than planet signals for an OPD RMS error of 7.5 \(nm\) (Figure 5(a)). The signal-to-noise ratios for the reconstructed spectra (Case 4 of Table 4) are almost the same as those under the ideal condition (Case 1 of Table 4). Thus, if we confirm from the reconstructed image that only the stellar leak mainly contributes to the demodulated signal at short wavelengths, the planet spectra could be reconstructed.
### Comparison with previous method
We compare PSSD with the previous signal extraction through the cross-correlation or the maximum likelihood of the data obtained while rotating the baseline (e.g., Angel & Woolf 1997; Dannert et al. 2022). The main difference between the two methods is whether a one-dimensional image from each baseline is first reconstructed or a two-dimensional image is reconstructed at one time. PSSD performs the correlation of planet signal among the obtained spectrum and converts one-dimensional information into a two-dimensional image. In contrast, the previous method simultaneously finds the modulation of the planet signal in both the wavelength- and time-domains. As discussed in Section 2.1, the advantages of PSSD are robustness against a large OPD error and a limited number of baselines.
Panels (a), (b), and (c) of Figure 9 show the reconstructed two-dimensional images through the previous cross-correlation method. The images were formed based on the following Equation:
\[M_{corr}(\mathbf{\alpha}_{corr})=\Sigma_{j}^{N_{j}}\Sigma_{i}^{N_{i}}O(\lambda_{i} )\sin\left(\frac{2\pi}{\lambda_{i}}\mathbf{B}_{j}\cdot\mathbf{\alpha}_{corr}\right) \sin^{2}\left(\frac{\pi}{\lambda_{i}}\mathbf{b}_{j}\cdot\mathbf{\alpha}_{corr}\right). \tag{13}\]
Compared with Figure 7, the OPD error induces brighter systematic patterns at semi-major axes smaller than 1.0 \(AU\), which prevent us from detecting the two inner planets. As shown in panel (d) of Figure 9, the intensity of the systematic pattern is roughly proportional to the OPD RMS error in the inner region. We note that the noise floor under the baseline requirement of LIFE is almost equal to that for a systematic OPD RMS error of
Figure 8: Spectrum reconstruction under large systematic noise. Panels (a), (c), and (e) show reconstructed spectra of planet P1, planet P2, and planet P3 only through the SVD process. Panels (b), (d), and (f) show the spectra of planet P1, planet P2, and planet P3 reconstructed through subtracting the stellar leak from the demodulated signal before the SVD method. The OPD RMS error was set to 7.5 \(nm\) for both the two cases.
Figure 7: Image reconstruction under large systematic noises. Panels (a), (b), and (c) show reconstructed images for systematic OPD RMS errors of 3.8, 7.5, and 11.3 \(nm\), respectively. Panel (d) shows the noise floors for systematic OPD RMS errors of 3.8 (black), 7.5 (yellow), and 11.3 \(nm\) (brown) were compared with the planet signals (star symbol). The noise floor for each OPD error was calculated for a reconstructed two-dimensional image without the planet signals. The gray line shows the noise floor for the ideal case (i.e., only the shot noise) as a reference. Because the reconstructed planet signals are slightly affected by the OPD error, the black, yellow, and brown star symbols represent the planet signals for OPD RMS errors of 3.8, 7.5, and 11.3 \(nm\), respectively.
0.75 \(nm\), which is consistent to the previous study (Dannert et al. 2022). The signal-to-noise ratios for the detection of planets with PSSD under an OPD RMS error of 7.5 \(nm\) (Case 2 of Table 3) are almost the same as those for the cross-correlation method under the ideal case (Case 5 of Table 3). Therefore, there exists a large difference between the robustness against a large OPD error.
We also compare PSSD with the cross-correlation method in terms of the impact of a limited number of baselines on planet detection. The left panels of Figure 10 show the PSSD reconstructed images with a limited number of baselines and under the existence of only long-term systematic error (without shot noise). In other words, the reconstructed image is not influenced by the integration time. We randomly selected available baselines, which are more sparsely distributed over 360\({}^{o}\) as the number of baselines decreases. The systematic pattern is slightly brighter as the number of baselines decreases. However, the three planets could be detected even though the fraction of the available baselines is limited to only 8 %. In contrast, images reconstructed using the previous method were largely affected by the limited number of baselines because the modulation in the time domain was lost (the right panels of Figure 10). In addition, the artificial pattern fully covers the three planets if the number of baselines is limited to 8 %, and the limited azimuth coverage elongates the point sources, which means that the long-term systematic error modulates the data collected while rotating the baseline.
Thus, PSSD is much less impacted by both a large systematic error and a limited number of available baselines compared to the previous method, which could relax the requirement of LIFE.
## 5 Conclusion
We proposed a method for planet detection and characterization with future nulling space interferometers, such as large interferometer for exoplanets (LIFE). The proposed method is named "phase-space synthesis decomposition (PSSD)." PSSD focuses on the correlation of the planet signal over the entire wavelength range instead of that along the baseline rotation. Because a one-dimensional image parallel to the baseline can be derived for each baseline, a large number of one-dimensional images are collected after rotating the baseline. A two-dimensional image can be reconstructed by summing over the one-dimensional images. Once the two-dimensional image is obtained, a continuous equation is constructed based on the planet position information, and its solution through singular value decomposition (SVD) allows us to extract the planet spectra embedded in the stellar fluctuation. As long as the modulation of the planet signal has a different frequency from the stellar fluctuation, the SVD method efficiently decomposes the stellar leak and planet signal. PSSD provides three advantages in planet detection compared to previous methods that find a modulation of the planet signal during baseline rotation in both the wavelength- and time-domains. One is robustness against a large systematic OPD error. Because the stellar leak has a different wavelength dependence from the planet signal, only the correlation of the signal efficiently decomposes the stellar leak and the planet signal. The second is that PSSD does not correlate with a long-term fluctuation of the stellar leak because a two-dimensional image is formed by summing over one-dimensional images that require only two-phase chop states. The third advantage is robustness against a limited number of baselines.
We performed numerical simulations to investigate the feasibility of PSSD under various conditions. We put three terrestrial planets with semi-major axes of 0.73, 1, and 1.5 \(AU\), corresponding to those of Venus, Earth, and Mars, respectively, around a Sun-like star at 10 \(pc\). The simulation included both statistical and systematic noises. PSSD successfully detected the three planets and reconstructed their spectra for an OPD RMS error of 0.75 \(nm\), which is the same as the baseline requirement of LIFE.
Figure 10: Images reconstructed through PSSD (left panels) and the previous method (right panels) under the condition that the number of baselines is limited by (a)(b) 100 %, (c)(d) 28 %, and (e)(f) 8 %. Only a long-term systematic error was included in the simulations; shot noise was not considered while investigating how the limited number of the baselines affects the image reconstruction. The OPD RMS error was set to 3.8 \(nm\).
Figure 9: Same as Figure 7 except for using the cross-correlation method. The noise floor derived under the LIFE baseline requirement (dashed line) is added as a reference.
We also confirmed that PSSD has robustness against a large systematic OPD error. We increased the amplitude of the systematic OPD error by a factor of 5, 10, and 15. PSSD successfully detected the three planets almost without being affected by the stellar leak, even under the largest systematic OPD error of 11.3 \(nm\). This is because the stellar leak is inversely proportional to approximately the fourth power of wavelength, which is largely different from the modulation of the planet signal in the wavelength domain.
In contrast, the reconstructed spectra were more affected by the long-term systematic noise than planet detection. This is because PSSD uses the modulation of the planet signal during baseline rotation to reconstruct the spectra of the three planets. The long-term systematic OPD error more easily correlates with the modulation of the planet signal. The spectra could not be accurately extracted under an OPD RMS error of 7.5 \(nm\), corresponding to ten times the baseline requirement of LIFE. The signal-to-noise ratio significantly decreases in the wavelength range shorter than 7.5 \(\mu m\).
Here, focusing on the fact that the planet signals are much smaller than the stellar leak at shorter wavelengths than 6 \(\mu m\), we can measure the fluctuation of the stellar leak and its wavelength dependence for the data at short wavelengths. Because PSSD can successfully obtain the planet signals even under a large systematic noise, PSSD utilizes the planet position information and tells us what type of sources contribute to the signal at short wavelengths. If we confirm from the reconstructed two-dimensional image that the stellar leak is dominant over the planet signals at short wavelengths, the wavelength dependence of the stellar leak can be modeled from the data in the short wavelength range. After extrapolating the estimated stellar leak model to the longer wavelength range, the stellar leak could be subtracted from the data over the entire wavelength range. The spectra of the three planets were successfully reconstructed by applying the subtracted data to the SVD process. The signal-to-noise ratios were almost the same as those for the ideal condition.
Finally, we compared PSSD with a previous method that reconstructs a two-dimensional image by simultaneously fitting the modulation of the planet in the time- and wavelength domains after the baseline is rotated. Because the long-term noise is correlated with the planet signal in the time domain, systematic patterns were formed in the reconstructed image and covered the planet signals under systematic OPD errors larger than 3.8 \(nm\). The signal-to-noise ratio for planet detection significantly decreased for large OPD errors compared to PSSD. This is because the noise floor increased in the inner region due to the systematic OPD error. In addition, limited azimuth coverage in the U-V plane impacted planet detection because the modulation in the time domain was lost. In contrast, PSSD can reconstruct a two-dimensional image from fewer baselines. Even in the case where the azimuth coverage of the baseline is limited to 8 %, the three planets could be discovered by PSSD.
Thus, PSSD is more robust against a large OPD error and a limited number of baselines, which could relax the requirements of LIFE regarding the OPD error and the stability duration. However, this numerical simulation was performed as the first step under an ideal case that makes detection of terrestrial planets easier. We will investigate various impacts not considered in this study, such as asymmetric exozodiacal structure (Defere et al., 2010), and the other systematic errors, on planet detection and characterization as the next step.
###### Acknowledgements.
We express our sincere gratitude to Dr. Lacour for many valuable comments and suggestions on this study. Part of this work was supported by JST FOREST Program, Grant Number JPMJFR202W. Part of this work has been carried out within the framework of the National Centre of Competence in Research Planet's Supported by the Swiss National Science Foundation under grants S17W4-082901-8 and S11NF40-20566. FD and SPQ acknowledge the financial support of the SNSF. Part of this work was carried out within the project SCIFY, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement CoG- 866070). A.B.K. acknowledges funding provided by University of Belgrade-Faculty of Mathematics (the contract 451-03-47/2023-01/200104), through the grants by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia.
|
2306.09588 | Understanding the Role of Feedback in Online Learning with Switching
Costs | In this paper, we study the role of feedback in online learning with
switching costs. It has been shown that the minimax regret is
$\widetilde{\Theta}(T^{2/3})$ under bandit feedback and improves to
$\widetilde{\Theta}(\sqrt{T})$ under full-information feedback, where $T$ is
the length of the time horizon. However, it remains largely unknown how the
amount and type of feedback generally impact regret. To this end, we first
consider the setting of bandit learning with extra observations; that is, in
addition to the typical bandit feedback, the learner can freely make a total of
$B_{\mathrm{ex}}$ extra observations. We fully characterize the minimax regret
in this setting, which exhibits an interesting phase-transition phenomenon:
when $B_{\mathrm{ex}} = O(T^{2/3})$, the regret remains
$\widetilde{\Theta}(T^{2/3})$, but when $B_{\mathrm{ex}} = \Omega(T^{2/3})$, it
becomes $\widetilde{\Theta}(T/\sqrt{B_{\mathrm{ex}}})$, which improves as the
budget $B_{\mathrm{ex}}$ increases. To design algorithms that can achieve the
minimax regret, it is instructive to consider a more general setting where the
learner has a budget of $B$ total observations. We fully characterize the
minimax regret in this setting as well and show that it is
$\widetilde{\Theta}(T/\sqrt{B})$, which scales smoothly with the total budget
$B$. Furthermore, we propose a generic algorithmic framework, which enables us
to design different learning algorithms that can achieve matching upper bounds
for both settings based on the amount and type of feedback. One interesting
finding is that while bandit feedback can still guarantee optimal regret when
the budget is relatively limited, it no longer suffices to achieve optimal
regret when the budget is relatively large. | Duo Cheng, Xingyu Zhou, Bo Ji | 2023-06-16T02:27:41Z | http://arxiv.org/abs/2306.09588v1 | # Understanding the Role of Feedback in Online Learning with Switching Costs
###### Abstract
In this paper, we study the role of feedback in online learning with switching costs. It has been shown that the minimax regret is \(\widetilde{\Theta}(T^{2/3})\) under bandit feedback and improves to \(\widetilde{\Theta}(\sqrt{T})\) under full-information feedback, where \(T\) is the length of the time horizon. However, it remains largely unknown how the amount and type of feedback generally impact regret. To this end, we first consider the setting of bandit learning with extra observations; that is, in addition to the typical bandit feedback, the learner can freely make a total of \(B_{\text{ex}}\)_extra observations_. We fully characterize the minimax regret in this setting, which exhibits an interesting _phase-transition phenomenon_: when \(B_{\text{ex}}=O(T^{2/3})\), the regret remains \(\widetilde{\Theta}(T^{2/3})\), but when \(B_{\text{ex}}=\Omega(T^{2/3})\), it becomes \(\widetilde{\Theta}(T/\sqrt{B_{\text{ex}}})\), which improves as the budget \(B_{\text{ex}}\) increases. To design algorithms that can achieve the minimax regret, it is instructive to consider a more general setting where the learner has a budget of \(B\)_total observations_. We fully characterize the minimax regret in this setting as well and show that it is \(\widetilde{\Theta}(T/\sqrt{B})\), which scales smoothly with the total budget \(B\). Furthermore, we propose a generic algorithmic framework, which enables us to design different learning algorithms that can achieve matching upper bounds for both settings based on the amount and type of feedback. One interesting finding is that while bandit feedback can still guarantee optimal regret when the budget is relatively limited, it no longer suffices to achieve optimal regret when the budget is relatively large.
Machine Learning, Switching Costs, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Feedback, Feedback, Feedback, Feedback, Learning, Feedback, Feedback, Feedback, Feedback, Feedback, Feedback, Feedback, Feedback, Learning, Feedback,
under full information. Under full-information feedback, even with switching costs, the minimax regret remains \(\Theta(\sqrt{T\ln K})\), which can be achieved by several algorithms such as Shrinking Dartboard (SD) (Geulen et al., 2010) and Follow-the-Perturbed-Leader (FTPL) (Devroye et al., 2013). On the other hand, Dekel et al. (2013) shows a (worse) lower bound of \(\widetilde{\Omega}(K^{1/3}T^{2/3})\) for the bandit setting, which can be matched (up to poly-logarithmic factors) by the batched EXP3 algorithm (Arora et al., 2012). These results reveal that introducing switching costs makes bandit problems _strictly harder_ than expert problems due to the worse dependency on \(T\) (i.e., \(\widetilde{\Theta}(T^{2/3})\) vs. \(\widetilde{\Theta}(\sqrt{T})\)).
**Our Contributions.** While these two special cases have been well studied, it remains largely unknown how feedback impacts regret in general. To close this important gap, we aim to fundamentally understand the role of feedback (in terms of both amount and type) in online learning with switching costs. Our main contributions are as follows.
**(i)** We first consider the setting of bandit learning with extra observations, where in addition to the typical bandit feedback, the learner can freely make a total of \(B_{\text{ex}}\)_extra observations_ in an arbitrary form (Section 3). We present a tight characterization of the minimax regret, which exhibits an interesting _phase-transition phenomenon_ (see Fig. 1(a)). Specifically, when \(B_{\text{ex}}=O(T^{2/3})\), the regret remains \(\widetilde{\Theta}(T^{2/3})\), but when \(B_{\text{ex}}=\Omega(T^{2/3})\), it becomes \(\widetilde{\Theta}(T/\sqrt{B_{\text{ex}}})\), which improves as the budget \(B_{\text{ex}}\) increases.
**(ii)** To understand this phenomenon and design algorithms that can achieve the minimax regret, it is instructive to consider a more general setting where the learner has a budget of \(B\)_total observations_ (Section 4). We fully characterize the minimax regret in this setting as well and show that it is \(\widetilde{\Theta}(T/\sqrt{B})\), which scales smoothly with the total budget \(B\) (see Fig. 1(b)). Furthermore, we propose a generic algorithmic framework, which enables us to design different learning algorithms that can achieve matching upper bounds for both settings based on the amount and type of feedback.
**(iii)** Our findings highlight the crucial impact of feedback type (bandit vs. others) in the second setting (see Table 1). In particular, while both bandit and other types of feedback can achieve optimal regret when the budget is relatively limited, _pure bandit feedback is no longer sufficient to guarantee optimal regret when the budget is relatively large._ However, in the standard setting without switching costs, all three types of feedback we consider can achieve optimal regret in the full range of \(B\). This reveals that the impact of feedback type is (partly) due to switching costs.
## 2 Problem Setup
In this section, we introduce basic notations and present the problem setup. For any positive integer \(n\), let \([n]:=\{1,\dots,n\}\), and let \(\ell_{1:n}\) be the loss sequence \(\ell_{1},\dots,\ell_{n}\). We use \(\mathbb{I}_{\{\mathcal{E}\}}\) to denote the indicator function of event \(\mathcal{E}\): \(\mathbb{I}_{\{\mathcal{E}\}}=1\) if event \(\mathcal{E}\) happens, and \(\mathbb{I}_{\{\mathcal{E}\}}=0\) otherwise.
The learning problem can be viewed as a repeated game between a learner and an adversary. Assume that there are \(K>1\) actions the learner can choose. Let \(T\geq K\) be the length of the time horizon, which is fixed at the beginning of the game and is known to the learner. At each round \(t\in[T]\), the adversary assigns a loss in \([0,1]\) to each action in \([K]\); the learner samples an action \(X_{t}\) from a probability
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{**Feedback Type**} & \multicolumn{2}{c|}{**Minimax Regret**} \\ \cline{2-3} & w/ SC & w/o SC \\ \hline \multicolumn{2}{|c|}{Full-information} \\ \hline \multicolumn{2}{|c|}{Flexible} \\ \hline \multicolumn{2}{|c|}{Bandit} & \multicolumn{2}{c|}{\(\widetilde{\Theta}(T\sqrt{K/B})\)} \\ (\(B=O(K^{1/3}T^{2/3})\)) & \\ \hline \multicolumn{2}{|c|}{Bandit} & \multicolumn{2}{c|}{\(\widetilde{\Theta}(K^{1/3}T^{2/3})\)} \\ (\(B=\Omega(K^{1/3}T^{2/3})\)) & \\ \hline \end{tabular}
\end{table}
Table 1: The minimax regret under different types of feedback in the setting of online learning under a total observation budget \(B\): with (w/) vs. without (w/o) switching costs (SC). A formal description of “Flexible” feedback can be found in Section 4.2.
Figure 1: An illustration of the minimax regret vs. observation budget in log-log plots: (a) the learner receives bandit feedback plus no more than \(B_{\text{ex}}\) extra observations (Theorem 1); (b) the learner can make no more than \(B\) total observations (Theorem 2).
distribution \(w_{t}\) (also determined by the learner) over the action set \([K]\). After taking action \(X_{t}\), the learner suffers a loss of the chosen action, i.e., \(\ell_{t}[X_{t}]\). By the end of each round, the learner observes the loss of some actions (specific types of such feedback will be discussed later) and updates probability distribution \(w_{t+1}\) that will be used at the next round. Each time when the learner takes an action different from that at the previous round, one unit of switching cost is incurred. The _regret_ under a learning algorithm \(\pi\) over a loss sequence \(\ell_{1:T}\), denoted by \(R^{\pi}_{T}(\ell_{1:T})\), is defined as the difference between the cumulative loss (including the switching costs incurred) under algorithm \(\pi\) and that of the optimal (best fixed) action in hindsight:
\[R^{\pi}_{T}(\ell_{1:T})\!:=\!\sum_{t=1}^{T}\left(\ell_{t}[X_{t}]\!+\!\mathbb{I }_{\{X_{t}\neq X_{t-1}\}}\right)\!-\!\min_{k\in[K]}\sum_{t=1}^{T}\ell_{t}[k]. \tag{1}\]
For a randomized algorithm, we consider the _expected regret_ (or simply regret), denoted by \(\mathbb{E}\left[R^{\pi}_{T}(\ell_{1:T})\right]\), where the expectation is taken over the randomness of the algorithm. Without loss of generality, let \(\mathbb{I}_{\{X_{1}\neq X_{0}\}}=0\), i.e., the first action does not incur any switching cost. The adversary is assumed to be _oblivious_, in the sense that the whole loss sequence is determined by the adversary before the game begins. In this paper, for any given algorithm \(\pi\), we are interested in the _worst-case (expected) regret_ over all possible loss sequences (i.e., instance-independent), denoted by \(R^{\pi}_{T}\):
\[R^{\pi}_{T}:=\sup_{\ell_{1:T}\in[0,1]^{KT}}\mathbb{E}\left[R^{\pi}_{T}(\ell_{ 1:T})\right]. \tag{2}\]
Let \(\Pi\) be the set of all feasible learning algorithms following the specified learning protocol. We define the _minimax (or optimal) regret_, denoted by \(R^{\pi}_{T}(\Pi)\), as the minimum worst-case regret under all feasible learning algorithms in \(\Pi\):
\[R^{\pi}_{T}(\Pi):=\inf_{\pi\in\Pi}R^{\pi}_{T}. \tag{3}\]
For notational ease, we may drop \(\Pi\) in \(R^{\pi}_{T}(\Pi)\) and simply use \(R^{\pi}_{T}\) whenever there is no ambiguity.
To understand the role of feedback in online learning with switching costs, we will consider two different settings with an observation budget: (i) in addition to the typical bandit feedback, the learner can freely make a total of \(B_{\text{ex}}\) extra observations (Section 3); (ii) the learner can freely make \(B\) total observations (Section 4). Due to space limitations, in Appendix A we provide motivating examples for the settings with an observation budget we consider.
## 3 Bandit Learning with Switching Costs under Extra Observation Budget
Observing the gap in the optimal regret bound under bandit and full-information feedback (\(\widetilde{\Theta}(T^{2/3})\) vs. \(\widetilde{\Theta}(\sqrt{T})\)), it is natural to ask: _How much can one improve upon the \(\widetilde{\Theta}(T^{2/3})\) regret if the learner is allowed to make some extra observations in addition to the typical bandit feedback?_
Motivated by this question, we consider the setting of bandit learning with switching costs under an _extra_ observation budget. We consider the learning protocol specified in Section 2, and in addition to the typical bandit feedback, the learner is allowed to freely use at most \(B_{\text{ex}}\) extra observations of the loss of other action(s) throughout the game, where \(B_{\text{ex}}\) is an integer in \([0,(K-1)T]\). At the two endpoints of \(0\) and \((K-1)T\), this new setting recovers the bandit and full-information cases, respectively. In this section, by slightly abusing the notation, we also use \(\Pi\) to denote the set of all learning algorithms using typical bandit feedback plus \(B_{\text{ex}}\) extra observations, and we are interested in the minimax regret \(R^{\ast}_{T}\) for \(B_{\text{ex}}\in[0,(K-1)T]\).
### Minimax Regret
We first present our main result of the minimax regret \(R^{\ast}_{T}\) in this setting, which is formally stated in Theorem 1.
**Theorem 1**.: _In the setting of bandit learning with switching costs under an extra observation budget \(B_{\text{ex}}\in[0,(K-1)T]\), the minimax regret is given by_
\[R^{\ast}_{T}=\begin{cases}\widetilde{\Theta}(K^{1/3}T^{2/3}),&B_{\text{ex}}=O( K^{1/3}T^{2/3}),\\ \widetilde{\Theta}(T\sqrt{K/B_{\text{ex}}}),&B_{\text{ex}}=\Omega(K^{1/3}T^{2/3 }).\end{cases}\]
_Remark 1_.: Interestingly, this minimax regret exhibits a _phase-transition phenomenon_ (see, also, Fig. 1(a)): when the amount of extra observations is relatively small (i.e., \(B_{\text{ex}}=O(K^{1/3}T^{2/3})\)), they are insufficient for improving the regret, which remains \(\widetilde{\Theta}(K^{1/3}T^{2/3})\); however, when the amount is large enough (i.e., \(B_{\text{ex}}=\Omega(K^{1/3}T^{2/3})\)), the regret decreases smoothly as the budget \(B_{\text{ex}}\) increases.
### Lower Bound
To establish Theorem 1, we will first show a fundamental lower bound, which is formally stated in Proposition 1.
**Proposition 1**.: _For any learning algorithm \(\pi\) that can use a total of \(B_{\text{ex}}\) extra observations in addition to the typical bandit feedback, there exists a loss sequence \(\ell_{1:T}\) (which may depend on both \(\pi\) and \(B_{\text{ex}}\)) such that_
\[\mathbb{E}\left[R^{\pi}_{T}(\ell_{1:T})\right]\!=\!\begin{cases} \widetilde{\Omega}(K^{1/3}T^{2/3}),&B_{\text{ex}}=O(K^{1/3}T^{2/3}),\\ \widetilde{\Omega}(T\sqrt{K/B_{\text{ex}}}),&B_{\text{ex}}=\Omega(K^{1/3}T^{2/3 }).\end{cases}\]
We provide detailed proof of the above lower bound in Appendix B. Here, we present a proof sketch that mainly focuses on the key steps of the lower bound analysis with necessary explanations. The proof sketch reveals useful insights that not only help explain the interesting phase-transition phenomenon but also shed light on the design of algorithms that can achieve this lower bound.
Proof Sketch of Proposition 1.: We first give an overview of the construction of hard loss sequences in our setting and the main ideas behind the construction.
Generally speaking, the difficulty of bandit problems lies in the _exploitation-exploration_ tradeoff. On the one hand, the learner wants to pull empirically good actions in order to enjoy a low instantaneous loss (i.e., exploitation); on the other hand, she may also want to pull other actions and gain useful information to distinguish the optimal (best fixed) action and suboptimal actions (i.e., exploration).
In the presence of switching costs, Dekel et al. (2013) proposes hard instances (i.e., loss sequences) based on a _multi-scale random walk_ such that useful information toward distinguishability (between the optimal action and suboptimal actions) _can only be obtained when the learner switches actions_, which, however, incurs switching costs. Using carefully constructed instances, they show that switching costs increase the intrinsic difficulty of bandit learning and result in a regret lower bound of \(\bar{\Omega}(K^{1/3}T^{2/3})\).
However, the hard instances in Dekel et al. (2013) work for _pure bandit feedback_ only. That is, if the learner can obtain full-information feedback at _any_ of the \(T\) rounds, she would immediately identify the optimal action and suffer no regret in the rest of the game. The reason is that the optimal action has the (unique) lowest loss at _all_\(T\) rounds.
To make it still hard to learn even when the learner has some extra feedback, we will borrow an idea from Shi et al. (2022) to modify the original hard instance in Dekel et al. (2013): at each round, an additional layer of action-dependent noise is added to the loss of each action. As a result, the optimal action no longer has the lowest loss at all rounds and therefore cannot be trivially identified even when the learner can make extra observations.
In the rest of the proof sketch, we present three key steps of the proof and provide high-level explanations.
**Step 1: Establishing the relationship between two regrets.** As in Dekel et al. (2013), each loss value in the initial loss sequence we construct, denoted by \(\ell_{1:T}^{\mathrm{init}}\), may not be bounded in \([0,1]\); through truncation, we construct the actual loss sequence \(\ell_{1:T}\) by simply projecting each initial loss value onto \([0,1]\). For notational ease, we use \(R_{T}^{\mathrm{init}}\) and \(R_{T}\) to denote the regret over loss sequences \(\ell_{1:T}^{\mathrm{init}}\) and \(\ell_{1:T}\), respectively. Recall that the goal is to obtain a lower bound on \(\mathbb{E}\left[R_{T}\right]\), which, however, is hard to analyze directly due to the truncation. Instead, we show that it suffices to obtain a lower bound on \(\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\) (i.e., the regret under untruncated loss sequence), due to the following relationship:
\[\mathbb{E}\left[R_{T}\right]\geq\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]- \frac{\epsilon T}{6}, \tag{4}\]
where \(\epsilon>0\) is the gap between the instantaneous losses of the optimal action and a suboptimal action. The value of \(\epsilon\) will be determined later.
**Step 2: Obtaining a lower bound on \(\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\).** Let \(S\) be the expected total number of action switches. Through careful information-theoretic analysis, we obtain the following (informal) lower bound on \(\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\) in terms of the number of switches \(S\) and extra observation budget \(B_{\mathrm{ex}}\):
\[\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\geq\underbrace{\frac{\epsilon T} {2}}_{\mathbf{A.1}}-\underbrace{C\frac{\epsilon^{2}T}{\sqrt{K}}(\sqrt{S}+ \sqrt{B_{\mathrm{ex}}})}_{\mathbf{A.2}}+\underbrace{S}_{\mathbf{A.3}}, \tag{5}\]
where \(C\) is a positive term that contains some constants and poly-logarithmic terms of \(T\).
We now explain each term in Eq. (5). Term \(\mathbf{A.1}\) reflects that without any useful information toward distinguishability, the learner may be stuck with a suboptimal action throughout the game, thus suffering \(\Theta(\epsilon T)\) regret. Term \(\mathbf{A.2}\) roughly represents the amount of useful information for gaining distinguishability and thus reducing the regret: better distinguishability leads to a larger \(\mathbf{A.2}\) and thus a lower regret. Term \(\mathbf{A.3}\) is simply the switching costs incurred.
**Step 3: Choosing a proper value of \(\epsilon\).** Note that the lower bound in Eq. (5) is a quadratic function of \(\sqrt{S}\). By finding the minimizer of this quadratic function, denoted by \(S^{*}\), we can further obtain the following lower bound:
\[\mathbb{E}\left[R_{T}^{\mathrm{init}}\right]\geq\underbrace{\frac{\epsilon T }{2}}_{\mathbf{B.1}}-\underbrace{\frac{C^{2}}{4}\cdot\frac{\epsilon^{4}T^{2}} {K}}_{\mathbf{B.2}}-\underbrace{C\frac{\epsilon^{2}T\sqrt{B_{\mathrm{ex}}}}{ \sqrt{K}}}_{\mathbf{B.3}}. \tag{6}\]
It now remains to choose a proper value of \(\epsilon\) based on \(B_{\mathrm{ex}}\). By considering two different cases (\(B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})\) and \(B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})\)) and choosing \(\epsilon\) accordingly, we show that one of \(\mathbf{B.2}\) and \(\mathbf{B.3}\) dominates the other. Then, we can obtain the desired lower bound by combining these two cases. This completes the proof sketch.
_Remark 2_.: While we use the same instance construction method in Shi et al. (2022), the problem they study is very different from ours. In particular, their learning protocol and the definition of switching costs are different, and they do not consider an observation budget as we do. We present a detailed discussion about the key difference in Section 5.
### Insights from Lower Bound Analysis
Next, we give some useful observations and important insights that can be obtained from the above proof sketch, in particular, from Eq. (5), which provides a _unified_ view of the lower bound in online learning with bandit feedback and flexible extra observations within a budget.
As a warm-up, we begin with the standard bandit case (i.e., \(B_{\mathrm{ex}}=0\)), which has been extensively studied (Dekel et al.,
2013). Recall that under the current instance construction, bandit feedback provides useful information _only_ when the learner switches actions. From Eq. (5), one can observe that there is a tradeoff between exploration and switching costs: on the one hand, in order to better explore and enjoy a lower regret, the learner has to switch frequently (i.e., a larger \(S\)) so as to gain more information (i.e., a larger \(\mathbf{A.2}\)); on the other hand, however, since the learner has to pay one unit of switching cost for each switch (contributing to \(\mathbf{A.3}\)), she should not switch too often. To strike the balance between the two, the best the learner can do is to switch \(S^{*}:=\Theta(K^{1/3}T^{2/3})\) times; otherwise, the regret can only be worse because \(S^{*}\) is the minimizer of the lower bound in Eq. (5). Finally, choosing \(\epsilon\) to be \(\widetilde{\Theta}(K^{1/3}T^{-1/3})\) in Eq. (6) yields the \(\widetilde{\Omega}(K^{1/3}T^{2/3})\) bound for the bandit case.
_Remark 3_.: The above discussion indicates that with switching costs, the worst-case hard instance restrains the learner from obtaining distinguishability from more than \(\Theta(K^{1/3}T^{2/3})\) rounds (i.e., rounds associated with action switches) rather than \(T\) rounds as in the standard bandit learning setting (without switching costs). This is also the key reason why the minimax regret is worse in bandit learning with switching costs.
Next, we consider the first case: \(B_{\text{ex}}=O(K^{1/3}T^{2/3})\). In this case, one might hope to obtain a smaller regret (compared to the bandit case) with the help of additional feedback. However, we will show that unfortunately, the gain from those additional observations is negligible for improving the regret order-wise, and hence, the previous \(\widetilde{\Omega}(K^{1/3}T^{2/3})\) bound remains. To see this, let \(\epsilon\) take the same value as in the bandit case (i.e., \(\epsilon=\widetilde{\Theta}(K^{1/3}T^{-1/3})\)) in Eq. (6); although \(\mathbf{B.3}\) now becomes positive instead of zero (as in the bandit case), it is still dominated by \(\mathbf{B.2}\), which results in the same \(\widetilde{\Omega}(K^{1/3}T^{2/3})\) bound as in the bandit case.
We now turn to the second case: \(B_{\text{ex}}=\Omega(K^{1/3}T^{2/3})\). In contrast to the previous case, due to a relatively large budget, the distinguishability provided by those extra observations (which do not contribute to switching costs) is no longer negligible. This leads to a smaller regret. In particular, by choosing \(\epsilon=\widetilde{\Theta}(\sqrt{K/B_{\text{ex}}})\), we have \(\mathbf{B.3}\) dominate \(\mathbf{B.2}\) and obtain the desired lower bound. In other words, one can reduce the regret through free exploration enabled by such extra observations without incurring switching costs.
### Fundamental Questions about Algorithm Design
The above insights we gain from the lower bound analysis can also shed light on the algorithm design. In fact, these motivate us to ask several fundamental questions, not only about how to achieve optimal regret but also about the role of feedback in online learning with switching costs, in terms of both the amount and type of feedback.
On the one hand, it is straightforward to achieve a matching upper bound when \(B_{\text{ex}}=O(K^{1/3}T^{2/3})\). Specifically, one can simply ignore all the extra observations and use bandit feedback only, e.g., batched EXP3 (Arora et al., 2012), which enjoys a \(\widetilde{\Theta}(K^{1/3}T^{2/3})\) regret. Although the bounds match, only \(\Theta(T^{2/3})\) of the bandit feedback from the \(T\) rounds contribute to distinguishability due to the tradeoff introduced by switching costs (see Remark 3). Given this observation, it is natural to ask: _(Q1) Can one still achieve the same regret of \(\widetilde{\Theta}(K^{1/3}T^{2/3})\) while using bandit feedback from \(\Theta(K^{1/3}T^{2/3})\) rounds only? Moreover, how would regret scale with the amount of available feedback if the (bandit) feedback is even more limited (e.g., \(O(K^{1/3}T^{2/3})\))?_
On the other hand, it remains largely unknown how to match the \(\widetilde{\Omega}(T\sqrt{K/B_{\text{ex}}})\) bound when \(B_{\text{ex}}=\Omega(K^{1/3}T^{2/3})\). Note that in the derivation of the lower bound, we _optimistically_ view that _all_\(B_{\text{ex}}\) extra observations contribute to useful information toward distinguishability (see term \(\mathbf{A.2}\) in Eq. (5)). To achieve this, however, one needs to answer an important question: _(Q2) How to carefully design a learning algorithm that can properly use these extra observations to indeed gain sufficient useful information toward distinguishability and match the lower bound? Moreover, since \(B_{\text{ex}}\) now dominates \(S^{*}\) (order-wise), can one still match the lower bound of \(\widetilde{\Omega}(T\sqrt{K/B_{\text{ex}}})\) using \(B_{\text{ex}}\) extra observations only (i.e., not using any bandit feedback)?_
To address these fundamental questions, it turns out that it would be more instructive to consider a general setting where the learner has a budget for total observations (see Section 4) rather than extra observations. We will show that the results obtained for this general setting will naturally answer the aforementioned questions. In particular, we show that there exist learning algorithms that can match the lower bound (up to poly-logarithmic factors), hence concluding the minimax regret stated in Theorem 1.
## 4 Online Learning with Switching Costs under Total Observation Budget
In this section, we consider a more general setting of online learning with switching costs under a total observation budget. Specifically, at each round, the learner can freely choose to observe the loss of up to \(K\) actions (which may not necessarily include the action played), as long as the total number of observations over \(T\) rounds does not exceed the budget \(B\), which is an integer in \([K,KT]\). Without loss of generality, we assume \(B\geq K\). We aim to understand the role of feedback in this general setting by studying the following fundamental question: _(Q3) How does the minimax regret scale with the amount of available feedback in general? What is the impact of different types of feedback (bandit, full-information, etc.)?_
To proceed, we need some additional notations for this sec
tion. Let \(\mathcal{O}_{t}\subseteq[K]\) be the observation set, i.e., the set of actions whose loss the learner chooses to observe at round \(t\in[T]\), and let \(N_{\mathrm{ob}}\) be the total number of observations, i.e., \(N_{\mathrm{ob}}:=\sum_{t=1}^{T}|\mathcal{O}_{t}|\). Naturally, we have \(N_{\mathrm{ob}}\leq B\leq KT\). For example, bandit feedback is a special case with \(\mathcal{O}_{t}=\{X_{t}\},\forall t\in[T]\) and \(N_{\mathrm{ob}}=B=T\); full-information feedback is another special case with \(\mathcal{O}_{t}=[K],\forall t\in[T]\) and \(N_{\mathrm{ob}}=B=KT\). By slightly abusing the notation in this section, we also use \(R^{*}_{T}\) to denote the minimax regret over the set of all learning algorithms that satisfy the learning protocol specified in Section 2 and do not exceed the total observation budget \(B\).
### Minimax Regret
We first present the main result of this section and fully characterize the minimax regret for this general setting.
**Theorem 2**.: _In the setting of online learning with switching costs under a total observation budget \(B\in[K,KT]\), the minimax regret is given by \(R^{*}_{T}=\widetilde{\Theta}(T\sqrt{K/B})\)._
_Remark 4_.: This result answers the first part of question **(Q3)**: the minimax regret has a universal \(\Theta(1/\sqrt{B})\) scaling across the full range of total budget \(B\) (see Fig. 1 (b)), compared to the phase transition in Section 3 (see Fig. 1 (a)).
To establish this result, we need to obtain both a lower bound and a matching upper bound. For the lower bound, it turns out that it suffices to use an existing lower bound, which was originally derived for standard online learning _without_ switching costs. We restate this lower bound in Lemma 1.
**Lemma 1**.: _(_Seldin et al._,_ 2014_, Theorem 2)_ _In the setting of online learning (without switching costs) under a total observation budget \(B\in[K,KT]\), the minimax regret is lower bounded by \(R^{*}_{T}=\Omega(T\sqrt{K/B})\)._
Naturally, this serves as a valid lower bound for the setting with switching costs we consider. In fact, we will show that this lower bound is tight (up to poly-logarithmic factors), which in turn offers the following important message.
_Remark 5_.: If the learner can freely make observations over \(T\) rounds within the budget, introducing switching costs _does not increase_ the intrinsic difficulty of the online learning problem in terms of the minimax regret.
Now, it only remains to show that there exist algorithms that can achieve a matching upper bound (up to poly-logarithmic factors), which will be the main focus of the next subsection.
### Learning Algorithms and Upper Bounds
In this subsection, we show that there indeed exist algorithms that can achieve the lower bound in Lemma 1, which further implies the tight bound in Theorem 2. Instead of focusing on one particular algorithm, we first propose a generic algorithmic framework, which not only enables us to design various optimal learning algorithms in a unified way but also facilitates a fundamental understanding of the problem by distilling its key components.
Our generic framework builds upon the classic _Online Mirror Descent (OMD)_ framework with negative entropy regularizer (also called the _Hedge_ algorithm) (Littlestone and Warmuth, 1989) and incorporates the following three key components to tackle both switching costs and observation budget in a synergistic manner.
**Batching Technique.** The batching technique was originally proposed for addressing adaptive adversaries (Arora et al., 2012), but naturally provides low switching guarantees. We divide \(T\) rounds into batches and judiciously distribute the available observations across batches. That is, instead of consuming observations at every round as in standard online learning (which could even be infeasible when observation budget \(B\) is relatively small), we use observations only at a _single_ round randomly sampled from each batch. One key step to obtain the desired regret guarantee is to feed the (unbiased estimate of) batch-average loss to the learning algorithm at the end of each batch. While this technique is borrowed from Shi et al. (2022), the problem setup we consider is very different (see Section 5).
**Shrinking Dartobard (SD).** SD is a calibrated technique for controlling the number of action switches in online learning under a lazy version of Hedge. That is, with a carefully crafted probability distribution, the action tends to remain unchanged across two consecutive rounds (Geulen et al., 2010) while preserving the same marginal distribution as in Hedge. In our algorithmic framework, we generalize this idea to the batching case with general feedback: the same action can be played across two consecutive batches (instead of across rounds), and it is no longer required to use only full-information feedback as in Geulen et al. (2010).
**Feedback Type.** Recall that the learner is allowed to freely request feedback within the total budget. Hence, our last component lies in the feedback type. That is, the learner has the flexibility to choose the observation set \(\mathcal{O}_{u_{b}}\) (not limited to bandit or full-information feedback only). In order to achieve a matching upper bound, however, the choice of the observation set (i.e., the type of feedback) is crucial in some cases. We will elaborate on this in Section 4.3.
Putting these three components together, we arrive at our unified algorithmic framework, which is presented in Algorithm 1. Given the input \(T\), \(K\), and \(B\) of the problem, we need to determine the following input of the algorithm: the number of batches \(N\), batch size \(\tau\), learning rate \(\eta\), and indicator \(I_{\mathrm{SD}}\) (Line 1), along with the initialization of some variables (Line 2). Throughout the game, we maintain a positive weight \(W_{b}[k]\) for each action \(k\in[K]\) in each batch \(b\in[N]\). Both the weights and the action for each batch may
be updated only between two consecutive batches. Hence, in each batch \(b\), we keep playing the chosen action \(A_{b}\) until the end of the batch (Line 4); we sample a round \(u_{b}\) uniformly at random from the current batch (Line 5) and choose an observation set \(\mathcal{O}_{u_{b}}\) in a certain way (to be specified later) such that the loss of each action in \(\mathcal{O}_{u_{b}}\) will be observed at round \(u_{b}\) (Line 6). We then construct an unbiased estimate (Line 7), denoted by \(\widehat{\ell}_{b}=(\widehat{\ell}_{b}[1],\ldots,\widehat{\ell}_{b}[K])\), of the batch-average loss \(\sum_{t=(b-1)\tau+1}^{b}\ell_{t}/\tau\) (which depends on the choice of \(\mathcal{O}_{u_{b}}\) and will be specified later) and then update the weight and sampling probability of each action accordingly: \(W_{b+1}[k]=W_{b}[k]\cdot\exp(-\eta\cdot\widehat{\ell}_{b}[k])\) and \(w_{b+1}[k]:=W_{b+1}[k]/\sum_{i=1}^{K}W_{b+1}[i]\) (Line 8). Finally, we determine action \(A_{b+1}\) for the next batch (Line 9). Specifically, if the SD indicator \(I_{\mathrm{SD}}=0\), probability \(I_{\mathrm{SD}}\cdot\exp(-\eta\cdot\widehat{\ell}_{b})\) is always zero, and hence, action \(A_{b+1}\) is sampled using fresh randomness with probability proportional to action weights as normally done in Hedge: sample \(A_{b+1}\) following distribution \(w_{b+1}=(w_{b+1}[1],\ldots,w_{b+1}[K])\). If the SD indicator \(I_{\mathrm{SD}}=1\), with probability \(\exp(-\eta\cdot\widehat{\ell}_{b})\), we keep the current action for the next batch (i.e., \(A_{b+1}=A_{b}\)); otherwise, we sample a new action \(A_{b+1}\) following distribution \(w_{b+1}\).
With Algorithm 1 in hand, we are ready to introduce several specific instantiations and study their regret guarantees. In particular, for each instantiation we will specify the choice of the following parameters: number of batches \(N\), batch size \(\tau\), learning rate \(\eta\), SD indicator \(I_{\mathrm{SD}}\), and observation set \(\mathcal{O}_{u_{b}}\). In the following, we first demonstrate one simple instantiation that uses full-information feedback only. Then, we show how to generalize this instantiation using more flexible feedback (i.e., not limited to full information only) while achieving the same performance guarantee.
**Instantiation via Full-information Feedback.** In this instantiation of Algorithm 1, we receive full-information feedback at a randomly selected round \(u_{b}\) in each batch \(b\) (i.e., \(\mathcal{O}_{u_{b}}=[K]\) and \(\widehat{\ell}_{b}=\ell_{u_{b}}\)) and SD is turned on (i.e., \(I_{\mathrm{SD}}=1\)). At a high level, this can be viewed as a batched generalization of the original SD algorithm (Geulen et al., 2010) with \(N=B/K\) batches (since we have \(K\) observations in each batch), and hence, the corresponding batch size is \(\tau=T/N=KT/B\). For ease of exposition, we assume that \(N\) and \(\tau\) are integers. Specifically, we have \(N=B/K\), \(\tau=KT/B\), \(\eta=\sqrt{\frac{2\ln K}{3B}}\), \(I_{\mathrm{SD}}=1\), \(\mathcal{O}_{u_{b}}=[K]\), and \(\widehat{\ell}_{b}=\ell_{u_{b}}\). We use \(\pi_{\mathrm{full}}\) to denote this instantiation and present its regret upper bound in Proposition 2. The proof is provided in Appendix C.
**Proposition 2**.: _The worst-case regret under algorithm \(\pi_{\mathrm{full}}\) is upper bounded by \(R_{T}^{\pi_{\mathrm{full}}}=O(T\sqrt{K\ln K/B})\)._
_Remark 6_.: This result immediately implies an upper bound of the minimax regret: \(R_{T}^{\star}=O(T\sqrt{K\ln K/B})\), which, along with the lower bound in Lemma 1, further implies the tight bound in Theorem 2. Note that there is an additional \(\sqrt{\ln K}\) factor in the upper bound. This shares the same pattern as in the setting even without switching costs (see Seldin et al. (2014, Theorem 1)), where the achieved upper bound also has an additional \(\sqrt{\ln K}\) factor.
_Remark 7_.: For the previous setting considered in Section 3, the above result also implies an upper bound of the minimax regret: \(\widetilde{O}(T\sqrt{K/B_{\mathrm{ex}}})\), when \(B_{\mathrm{ex}}=\Omega(K^{1/3}T^{2/3})\), by simply ignoring all bandit feedback (i.e., \(B=B_{\mathrm{ex}}\)). On the other hand, as discussed in Section 3.4, when \(B_{\mathrm{ex}}=O(K^{1/3}T^{2/3})\), one can simply ignore extra observations and use pure bandit feedback only (e.g., batched EXP3 (Arora et al., 2012)) to achieve a \(\widetilde{O}(K^{1/3}T^{2/3})\) regret. Combining these results, along with the lower bound in Proposition 1, implies the tight bound in Theorem 1. Moreover, this also answers question **(Q2)** raised in Section 3.
The result of our first instantiation shows that the optimal regret can indeed be achieved (up to a \(\sqrt{\ln K}\) factor) when full-information feedback is employed. However, we can also show that the use of full-information feedback is not essential. In fact, it suffices to have an observation set chosen uniformly at random from all subsets of \([K]\) with the same cardinality, which leads to a more flexible instantiation of Algorithm 1 presented below.
**Instantiation via Flexible Feedback.** In this instantiation,
instead of having \(|\mathcal{O}_{u_{b}}|=K\) as under full-information feedback, we allow \(|\mathcal{O}_{u_{b}}|=M\leq K\). The key to this flexibility is a careful construction of an unbiased estimate of the batch-average loss (i.e., \(\widehat{\ell}_{b}\)). Specifically, let \(M\) be any integer that satisfies \(M\in[K]\) if \(B<T\) and \(M\in[\lceil B/T\rceil,K]\) if \(B\geq T\).2 Then, we have \(N=B/M\), \(\tau=T/N=MT/B\), \(\eta=M\sqrt{\frac{2\ln K}{3KB}}\), \(I_{\mathrm{SD}}=1\), \(\mathcal{O}_{u_{b}}\) is chosen uniformly at random from \(\{U\in 2^{[K]}:|U|=M\}\), and \(\widehat{\ell}_{b}[k]=\mathbb{I}\{k\in\mathcal{O}_{u_{b}}\}\cdot\frac{\ell_{ u_{b}}[k]}{M/K}\) for all \(k\in[K]\). We use \(\pi_{\text{flex}}\) to denote this instantiation and present its regret upper bound in Proposition 3. The proof is provided in Appendix D.
Footnote 2: To fully use the budget, \(M\) cannot be too small when \(B\geq T\).
**Proposition 3**.: _The worst-case regret under algorithm \(\pi_{\text{flex}}\) is upper bounded by \(R_{T}^{\pi_{\text{flex}}}=O(T\sqrt{K\ln K/B})\)._
An astute reader may already notice that in the above flexible instantiation, while the number of observations can be one (i.e., \(|\mathcal{O}_{u_{b}}|=1\)), it is not the same as standard bandit feedback. This is because here, \(\mathcal{O}_{u_{b}}\) needs to be chosen uniformly at random rather than simply being the action played in that batch (i.e., \(\mathcal{O}_{u_{b}}=\{A_{b}\}\)) as in the standard bandit setting (with a batch size of one). Motivated by this subtle difference, we will devote the next subsection to studying the impact of feedback type.
### Impact of Feedback Type
In this subsection, we study the impact of feedback type by presenting another instantiation of Algorithm 1 via pure bandit feedback only. In this case, we naturally have \(B\leq T\).
**Instantiation via Bandit Feedback.** This instantiation is a generalized version of batched EXP3 (Arora et al., 2012) with _flexible batch size_. Specifically, we have \(N=B\), \(\tau=T/B\), \(\eta=\sqrt{\frac{2\ln K}{BK}}\), \(I_{\mathrm{SD}}=0\), \(\mathcal{O}_{u_{b}}=\{A_{b}\}\), and \(\widehat{\ell}_{b}[k]=\mathbb{I}\{k\in\mathcal{O}_{u_{b}}\}\cdot\frac{\ell_{ u_{b}}[k]}{w_{b}[k]}\) for all \(k\in[K]\). We use \(\pi_{\mathrm{b}}\) to denote this instantiation. When \(B=O(K^{1/3}T^{2/3})\), we obtain a regret upper bound for \(\pi_{\mathrm{b}}\) and state it in Proposition 4. The proof is provided in Appendix E.
**Proposition 4**.: _When \(B=O(K^{1/3}T^{2/3})\), the worst-case regret under algorithm \(\pi_{\mathrm{b}}\) is upper bounded by \(R_{T}^{\pi_{\mathrm{b}}}=O(T\sqrt{K\ln K/B})\)._
_Remark 8_.: This result is encouraging, in the sense that when \(B=O(K^{1/3}T^{2/3})\), even using pure bandit feedback can achieve the optimal minimax regret of \(\widetilde{\Theta}(T\sqrt{K/B})\). This result also answers question **(Q1)** raised in Section 3. First, it captures the regret scaling with respect to the amount of bandit feedback (i.e., still \(\Theta(1/\sqrt{B})\)) when \(B\) is relatively small. Second, it implies that to achieve a regret of \(\widetilde{\Theta}(K^{1/3}T^{2/3})\), it suffices to use bandit feedback from only \(B=\Theta(K^{1/3}T^{2/3})\) rounds rather than all \(T\) rounds as in the classic algorithms (Arora et al., 2012). The same minimax regret at these two endpoints (\(B=\Theta(K^{1/3}T^{2/3})\) and \(B=T\)) further implies that if only bandit feedback is allowed, the minimax regret is also \(\widetilde{\Theta}(K^{1/3}T^{2/3})\) when \(B=\Omega(K^{1/3}T^{2/3})\) (i.e., in-between the two endpoints). In this case, bandit feedback is _no longer sufficient_ to achieve the optimal minimax regret of \(\widetilde{\Theta}(T\sqrt{K/B})\), although full-information and flexible feedback can still achieve this optimal minimax regret (see Propositions 2 and 3). Clearly, this shows the crucial impact of different types of feedback (when the total budget \(B\) is large), which answers the second part of question **(Q3)**. On the other hand, however, a straightforward result (Proposition 5 in Appendix F), along with Propositions 2 and 3 and Lemma 1, shows that in the standard setting without switching costs, all three types of feedback can achieve optimal regret in the full range of \(B\). This reveals that the impact of feedback type is partly due to switching costs. We also summarize these results in Table 1.
_Remark 9_.: Under bandit feedback, adopting a different regularizer called _Tsallis entropy_(Audibert and Bubeck, 2009) to the OMD framework could further remove the \(\sqrt{\ln K}\) factor in the upper bound from Proposition 4 and exactly match the lower bound (order-wise) presented in Lemma 1.
## 5 Related Work
In this section, we present detailed discussions on several lines of research that are most relevant to ours. We omit the discussion on bandit and expert problems with switching costs as we have discussed this line of work in Section 1.
**Online Learning with Total Observation Budget.** In this line of research, the focus is on regret minimization when feedback is not always available and hence "limited" within a total budget. For example, in the so-called "label efficient (bandit) game" (Cesa-Bianchi et al., 2004; Audibert and Bubeck, 2010), the learner can ask for full-information/bandit feedback from no more than \(m\in[1,T]\) round(s). It is shown that the tight optimal regrets are \(\Theta(T\sqrt{\ln K/m})\) and \(\Theta(T\sqrt{K/m})\) under full-information and bandit feedback, respectively. Seldin et al. (2014) also considers a total observation budget in online learning, where the learner can freely request feedback, as long as the total amount of observed losses does not exceed the given total budget \(B\). They establish a tight characterization of the minimax regret in their setting (i.e., \(\widetilde{\Theta}(T\sqrt{K/B})\)). However, they do not consider switching costs, nor the case when the total observation budget is smaller than \(T\) in their algorithm design. Interestingly, we show that introducing switching costs _does not increase_ the intrinsic difficulty of online learning in the sense that the minimax regret remains \(\widetilde{\Theta}(T\sqrt{K/B})\), but the feedback type becomes crucial.
**Bandits with Additional Observations.**Yun et al. (2018) considers the bandit setting with additional observations, where the learner can freely make \(n\in[0,K-1]\) obser
vations at each round in addition to the bandit feedback. Hence, this can be viewed as a special case of online learning with a total observation budget (Seldin et al., 2014). That is, a total of \((n+1)T\) observations are used in a particular way (i.e., bandit plus extra observations). They present a tight characterization of the scaling of the minimax regret with respect to \(K\), \(T\), and \(n\). Similar to Seldin et al. (2014), however, switching costs are not considered.
**Online Learning with Switching Costs and Feedback Graphs.**Arora et al. (2019) considers online learning with switching costs and feedback graphs, where given a feedback graph \(G\), the learner observes the loss associated with the neighboring action(s) of the chosen action (including itself). However, the feedback graph is given and hence the additional feedback is _not_ of the learner's choice. Arora et al. (2019) shows that in this setting, the minimax regret is \(\tilde{\Theta}(\gamma(G)^{1/3}T^{2/3})\), where \(\gamma(G)\) is the domination number of the feedback graph \(G\). Hence, the dependency on \(T\) remains the same as in the standard bandit setting without additional observations (i.e., \(\tilde{\Theta}(T^{2/3})\)). On the contrary, in the setting we consider, the learner can freely decide the loss of which actions to observe, which leads to different (and more interesting) regret bounds.
**Online Learning with Limited Switches.**Altschuler and Talwar (2018) considers online learning with limited switches. In contrast to the settings with switching costs, here the learner does not pay additional losses for switching actions; instead, the total number of switches allowed is capped at \(S\). Compared to our setting, a key difference is that switching is a constraint rather than a penalty added to the loss/cost function. They show that in the bandit setting, the minimax regret is \(\Theta(T\sqrt{K/S})\), i.e., the regret improves as the switching budget increases; in the expert setting, however, there is a phase-transition phenomenon: while the minimax regret is \(\tilde{\Theta}(T\ln K/S)\) when \(S=O(\sqrt{T\ln K})\), it remains \(\tilde{\Theta}(\sqrt{T\ln K})\) when \(S=\Omega(\sqrt{T\ln K})\).
**Online Learning against Adaptive Adversaries**. Online learning with switching costs can also be viewed as a special case of _learning against adaptive adversaries_, where the losses at round \(t\) are adapted to actions taken at both rounds \(t\) and \(t-1\) (in contrast to the oblivious adversaries we consider). Such adversaries have a _bounded memory_ (of size one), in the sense that they could adapt only up to the _most recent_ action, instead of any history in the earlier rounds (Cesa-Bianchi et al., 2013). Adopting the multi-scale random walk argument in Dekel et al. (2013), it has been shown that against _adaptive adversaries with a memory of size one_, the _minimax policy regret_ is \(\tilde{\Theta}(T^{2/3})\) under _both_ bandit feedback (Cesa-Bianchi et al., 2013) and full-information feedback (Feng and Loh, 2018). This is fundamentally different from the special case with switching costs, where the minimax regret is different under bandit feedback and full-information feedback (\(\widetilde{\Theta}(T^{2/3})\) vs. \(\widetilde{\Theta}(\sqrt{T})\)).
**Stochastic Bandits and the Best of Both Worlds.**Note that the above discussions have been focused on the adversarial setting. There is another body of work focused on the stochastic setting (see, e.g., Auer et al. (2002); Auer (2003); Simchi-Levi and Xu (2019)), where the loss/reward follows some fixed distribution rather than being generated arbitrarily by an adversary. Hence, it is very different from the adversarial setting we consider. An interesting line of work has been focused on designing algorithms that can perform well in both adversarial and stochastic settings, thus achieving _the best of both worlds_ (see, e.g., Bubeck and Slivkins (2012); Zimmert et al. (2019)).
**Other Related Work.**In Shi et al. (2022), a novel bandit setting with switching costs and additional feedback has been considered. Specifically, the learner maintains an "action buffer" for each round, which is a subset of actions with fixed cardinality \(m\in[K]\), and the learner can only take an action from this buffer set. Their switching cost can be roughly viewed as how much change is made to this buffer set throughout the game - replacing an action in the buffer set incurs a constant cost. While the learner can observe the losses of all the actions in this buffer set for free, the learner can also choose to receive full-information feedback (i.e., observing the losses of all actions rather than just actions in the buffer set) by paying another (larger) constant cost. Although we draw inspiration from their work for deriving the lower bound and designing algorithms, both their problem setup and regret definition are very different from ours, and more importantly, they do not consider observation budget.
## 6 Conclusion
Our work is motivated by a well-known gap in the minimax regret under bandit feedback and full-information feedback in online learning with switching costs. We attempted to fundamentally understand the role of feedback by studying two cases of observation budget: (i) bandit feedback plus an extra observation budget and (ii) a total observation budget. Our findings reveal that both the amount and type of feedback play crucial roles when there are switching costs.
One interesting future direction is to consider stronger high-probability regret guarantees (Neu, 2015). Another direction is to achieve _the best of both worlds_ guarantees for regrets with switching costs (Rouyer et al., 2021; Amir et al., 2022).
## Acknowledgments
We thank the anonymous paper reviewers for their insightful feedback. This work is supported in part by the NSF grants under CNS-2112694 and CNS-2153220. |
2307.06592 | Noncommutative crepant resolutions of $cA_n$ singularities via Fukaya
categories | We compute the wrapped Fukaya category $\mathcal{W}(T^*S^1, D)$ of a cylinder
relative to a divisor $D= \{p_1,\ldots, p_n\}$ of $n$ points, proving a mirror
equivalence with the category of perfect complexes on a crepant resolution
(over $k[t_0,\ldots, t_n]$) of the singularity $uv=t_0t_1\ldots t_n$. Upon
making the base-change $t_i= f_i(x,y)$, we obtain the derived category of any
crepant resolution of the $cA_{n}$ singularity given by the equation $uv=
f_0\ldots f_n$. These categories inherit braid group actions via the action on
$\mathcal{W}(T^*S^1,D)$ of the mapping class group of $T^*S^1$ fixing $D$. We
also give a geometric model of the derived contraction algebra of a $cA_n$
singularity in terms of the relative Fukaya category of the disc. | Jonathan David Evans, Yanki Lekili | 2023-07-13T07:29:12Z | http://arxiv.org/abs/2307.06592v2 | # Noncommutative crepant resolutions of \(cA_{n}\) singularities via Fukaya categories
###### Abstract
We compute the wrapped Fukaya category \({\cal W}(T^{*}S^{1},D)\) of a cylinder relative to a divisor \(D=\{p_{0},\ldots,p_{n}\}\) of \(n+1\) points, proving a mirror equivalence with the category of perfect complexes on a crepant resolution (over \(k[t_{0},\ldots,t_{n}]\)) of the singularity \(uv=t_{0}t_{1}\ldots t_{n}\). Upon making the base-change \(t_{i}=f_{i}(x,y)\), we obtain the derived category of any crepant resolution of the \(cA_{n}\) singularity given by the equation \(uv=f_{0}\ldots f_{n}\). These categories inherit braid group actions via the action on \({\cal W}(T^{*}S^{1},D)\) of the mapping class group of \(T^{*}S^{1}\) fixing \(D\). We also give geometric models for the derived contraction algebras associated to a \(cA_{n}\) singularity in terms of the relative Fukaya category of the disc.
## 1 Introduction
**SS1.1** Consider the Fukaya category of a point with coefficients in a ring \(R\). Before taking the triangulated envelope, there is only one object: the point itself, with endomorphism algebra \(R\). If \(R\) is not a field then there are non-invertible non-zero endomorphisms which allow us to construct new twisted complexes in the derived Fukaya category. Via the Yoneda embedding, we can think of the derived Fukaya category of a point with coefficients in \(R\) as \({\rm perf}(R)\). We can think of this as the world's lousiest \(A\)-model mirror to \({\rm Spec}\,R\). It is lousy in the precise sense that symplectic geometry has given us absolutely no information here: all of the interesting information is contained in the coefficient ring. The moral of the current paper is that there is a whole spectrum of ways we can get at a single triangulated \(A_{\infty}\)-category by combining symplectic manifolds with coefficient rings. We work out in detail some examples where the symplectic manifold is a 2-dimensional cylinder.
**SS1.2** The starting point for these examples is the mirror symmetry result proved in [24] between (on the A-side) \(T^{*}S^{1}\) with a collection \(D\) of \(n+1\) punctures and (on the B-side) a certain reducible curve \(C_{n+1}\) with \(n+1\) nodes. The two sides of the mirror, together with dual Lagrangian torus fibrations are shown in Figure 1 (the noncompact fibres on the A-side are dual to the point-like fibres on the B-side). The precise statement of mirror symmetry
identifies the wrapped Fukaya category of Lagrangian branes avoiding the punctures with the derived category of perfect complexes on the nodal curve.
**SS1.3**: Consider the versal deformation \(\{uv=t_{0}\cdots t_{n}\}\) of an \(A_{n}\)-curve singularity; this admits a crepant resolution \(\mathcal{Y}\) with a morphism to \(\operatorname{Spec}k[t_{0},\ldots,t_{n}]\) whose central fibre is \(C_{n+1}\). The B-model in our main example will be \(\mathcal{Y}\). To build an A-model mirror to this, we need to find a Fukaya category which is linear over \(R=k[t_{0},\ldots,t_{n}]\) and which specialises to the Fukaya category of the \((n+1)\)-punctured cylinder when the \(t\)-variables are set equal to zero. We therefore use \(R\) as the coefficient ring1 for Floer theory on \(T^{*}S^{1}\) and work relative to \(D\), using intersections with \(D\) to weight polygons contributing to the Floer \(A_{\infty}\)-operations.2 We will further base-change coefficient rings to find mirrors to non-versal deformations.
Footnote 1: to get \(R\)-linearity.
Footnote 2: to get the deformation.
**SS1.4**: Here is the general setting. Let \(\Sigma\) be a surface (possibly non-compact) and let \(D=\{z_{0},\ldots,z_{n}\}\subset\Sigma\) be a finite set of marked points. Fix a field \(k\), let \(n=|D|-1\), and let \(R:=k[t_{0},\ldots,t_{n}]\). We consider the following wrapped Fukaya category of \(\Sigma\) relative to \(D\):
* The objects are properly-immersed, exact, graded Lagrangian branes in \(\Sigma\) avoiding the marked points \(D\) and asymptotic to conical Lagrangians near the ends of \(\Sigma\). The brane-data comprises a choice of orientation, relative spin-structure, grading, and local system.
* The hom-spaces are given by wrapped intersections (see [1] or [11, Appendix B]).
Figure 1: A punctured cylinder \(T^{*}S^{1}\setminus D\) and a nodal curve \(C_{n+1}\). Both are equipped with dual Lagrangian torus fibrations—the fibres are the dashed curves. The fibres above are dual to those below in the sense of having reciprocal radii; the noncompact fibres (“infinite radius”) through the punctures are dual to the nodes (“zero radius”).
* The \(A_{\infty}\)-operations are given by counting holomorphic polygons with boundaries on (wrapped) Lagrangians, but each polygon \(P\) contributes to the corresponding operation with a weight of \(\prod_{i=0}^{n}t_{i}^{\mathrm{mult}(P,z_{i})}\in R\).
* Finally, we take the split-closed triangulated envelope to get an \(R\)-linear triangulated \(A_{\infty}\)-category which we will write as \(\mathcal{W}(\Sigma,D)\).
**SS1.5** We will frequently change our coefficient ring \(R\). If \(S\) is an \(R\)-algebra (i.e. a ring with a morphism \(R\to S\)) then we will write \(\mathcal{W}(\Sigma,D)\otimes_{R}S\) for the corresponding \(S\)-linear \(A_{\infty}\)-category where all hom-spaces are tensored with \(S\).
**SS1.6** Relative Fukaya categories have played an important role in Floer theory starting with Seidel's paper on mirror symmetry for the quartic surface [33], and the idea of deforming Floer cohomology by weighting operations according to how many times a polygon passes through a point goes back to Ozsvath and Szabo [28] in their work on Heegaard Floer homology. For a detailed exposition of Fukaya categories in the exact setting, see [31]; for wrapped categories in general, see [1] or [11, Appendix B], but for a very explicit model of the wrapped Fukaya category of a surface, see [4] and [15, Section 3.3]. For relative Fukaya categories see [30, 35] and for a very similar example of a relative Fukaya category of a surface, see [23], and for a version with an arithmetic flavour see [27].
### Main Theorem.
We will focus on the specific case where \(\Sigma\) is the cotangent bundle \(T^{*}S^{1}\). We will pick a collection of Lagrangian arcs \(L_{0},\ldots,L_{n}\) as shown in Figure 2.
Let \(S\) be an \(R\)-algebra. We will prove the following results:
Figure 2: The surface \(T^{*}S^{1}\) together with its Lagrangian arcs \(L_{0},\ldots,L_{n}\), marked points \(z_{0},\ldots,z_{n}\) and some of the Reeb chords \(a_{i}\) and \(b_{i}\).
A. _The endomorphism \(A_{\infty}\)-algebra of \(\bigoplus_{i=0}^{n}L_{i}\) in \({\cal W}(T^{*}S^{1},D)\otimes_{R}S\) is quasi-isomorphic to the algebra \({\cal A}(T^{*}S^{1},D)\otimes_{R}S\) where \({\cal A}(T^{*}S^{1},D)\) is defined in SS2.1 below. This algebra is supported in degree zero, and hence has no nontrivial higher products._ (See Section 2.)
B. _Let \({\cal L}\subset{\cal W}(T^{*}S^{1},D)\) denote the subcategory split-generated by the Lagrangian arcs \(L_{0},\ldots,L_{n}\). Then \({\cal L}\otimes_{R}S\) is preserved by the action of the mapping class group \(\Gamma(T^{*}S^{1},D)\) of compactly-supported graded symplectomorphisms of \(T^{*}S^{1}\) fixing \(D\) pointwise._ (See Section 3.)
SS1.8 Remarks.(i) In Appendix A, we will show that the arcs split-generate the category \({\cal W}(T^{*}S^{1},D)\otimes_{R}\bar{R}\) where \(\bar{R}\) is the completion \(k[\![t_{0},\ldots,t_{n}]\!]\). We expect that the arcs split-generate \({\cal W}(T^{*}S^{1},D)\) itself, which would render SS1.7(B) redundant, but we cannot currently see how to prove this without passing to the completion.
(ii) We will prove something slightly more general than SS1.7(B) which gives quasi-equivalences for symplectomorphisms which permute the points of \(D\). For some choices of \(R\)-algebra \(S\), these will be autoequivalences of \({\cal L}\). See SS3.1 for details.
(iii) By construction the algebra \({\cal A}(T^{*}S^{1},D)\) is linear over \(R\) but, in fact, it turns out that it has a bigger center given by \(R[u,v]/(uv-t_{0}t_{1}\ldots t_{n})\). We expect that the autoequivalences given in SS1.7(B) are linear over this bigger ring (not just linear over \(R\)). The main reason to expect this is that the additional variables \(u\) and \(v\) come from Hochschild cohomology classes of \({\cal A}(T^{*}S^{1},D)\) associated with the infinite ends of \(T^{*}S^{1}\), whereas our autoequivalences are induced by compactly supported symplectomorphisms.
SS1.9 Mirror symmetry interpretation.Theorem SS1.7(A) implies that
\[{\cal L}\simeq\mbox{\rm perf}({\cal A}(T^{*}S^{1},D)).\]
This category has an interpretation on the B-side. Consider the singular variety given by
\[{\cal Y}_{0}=\mbox{\rm Spec}\,R[u,v]/(uv-t_{0}\cdots t_{n})\subset{\mathbb{A }}^{n+3}\]
This is a toric singularity. Indeed, consider the vector space \(V={\mathbb{A}}^{2(n+1)}\) generated by the entries of the 2-by-\((n+1)\) matrix
\[\left(\begin{array}{cccc}x_{0}&x_{1}&\cdots&x_{n}\\ y_{0}&y_{1}&\cdots&y_{n}\end{array}\right)\]
and consider the action of the torus \(T={\mathbb{G}}_{m}^{n}\) whose \(i^{th}\) component acts as follows:
\[\lambda:\left(\begin{array}{cccccc}x_{0}&\ldots&x_{i-1}&x_{i}&\ldots&x_{n} \\ y_{0}&\ldots&y_{i-1}&y_{i}&\ldots&y_{n}\end{array}\right)\rightarrow\left( \begin{array}{cccccc}x_{0}&\ldots&\lambda x_{i-1}&\lambda^{-1}x_{i}&\ldots&x _{n}\\ y_{0}&\ldots&\lambda^{-1}y_{i-1}&\lambda y_{i}&\ldots&y_{n}\end{array}\right)\]
Then \({\cal Y}_{0}\) can be identified with the affine GIT quotient \(V\mathbin{/\!\!\!/}T\), where we can see that \(t_{i}=x_{i}y_{i}\), \(u=x_{0}x_{1}\ldots x_{n}\) and \(v=y_{0}y_{1}\ldots,y_{n}\). The generic GIT quotients \(V\mathbin{/\!\!\!/}_{\theta}T\) provide toric crepant resolutions of \({\cal Y}_{0}\). These correspond to triangulations of \([0,1]\times\Delta_{n}\) where \(\Delta_{n}\) denotes the \(n\)-simplex. All of these are (non-canonically) isomorphic to a toric Calabi-Yau variety, which we denote by \({\cal Y}\). These toric Calabi-Yau varieties are well-known ([9], [25]). We have a map \({\cal Y}\to{\rm Spec}\,R\) given by projection to \((t_{0},\ldots t_{n})\). The fiber of this map over \(0\) is a nodal curve given by a chain of \({\mathbb{P}}^{1}\)'s together with two \({\mathbb{A}}^{1}\)'s attached at the two ends, and the total space \({\cal Y}\) is the versal deformation of this nodal curve.
There is a tilting bundle \({\cal V}\) on \({\cal Y}\) constructed by Van den Bergh [38]; we review this construction in Section 4. In SS4.6, we will see that \({\rm End}_{\cal Y}({\cal V})\) is precisely our algebra \({\cal A}(T^{*}S^{1},D)\) and since \({\cal Y}\) is smooth, this means that
\[{\cal L}\simeq D^{b}(\mathop{\rm coh}({\cal Y}))\]
which can be regarded as a relative version of homological mirror symmetry for \({\cal Y}\) (see also Remark SS1.13).
The braid group action on \(D^{b}(\mathop{\rm coh}({\cal Y}))\) is constructed by Donovan-Segal [9] by the variation of GIT method, and previously by Bezrukavnikov-Riche [7] via Springer theory. Under the mirror symmetry equivalence discussed above their action on the \(B\)-side almost certainly corresponds to our braid group action on the \(A\)-side given by Theorem SS1.7(B) but we do not check the details here.
SS1.10 Base change.We get further results by working over an \(R\)-algebra \(S\). Let \({\cal Y}_{S,0}={\rm Spec}({\cal O}_{Y_{0}}\otimes_{R}S)\). Let \({\cal Y}_{S}\) be the fibre product:
In SS4.7, we will show that the pullback \(j^{*}{\cal V}\) is still a tilting object with
\[{\rm End}(j^{*}{\cal V})\cong{\cal A}(T^{*}S^{1},D)\otimes_{R}S.\]
The variety \({\cal Y}_{S}\) is a partial resolution of \({\cal Y}_{S,0}\), and Theorem SS1.7(B) now yields an action of \(\Gamma(T^{*}S^{1},D)\) by autoequivalences on \({\rm perf}({\cal Y}_{S})\). If \({\cal Y}_{S}\) is itself smooth, this category is quasi-equivalent to \(D^{b}(\mathop{\rm coh}({\cal Y}_{S}))\).
SS1.11 Example.If we take \(S=k[t]\) considered as an \(R\)-module via the homomorphism \(t_{i}\mapsto t\) then \({\cal Y}_{S,0}={\rm Spec}\,(k[u,v,t]/(uv-t^{n+1}))\) is the \(A_{n}\) surface singularity and \({\cal Y}_{S}\) is its minimal resolution, so we get a \(\Gamma(T^{*}S^{1},D)\) action on \(D^{b}(\mathop{\rm coh}({\cal Y}_{S}))\). This is one of the examples where we get a bigger group action: any compactly-supported graded symplectomorphism of \(T^{*}S^{1}\) fixing \(D\)_sterwise_ acts as an autoequivalence of \({\cal L}\). This yields an
action of the annular (extended) braid group by autoequivalences. In this example, an action of the (usual) braid group was known to Seidel and Thomas [34] and an extended braid group action was constructed by Gadbled, Thiel and Wagner in [13].
SS1.12 Example.Let \(f(x,y)\) be a polynomial whose lowest order term has degree \(n+1\) and consider the compound \(A_{n}\) singularity \(\{uv=f(x,y)\}\subset\mathbb{C}^{4}\). If \(f\) factors as \(f_{0}\cdots f_{n}\) with each curve \(\{f_{i}(x,y)=0\}\) smooth then the singularity admits a small resolution. This resolution has the form \(\mathcal{Y}_{S}\) where \(S=k[x,y]\) is considered as an \(R\)-algebra via the homomorphism \(t_{i}\mapsto f_{i}(x,y)\). The algebra \(\mathcal{A}(T^{*}S^{1},D)\otimes_{R}S\) is called a _noncommutative crepant resolution_ (NCCR) of this singularity: it is a noncommutative algebra whose derived category is equivalent to the derived category of the resolution.
Theorem SS1.7(B) yields an action of \(\Gamma(T^{*}S^{1},D)\) on \(D^{b}(\mathrm{coh}(\mathcal{Y}_{S}))\). This can be enhanced to the bigger group of symplectomorphisms: let \(\psi\) be a symplectomorphism of \(T^{*}S^{1}\) fixing \(D\) setwise and let \(\sigma\) be the permutation \(\psi(z_{i})=z_{\sigma(i)}\); we get an autoequivalence from \(\psi\) if \(f_{\sigma(i)}=f_{i}\) for all \(i\). Autoequivalences of \(D^{b}(\mathrm{coh}(\mathcal{Y}_{S}))\) called "mutation functors" were constructed by Iyama and Wemyss [20] using flops along the exceptional curves.
SS1.13These examples show that, although this Fukaya category leaves much of the heavy-lifting to the module category of the coefficient ring, it does readily give geometric insights which are nontrivial on the \(B\)-side. The relative Fukaya category \(\mathcal{W}(T^{*}S^{1},D)\) is appealing because working with Fukaya categories of surfaces reduces to combinatorial algebra. However, in view of [26, Conjecture E], it is possible to relate the relative Fukaya category \(\mathcal{W}(T^{*}S^{1},D)\) to an appropriate subcategory of an absolute Fukaya category of a higher dimensional symplectic manifold \(X\). See [26, Example 2.5] for a detailed exposition of the case \(D=\{1\}\).
SS1.14Derived contraction algebra.The derived contraction algebra is a DG-algebra associated to a small resolution \(\mathcal{Y}\to\mathcal{Y}_{0}\) that prorepresents derived deformations of the irreducible components of the reduced exceptional fiber of the contraction. Concretely, it is a non-positively graded DG-algebra whose zeroth cohomology recovers the contraction algebra of Donovan and Wemyss [10]. See the papers by Hua-Toda [18], Hua [16], Hua-Keller [17], and Booth [5] for more background. The derived contraction algebra is obtained by localising a noncommutative resolution away from an idempotent. From the Fukaya-categorical description of the noncommutative resolution in the \(cA_{n}\) case from SS1.12, we can give a geometric interpretation of this localisation: the derived contraction algebra can be described using the relative Fukaya category of the punctured disc \((T^{*}S^{1}\setminus L_{0},D)\). We discuss this in Section 6.
SS1.15 Acknowledgements.JE is supported by EPSRC grant EP/W015749/1. YL is partially supported by the Royal Society URF\(\backslash\)R\(\backslash\)180024 and EPSRC grant EP/W015889/1. We would like to thank Michael Wemyss for enlightening discussions which led to a much
cleaner approach, and Matt Booth, Gustavo Jasso, Daniil Mamaev and Richard Thomas for helpful conversations.
## 2 The Floer cohomology algebra
SS2.1 Definition of \(\mathcal{A}(T^{*}S^{1},D)\).Let \(Q_{n+1}\) be the quiver in Figure 3 with vertices \(L_{0},\ldots,L_{n}\) and arrows3\(a_{i}\colon L_{i-1}\to L_{i}\), \(b_{i}\colon L_{i}\to L_{i-1}\).
Footnote 3: Indices are taken to belong to the cyclic group \(\mathbb{Z}/(n+1)\).
Recall that \(R=k[t_{0},\ldots,t_{n}]\). Consider the path algebra \(RQ_{n+1}\) of \(Q_{n+1}\) with coefficients in the ring \(R\); that is elements of \(RQ_{n+1}\) are \(R\)-linear combinations of paths in \(Q_{n+1}\) and multiplication is given by concatenate-or-die. We write \(e_{i}\) for the idempotent corresponding to the constant (lazy) path at the vertex \(L_{i}\). Let \(I_{R}\subset RQ_{n}\) be the ideal of \(RQ_{n}\) generated by
\[a_{i}b_{i}-t_{i}e_{i+1},\quad b_{i}a_{i}-t_{i}e_{i},\quad i=0,\ldots,n.\]
Write \(\mathcal{A}(T^{*}S^{1},D)\) for the algebra \(RQ_{n}/I_{R}\), considered as an \(A_{\infty}\)-algebra concentrated in degree zero with no differential or higher operations.
Theorem SS1.7(A) follows immediately from the next proposition.
Proposition._The \(A_{\infty}\)-algebra \(\bigoplus_{i,j=0}^{n}CF(L_{i},L_{j})\) is quasi-equivalent to \(\mathcal{A}(T^{*}S^{1},D)\). Note that, in this proof, we write \(CF\) to mean \(\hom_{\mathcal{W}(T^{*}S^{1},D)}\)._
Proof.: We will use the model of the Fukaya category from [15]. The arrows labelled \(a\) and \(b\) in Figure 3 represent the Reeb chords with the same names in Figure 2, considered as wrapped intersection points \(a_{i}\in CF^{0}(L_{i},L_{i+1})\), \(b_{i}\in CF^{0}(L_{i+1},L_{i})\). All Reeb chords (called "boundary paths" in [15]) can be obtained by concatenating these, and therefore
Figure 3: The quiver \(Q_{n+1}\).
the \(R\)-module \(CF(L_{i},L_{j})\) has as a basis the set of all paths from \(L_{i}\) to \(L_{j}\) in \(Q_{n+1}\). Here, we include the constant path \(e_{i}\) at \(L_{i}\), thought of as the identity element of \(CF(L_{i},L_{i})\).
Since all of these chords are concatenations of chords of degree zero, everything is in degree zero, which implies that the only nontrivial \(\mu_{k}\)-operation on \(\bigoplus_{i,j}CF(L_{i},L_{j})\) is \(\mu_{2}\): the differential and higher products all vanish. To compute \(\mu_{2}\), aside from concatenation of chords, we need to count polygons. The arcs \(L_{i}\) cut \(\Sigma\) into \(m+1\) quadrilaterals \(D_{0},\ldots,D_{n}\), where we write \(D_{i}\) for the quadrilateral containing the point \(z_{i}\). Using the formula4[15, Eq. 3.18] and keeping track of our additional weighting from the marked points, we see that:
Footnote 4: The authors of [15] state this formula for \(\mu_{k}\) with \(k\geq 3\) only because they do not have any quadrilaterals like \(D_{i}\) in [15].
\[\mu_{2}(a_{i},b_{i})=t_{i}e_{i+1}\qquad\mu_{2}(b_{i},a_{i})=t_{i}e_{i}\]
for all \(i\), where these contributions come from \(D_{i}\). Any other contributions to \(\mu_{2}\) would need to come from quadrilaterals, and any quadrilateral can be decomposed as a union of \(D_{i}\)s, so any other \(\mu_{2}\) product can be deduced from these.
## 3 Autoequivalences
SS3.1 Group action.Let \(R=k[t_{0},\ldots,t_{n}]\). Given a permutation \(\sigma\) of \(\{0,1,\ldots,n\}\), let \(R_{\sigma}\) denote the \(R\)-module whose underlying vector space is \(R\) but \(t_{i}\) acts as multiplication by \(t_{\sigma(i)}\). Consider the triangulated \(A_{\infty}\)-category
\[\mathcal{W}(T^{*}S^{1},D)\rtimes S_{n+1}:=\coprod_{\sigma\in S_{n+1}}\mathcal{ W}(T^{*}S^{1},D)\otimes_{R}R_{\sigma}\]
where the morphism spaces between different components are zero. Given a graded symplectomorphism \(\psi\colon T^{*}S^{1}\to T^{*}S^{1}\) satisfying \(\psi(D)=D\), we get a permutation \(\sigma\in S_{n+1}\) defined by \(\psi(z_{i})=z_{\sigma(i)}\). This induces an autoequivalence
\[\mathcal{W}(T^{*}S^{1},D)\rtimes S_{n+1}\to\mathcal{W}(T^{*}S^{1},D)\rtimes S _{n+1}\]
sending \(\mathcal{W}(T^{*}S^{1},D)\otimes_{R}R_{\tau}\) to \(\mathcal{W}(T^{*}S^{1},D)\otimes_{R}R_{\sigma\tau}\). In particular, this gives an action of the pure annular braid group by autoequivalences on \(\mathcal{W}(T^{*}S^{1},D)\).
SS3.2 Theorem._Let \(\mathcal{L}_{\sigma}\) denote the subcategory of \(\mathcal{W}(T^{*}S^{1},D)\otimes_{R}R_{\sigma}\) generated by the arcs \(L_{0},\ldots,L_{n}\). Then the autoequivalences from SS3.1 preserve \(\coprod_{\sigma\in S_{n+1}}\mathcal{L}_{\sigma}\)._
We now begin the proof of this theorem, which will conclude in SS3.10. We will focus on the case \(n\geq 2\) because it can be handled uniformly: for small \(n\) the arguments are similar but the pictures are slightly different because \(L_{1}=L_{n}\) or \(L_{0}=L_{1}=L_{n}\). Throughout the argument we will ignore signs and orientations of moduli spaces. The reason we can get away with this is explained in Remark SS3.11.
**SS3.3** Let \(\psi_{i}\colon T^{*}S^{1}\to T^{*}S^{1}\) denote the half-twist around the arc connecting \(z_{i}\) to \(z_{i+1}\) (indices taken modulo \(n+1\)). Let \(\rho\colon T^{*}S^{1}\to T^{*}S^{1}\) denote the symplectomorphism which preserves concentric circles in \(T^{*}S^{1}\), fixing the two boundary components pointwise and rotating the points of \(D\) by \(2\pi/(n+1)\). Let \(\delta\) be a boundary-parallel Dehn twist parallel to the inner boundary of \(T^{*}S^{1}\). The mapping classes \(\psi_{0},\ldots,\psi_{n},\rho,\delta\) generate the graded symplectic mapping class group: see5[13, Section 1]. The symplectomorphism \(\delta\) acts trivially on our Lagrangians: they are objects of the wrapped category and \(\delta\) is part of the wrapping that we would do anyway to compute hom-spaces. The symplectomorphism \(\rho\) cyclically permutes the \(L_{i}\). So to prove that \(\Gamma(T^{*}S^{1},D)\) preserves \({\cal L}\), it suffices to check that \(\psi_{i}(L_{j})\) is generated by the arcs \(L_{0},\ldots,L_{n}\) for all \(i,j\). In fact, \(\psi_{i}(L_{j})=L_{j}\) unless \(i=j\), so we just need to study \(\psi_{i}(L_{i})\). Moreover, by cyclic symmetry of \((T^{*}S^{1},D)\) we can assume that \(i=0\).
Footnote 5: Gadbled, Thiel and Wagner treat the inner boundary as a puncture, so do not need \(\delta\).
**SS3.4** The half-twisted arc \(\psi_{0}(L_{0})\) is shown in Figure 4. To localise the calculation near the diagram, we will insert a stop (in the sense of Sylvan [37]) on each of the two boundary components and work first in the partially wrapped Fukaya category. We will write down a twisted complex \(\mathbb{L}^{\prime}\) built out of \(L_{n}\), \(L_{0}\) and \(L_{1}\) and a quasi-isomorphism \(q\in CF(\mathbb{L}^{\prime},\psi_{0}(L_{0}))\). If we then apply Sylvan's stop removal functor to this twisted complex, we obtain a twisted complex \(\mathbb{L}\) in \({\cal W}(T^{*}S^{1},D)\) which is quasi-isomorphic to \(\psi_{0}(L_{0})\).
Figure 4: The half-twisted arc \(\psi_{0}(L_{0})\), perturbed slightly along the Reeb flow to separate it from \(L_{0}\). We have added two stops on the boundary for convenience; these are labelled \(\circ\). We have also labelled the Reeb orbits connecting the Lagrangian arcs. Note that \(a_{0}=\alpha^{\prime}\alpha\) and \(b_{n}=\beta^{\prime}\beta^{\prime}\). The point \(p\) (marked with a \(\bullet\)) is an intersection point of \(L_{0}\) with \(\psi_{0}(L_{0})\). Two important polygonal regions \(A\) and \(B\) are shaded.
**SS3.5**: The advantage of inserting stops is that the partially wrapped Floer cohomology is easy to read off from Figure 4:
\[CF(\psi_{0}(L_{0}),L_{0}) =R\cdot p, CF(L_{0},\psi_{0}(L_{0})) =R\cdot p\,\oplus\,R\cdot\alpha\,\oplus\,R\cdot\beta,\] \[CF(\psi_{0}(L_{0}),L_{1}) =R\cdot\alpha^{\prime}, CF(L_{1},\psi_{0}(L_{0})) =R\cdot(\beta b_{0}),\] \[CF(\psi_{0}(L_{0}),L_{n}) =R\cdot\beta^{\prime}, CF(L_{n},\psi_{0}(L_{0})) =R\cdot(\alpha a_{n}).\]
All of these morphisms are in degree zero except for \(p\) which is in degree 1.
**SS3.6**: Consider the twisted complex
\[\mathbb{L}^{\prime}\coloneqq\left(L_{1}\oplus L_{n}\xrightarrow{(b_{0},a_{n} )}L_{0}\right)\]
and the morphisms \(q_{1}\colon\mathbb{L}^{\prime}\to\psi_{0}(L_{0})\) and \(q_{2}\colon\psi_{0}(L_{0})\to\mathbb{L}^{\prime}\) defined by6
Footnote 6: We will write twisted complexes horizontally and morphisms between them vertically.
We need to show that \(\mu_{2}^{Tw}(q_{1},q_{2})\) and \(\mu_{2}^{Tw}(q_{2},q_{1})\) are equal to the identity elements of \(CF(\psi_{0}(L_{0}),\psi_{0}(L_{0}))\) and \(CF(\mathbb{L}^{\prime},\mathbb{L}^{\prime})\) respectively (we are using Seidel's convention for composition, right-to-left). We compute \(\mu_{2}^{Tw}\) by stacking the morphisms and then taking all possible paths through the resulting diagram, composing wherever possible.
**SS3.7**: To calculate \(\mu_{2}^{Tw}(q_{2},q_{1})\), we have the following diagram:
There are several routes through the diagram connecting the top row to the bottom. There are two paths that involve three morphisms:
\(\begin{CD}L_{1}\oplus L_{n}@>{(b_{0},a_{n})}>{}>L_{0}\\ \left(\begin{matrix}\mu_{3}(\alpha^{\prime},p,b_{0})&\mu_{3}(\alpha^{\prime},p,a_ {n})\\ \mu_{3}(\beta^{\prime},p,b_{0})&\mu_{3}(\beta^{\prime},p,a_{n})\end{matrix} \right)\Bigg{\downarrow}\\ L_{1}\oplus L_{n}@>{(b_{0},a_{n})}>{}>L_{0}\end{CD}\)
There is also a path of length 2 connecting \(L_{0}\) to \(L_{1}\oplus L_{n}\) and one of length 4 connecting \(L_{1}\oplus L_{n}\) to \(L_{0}\). Both of these concatenations vanish for degree reasons.
**SS3.8**: The \(\mu_{3}\) products contributing to \(\mu_{2}^{Tw}(q_{2},q_{1})\) are:
\(\begin{CD}L_{1}\oplus L_{n}@>{(b_{0},a_{n})}>{}>L_{0}\\ \left(\begin{matrix}1&0\\ 0&1\end{matrix}\right)\Bigg{\downarrow}\\ L_{1}\oplus L_{n}@>{(b_{0},a_{n})}>{}>L_{0}\end{CD}\)
For example, the products \(\mu_{3}(\alpha^{\prime},p,b_{0})=1\) and \(\mu_{3}(\beta^{\prime},p,b_{0})=1\) come from the shaded polygons in Figure 4. To understand \(\mu_{3}(\alpha^{\prime},p,b_{0})=1\), we make partially wrapped perturbations (Figure 5). We are trying to compute
\[\mu_{3}\colon CF(\psi_{0}(L_{0})^{\prime},L_{1})\otimes CF(L_{0}^{\prime\prime },\psi_{0}(L_{0})^{\prime})\otimes CF(L_{1}^{\prime\prime\prime},L_{0}^{\prime \prime})\to CF(L_{1}^{\prime\prime\prime},L_{1}).\]
Here, each prime indicates that we have partially wrapped, and that we have wrapped _more_ when there are more primes. We see that the polygonal region \(A\) from Figure 4 becomes a quadrilateral with vertices at \(\alpha^{\prime}\), \(p\), \(b_{0}\), and at the unique intersection point \(L_{1}^{\prime\prime\prime}\cap L_{1}\) which represents \(1\in CF(L_{1}^{\prime\prime\prime},L_{1})\). The calculation of \(\mu_{3}(\beta^{\prime},p,a_{n})\) is similar.
**SS3.9**: The calculation of \(\mu_{3}(b_{0},\alpha^{\prime},p)+\mu_{3}(a_{n},\beta^{\prime},p)\) comes down to the same two polygons, but it is more subtle. Either polygon \(A\) contributes to \(\mu_{3}(a_{n},\beta^{\prime},p)=1\) and the other term is zero, or else polygon \(B\) contributes to \(\mu_{3}(b_{0},\alpha^{\prime},p)=1\). Which of these eventualities occurs depends on the choice of partial wrappings; see Figure 6.
**SS3.10**: To calculate \(\mu_{2}^{Tw}(q_{1},q_{2})\), we have the following diagram:
\(\psi_{0}(L_{0})\)\(L_{1}\oplus L_{n}\)\(\psi_{0}(L_{0})\)\(\psi_{0}(L_{0})\)\(L_{1}\oplus L_{n}\)\(\psi_{0}(L_{0})\)\(L_{0}\)\(\psi_
## 4 B-side
SS4.1 Setup.As in the introduction, let \(R=k[t_{0},\ldots,t_{n}]\), let \(\mathcal{Y}_{0}=\operatorname{Spec}R[u,v]/(uv-t_{0}\cdots t_{n}))\), and let \(f\colon\mathcal{Y}_{0}\to\mathbb{A}^{n+1}\) be the morphism given by \((t_{0},\ldots,t_{n})\). This morphism \(f\) is the versal deformation of the \(A_{n}\) curve singularity. We have a toric crepant resolution \(\pi\colon\mathcal{Y}\to\mathcal{Y}_{0}\) given by a triangulation of \([0,1]\times\Delta_{n}\).
### The Van den Bergh tilting bundle.
We now describe a tilting bundle on \(\mathcal{Y}\), making explicit the construction of Van den Bergh [38, Propositions 3.2.5, 3.2.10] in this example. Recall from SS1.9 that \(\mathcal{Y}\) is the GIT quotient \(V\mathbin{\mathchoice{\hbox{$\,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}}_{\theta}\,T\), where \(V\) is the space of 2-by-\((n+1)\) matrices
\[\begin{pmatrix}x_{0}&\ldots&x_{i}&x_{i+1}&\ldots&x_{n}\\ y_{0}&\ldots&y_{i}&y_{i+1}&\ldots&y_{n}\end{pmatrix}\]
and the torus \(T=\mathbb{G}_{m}^{n}\) acts as
\[\begin{pmatrix}\lambda_{1}x_{0}&\ldots&\lambda_{i}^{-1}\lambda_{i+1}x_{i}& \lambda_{i+1}^{-1}\lambda_{i+2}x_{i+1}&\ldots&\lambda_{n}^{-1}x_{n}\\ \lambda_{1}^{-1}y_{0}&\ldots&\lambda_{i}\lambda_{i+1}^{-1}y_{i}&\lambda_{i+1} \lambda_{i+2}^{-1}y_{i+1}&\ldots&\lambda_{n}y_{n}\end{pmatrix}\]
and \(\theta\) is the character \(\theta(\lambda_{1},\ldots,\lambda_{n})=\lambda_{1}\cdots\lambda_{n}\) of \(T\).
Given another character \(\chi\colon T\to\mathbb{C}^{*}\), we get a line bundle \((V\times\mathbb{C})\mathbin{\mathchoice{\hbox{$\,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}{\hbox{$ \,\vrule height 6.45pt width 0.4pt depth 0.0pt\kern-3.0pt/$}}}_{\theta}\,T\) over \(\mathcal{Y}\), where \(T\) acts with weight \(\chi\) on \(\mathbb{C}\). Let \(\mathcal{M}_{i}\) be the line bundle corresponding to the character \(\chi_{i}(\lambda_{1},\ldots,\lambda_{n})=\lambda_{i}\). The sections of \(\mathcal{M}_{i}\) are in bijection with the polynomials in the variables \(x_{i},y_{i}\) which have weight \(\chi\) under the action of \(T\). For example, \(x_{0}\) is a section of \(\mathcal{M}_{1}\) and \(y_{n}\) is a section of \(\mathcal{M}_{n}\).
**SS4.3**: **Lemma.** _The sections of \({\cal O}_{\cal Y}\) form a ring isomorphic to \(R[u,v]/(uv-t_{0}\cdots t_{n})\). The sections of \({\cal M}_{i}\) form a module over this ring which is generated by \(\sigma_{i}\coloneqq x_{0}\cdots x_{i-1}\) and \(\tau_{i}\coloneqq y_{i}\cdots y_{n}\)._
Note that since \(\pi_{*}{\cal O}_{\cal Y}={\cal O}_{{\cal Y}_{0}}\) we can think of \(H^{0}({\cal M}_{i})\) as an \({\cal O}_{\cal Y}\)-module or an \({\cal O}_{{\cal Y}_{0}}\)-module. It is isomorphic to the \(R[u,v]/(uv-t_{0}\cdots t_{n})\)-module \((u,t_{0}\cdots t_{i-1})\) by identifying \(\sigma_{i}\) with \(u\) and \(\tau_{i}\) with \(t_{0}\cdots t_{i-1}\).
_Proof._ Consider the monomial \(x_{0}^{c_{0}}\cdots x_{n}^{c_{n}}y_{0}^{d_{0}}\cdots y_{n}^{d_{n}}\). The condition that this defines a section of \({\cal O}_{\cal Y}\) is that \(c_{i}+d_{i+1}-c_{i+1}-d_{i}=0\) for all \(i=0,\ldots,n-1\). This implies that \(c_{0}-d_{0}=\cdots=c_{n}-d_{n}\). If this common value is positive then the monomial can be written as
\[t_{0}^{d_{0}}\cdots t_{n}^{d_{n}}u^{c_{0}-d_{0}}\]
otherwise it can be written as
\[t_{0}^{c_{0}}\cdots t_{n}^{c_{n}}v^{d_{0}-c_{0}}\]
where we are defining
\[u=x_{0}\cdots x_{n},\qquad v=y_{0}\cdots y_{n},\qquad t_{i}=x_{i}y_{i}\]
as in SS1.9. The argument for the sections of \({\cal M}_{i}\) is similar except one is left with an additional factor of \(x_{0}\cdots x_{i-1}\) or \(y_{i+1}\cdots y_{n}\) depending on whether \(c_{i}>d_{i}\) or \(d_{i+1}>c_{i}\).
**SS4.4**: **Lemma.** _Let \({\cal M}=\bigoplus_{i=1}^{n}{\cal M}_{i}\). Consider the \(n-1\) sections_
\[s_{1} =(\sigma_{1},\tau_{2},0,\ldots,0)\] \[s_{2} =(0,\sigma_{2},\tau_{3},0,\ldots,0)\] \[\vdots\] \[s_{n-1} =(0,\cdots,0,\sigma_{n-1},\tau_{n})\]
_These sections are everywhere linearly independent, and hence span a copy of the trivial bundle of rank \(n-1\) inside \({\cal M}\)._
_Proof._ At each point of \({\cal Y}\), the wedge product \(s_{1}\wedge s_{2}\wedge\cdots\wedge s_{n-1}\) has components
\[\tau_{2}\cdots\tau_{n},\] \[\sigma_{1}\tau_{3}\cdots\tau_{n},\] \[\sigma_{1}\sigma_{2}\tau_{4}\cdots\tau_{n},\] \[\vdots\] \[\sigma_{1}\cdots\sigma_{n-1}.\]
If the sections are linearly dependent somewhere then all of these components vanish at that point. Let \(j\) be minimal such that \(\sigma_{j}=0\); note that this implies \(x_{j}=0\). Since
\(\sigma_{1}\cdots\sigma_{j-1}\tau_{j+1}\cdots\tau_{n}=0\) we deduce that some \(\tau_{k}=0\) for \(k>j\), and for the maximal such \(k\) we have that \(y_{k}=0\). But the unstable locus for GIT is the union of the subvarieties \(\{x_{j}=y_{k}=0\}\) for \(0\leq j<k\leq n\), so on the GIT quotient \(\mathcal{Y}\) there are no points where these sections vanish simultaneously.
**SS4.5 Corollary**.: _Let \(\mathcal{L}\) be the quotient of \(\mathcal{M}\) by the trivial subbundle spanned by these sections. Then \(\mathcal{L}\) is an ample line bundle on \(\mathcal{Y}\) and \(\mathcal{V}\coloneqq\mathcal{O}_{\mathcal{Y}}\oplus\mathcal{M}\) is a tilting bundle._
Proof.: The quotient is a line bundle and is therefore determined by its first Chern class, which is in turn determined by its restriction to the curve \(\{t_{0}=\cdots=t_{n}=0\}\subset\mathcal{Y}\). This curve is a chain comprising \(n\) copies of \(\mathbb{P}^{1}\) which generate \(H_{2}(\mathcal{Y};\mathbb{Z})\) as well as two copies of \(\mathbb{A}^{1}\) at either end of the chain. The bundle \(\mathcal{M}_{i}\) restricts to the bundle \(\mathcal{O}(1)\) on the \(i\)th \(\mathbb{P}^{1}\) and to the trivial bundle on the other \(\mathbb{P}^{1}\)s, which means that \(\mathcal{L}\) restricts to \(\mathcal{O}(1)\) on all the \(\mathbb{P}^{1}\)s. Since the compact irreducible components of fibres of \(\pi\colon\mathcal{Y}\to\mathcal{Y}_{0}\) are chains of \(\mathbb{P}^{1}\)s homologous to the positive linear combinations of \(\mathbb{P}^{1}\)s in this chain, this implies that \(\mathcal{L}\) is relatively ample.
Since the bundles \(\mathcal{M}_{i}\) are toric line bundles generated by global sections, we have [12, Corollary on p.74]
\[\operatorname{Ext}^{j}(\mathcal{O}_{\mathcal{Y}},\mathcal{M}_{i})=0\text{ for all }j>0.\]
If we can show that \(\operatorname{Ext}^{1}(\mathcal{M}_{i},\mathcal{O}_{\mathcal{Y}})=0\) then we can use [38, Lemma 3.2.3] to deduce that \(\operatorname{Ext}^{*}(\mathcal{O}_{\mathcal{Y}}\oplus\mathcal{M},\mathcal{O }_{\mathcal{Y}}\oplus\mathcal{M})\) is supported in degree zero and argue as in [38, Proposition 3.2.5] to deduce that \(\mathcal{O}_{\mathcal{Y}}\oplus\mathcal{M}\) generates.
Tensoring with \(\mathcal{M}_{i}^{-1}\) we see that \(\operatorname{Ext}^{1}(\mathcal{O}_{\mathcal{Y}},\mathcal{M}_{i})\cong H^{1}( \mathcal{M}_{i}^{-1})\). By projecting to \((t_{0},\ldots,t_{n})\), we can view \(\mathcal{Y}\) as a family over \(\mathbb{A}^{n+1}\) which is the versal family of deformations of the nodal curve of the form \(\mathbb{A}^{1}\cup_{pt}\mathbb{P}^{1}\cup_{pt}\mathbb{P}^{1}\cup_{pt}\ldots \mathbb{P}^{1}\cup_{pt}\mathbb{A}^{1}\) with \(n+1\) nodes. Any other fiber \(C_{t}\) of this family is given by a nodal curve obtained from \(C_{0}\) by smoothing the nodes corresponding the non-zero component of \(t=(t_{0},\ldots,t_{n})\). The restriction of \(\mathcal{M}_{i}^{-1}\) to these curves gives a line bundle on \(C_{t}\) whose restriction to the rational components of \(C_{t}\) are either all trivial or in at most one component it restricts to \(\mathcal{O}(-1)\). In any case, \(H^{1}(\mathcal{M}_{i}^{-1}|_{C_{t}})=0\) for any \(t\), which then implies \(H^{1}(\mathcal{M}_{i}^{-1})=0\) as claimed.
**SS4.6 Corollary**.: _The derived category of \(\mathcal{Y}\) is quasi-equivalent to the derived category of modules over \(\mathcal{A}(T^{*}S^{1},D)\)._
Proof.: Since \(\mathcal{O}_{\mathcal{Y}}\oplus\mathcal{M}\) is a tilting object, the derived category of \(\mathcal{Y}\) is quasi-equivalent to the derived category of modules of \(\operatorname{End}_{\mathcal{Y}}(\mathcal{O}_{\mathcal{Y}}\oplus\mathcal{M})\). This can be computed directly via toric geometry. Indeed, we have \(\operatorname{End}_{\mathcal{Y}}(\mathcal{M}_{i},\mathcal{M}_{j})\cong H^{0}( \mathcal{M}_{j}\otimes\mathcal{M}_{i}^{-1})\) which, as in SS4.3, can be identified with the set of polynomials \(p\in k[x_{i},y_{j}]\) in the Cox ring such that \(p(\lambda\cdot x)=\chi_{-i,j}(\lambda)p(x)\) for all \(\lambda\in T\), where \(\chi_{-i,j}(\lambda_{1},\ldots,\lambda_{n})=\lambda_{i}^{-1}\lambda_{j}\). Assuming \(i\geq j\) without loss of generality, such polynomials are generated freely over \(R\) by
\[x_{i}x_{i+1}\ldots x_{n}x_{0}\ldots x_{j-1}u^{r},\qquad y_{j}y_{j+1}\ldots y_{ i-1}v^{s}\text{ for }r,s\in\mathbb{Z}_{\geq 0}.\]
Note that \(\operatorname{End}_{\mathcal{Y}}(\mathcal{M}_{i},\mathcal{M}_{i})\cong\mathcal{O}_{ \mathcal{Y}}\) itself is freely generated over \(R\) by \(\{1,u^{r},v^{s}:r,s\in\mathbb{Z}_{\geq 0}\}\).
The identification with the algebra \(\mathcal{A}(T^{*}S^{1},D)\) follows immediately by sending the above basis over \(R=k[t_{0},t_{1},\ldots,t_{n}]\) to
\[(a_{i}\ldots a_{n}a_{0}\ldots a_{j-1})\cdot(\sum_{i=0}^{n}a_{i}\ldots a_{n}a_ {0}\ldots a_{i-1})^{r},\qquad b_{j}b_{j+1}\ldots b_{i-1}\cdot(\sum_{i=0}^{n}b_ {i}\ldots b_{n}b_{0}\ldots b_{i-1})^{s}\]
Finally, multiplication is induced by the product in the ring of \(n\)-by-\(n\) matrices with entries in the Cox ring \(k[x_{i},y_{j}]\) which then gives the desired isomorphism with \(\mathcal{A}(T^{*}S^{1},D)\).
One can also perform this calculation entirely within the category of Cohen-Macaulay modules over \(\mathcal{O}_{\mathcal{Y}_{0}}\); for details, see the forthcoming work of Zhang [40].
### Corollary (Base-change).
_Let \(S\) be a finitely generated \(R\)-algebra. Let \(\mathcal{Y}_{S,0}=\operatorname{Spec}(\mathcal{O}_{\mathcal{Y}_{0}}\otimes_{R }S)\) and consider the diagram_
_where \(\mathcal{Y}_{S}\) is the fibre product. The pullback \(j^{*}\mathcal{V}\) is a tilting bundle on \(\mathcal{Y}_{S}\) with \(\operatorname{End}_{\mathcal{Y}_{S}}(j^{*}\mathcal{V})\cong\mathcal{A}(T^{*}S ^{1},D)\otimes_{R}S\). In particular, by SS1.7(B), the derived category of perfect modules on \(\mathcal{Y}_{S}\) inherits an action of \(\Gamma(T^{*}S^{1},D)\)._
Proof.: The map \(\mathcal{Y}\to\operatorname{Spec}(R)\) is a conic fibration over \(\mathbb{A}^{n+1}\) with equidimensional fibres and smooth (in particular, Cohen-Macaulay) total space, hence flat. The endomorphism bundle \(\operatorname{End}_{\mathcal{Y}}(\mathcal{V})\) is a locally free \(\mathcal{O}_{\mathcal{Y}}\)-module, so \(\mathcal{V}\) is flat over \(\operatorname{Spec}(R)\) by [6, Lemma 2.2]. By [6, Lemma 2.9], this implies that \(j^{*}\mathcal{V}\) is a tilting bundle with \(\operatorname{End}_{\mathcal{Y}_{S}}(j^{*}\mathcal{V})\cong g_{*}\operatorname {End}_{\mathcal{Y}_{S}}(j^{*}\mathcal{V})\cong i^{*}f_{*}\operatorname{End}_{ \mathcal{Y}}(\mathcal{V})\cong i^{*}\mathcal{A}(T^{*}S^{1},D)\cong\mathcal{A} (T^{*}S^{1},D)\otimes_{R}S\). This base-change formula is used in the proof of [6, Lemma 2.9] but can also be found in [22, Lemma 2.10] where the pullbacks are left-derived; in our case all the modules are either free or locally free, so derived pullback equals pullback.
## 5 A 1-d picture of a 3-d sphere
We conclude by discussing an example which displays how one can draw 1-dimensional pictures corresponding to sheaves on the higher dimensional mirrors. Let \(n=1\); in this case \(\mathcal{Y}\) is the usual small-resolved conifold which is the total space of the vector bundle
\(\mathcal{O}(-1)\oplus\mathcal{O}(-1)\) over \(\mathbb{P}^{1}\). The pushforward of the structure sheaf of \(\mathbb{P}^{1}\) is well-known to be a 3-spherical object \(S\) in \(D^{b}\operatorname{coh}(\mathcal{Y})\). It can be resolved by line bundles as follows:
\[\mathcal{O}(2)\xrightarrow{(y_{0},-x_{1})}\mathcal{O}(1)^{\oplus 2}\xrightarrow{(x_{1}, y_{0})}\mathcal{O}\]
and \(\mathcal{O}(2)\) in turn is equivalent to \(\mathcal{O}\xrightarrow{(x_{0},y_{1})}\mathcal{O}(1)^{\oplus 2}\), where \(\mathcal{O}(i)\) denote the line bundles on \(\mathcal{Y}\) with degree \(i\) on \(\mathbb{P}^{1}\). We can, therefore, express the mirror to the 3-spherical object \(S\), in terms of the generators of \(\mathcal{W}(T^{*}S^{1},D)\) and then work out, using the surgery exact triangle on the \(A\)-side, which immersed Lagrangian it corresponds to. In Figure 7, the thick curve is this immersed Lagrangian. Note that this immersed Lagrangian is unobstructed: it does bound four "teardrops" (monogons) which would contribute to the curved \(A_{\infty}\)-operation \(\mu_{0}\), but these appear in cancelling pairs passing through the same marked point (and hence weighted by the same variable).
The gray curve is a small pushoff. The Floer complex between these two curves has eight generators, living in the following degrees:
\[\begin{array}{c|cccccc}\text{degree}&-2&-1&0&1&2&3\\ \text{generators}&y&x,z&e&m&\overline{x},\overline{z}&\overline{y}\end{array}\]
The Floer differential can be computed as follows:
\[\begin{array}{l}\partial y=t_{1}z-t_{0}x,\qquad\qquad\qquad\qquad\partial x=t _{1}e,\qquad\qquad\qquad\qquad\partial z=t_{0}e\\ \partial e=1,\qquad\qquad\qquad\qquad\partial m=t_{1}\overline{x}-t_{0} \overline{z}\\ \partial\overline{x}=t_{1}\overline{y},\qquad\qquad\qquad\qquad\partial\overline {z}=t_{0}\overline{y},\qquad\qquad\qquad\partial\overline{y}=1.\end{array}\]
which yields cohomology of \(k[t_{0},t_{1}]/(t_{0},t_{1})=k\) in degrees 0 and 3.
It is also possible to verify directly that this immersed Lagrangian corresponds to a simple module of \(\mathcal{A}(T^{*}S^{1},D)\) dual to \(L_{0}\).
Figure 7: A 3-spherical object in \(\mathcal{W}(T^{*}S^{1},D)\) where \(|D|=2\). The gray curve is a small pushoff, used to compute the Floer complex.
Derived contraction algebra
**SS6.1** Let \({\cal Y}_{0}\) be a 3-fold compound Du Val singularity admitting a small resolution \({\cal Y}\). The _derived contraction algebra_ of \({\cal Y}\) is an enhancement of the contraction algebra \(\Lambda\) of Donovan and Wemyss [10] in the sense that \(\Lambda=H^{0}(\Gamma)\). The derived contraction algebra can be understood as the Drinfeld localisation of the endomorphism algebra \({\rm End}({\cal V})\) of the tilting bundle on \({\cal Y}\) with respect to the idempotent \(e={\rm id}_{{\cal O}_{\cal Y}}\) corresponding to the structure sheaf \({\cal O}_{\cal Y}\). Recall that the Drinfeld localisation is given by
\[{\rm End}({\cal V})_{e}={\rm End}({\cal V})\langle\epsilon\rangle/(\epsilon e= e\epsilon=\epsilon,\,d\epsilon=e),\]
that is we freely introduce an element \(\epsilon\) to \({\rm End}({\cal O}_{\cal Y})\) of degree \(-1\) with \(d\epsilon=e\). This kills the corresponding object in \(D^{b}({\rm End}({\cal V}))\simeq D^{b}({\cal Y})\) after localisation:
\[D^{b}({\rm End}({\cal V})_{e})\simeq D^{b}({\cal Y})/\langle{\cal O}_{\cal Y}\rangle.\]
**SS6.2** Let us consider the case of a compound \(A_{N}\) singularity. Recall that in this case we have a 3-fold singularity given by \(uv=f_{0}(x,y)f_{1}(x,y)\ldots f_{n}(x,y)\). The relative Fukaya category is derived equivalent to the algebra \({\cal A}(T^{*}S^{1},D)\otimes_{R}S\) where \(S\coloneqq k[x,y]\) is viewed as an \(R\)-algebra by the homomorphism \(t_{i}\to f_{i}(x,y)\). By Corollary SS4.7, \({\cal A}(T^{*}S^{1},D)\otimes_{R}S\) is isomorphic to the algebra \({\rm End}_{Y_{S}}(j^{*}{\cal V})\) of endomorphisms of the tilting bundle \(j^{*}V={\cal O}_{Y_{S}}\oplus j^{*}{\cal M}\). Hence the derived contraction algebra is given by
\[\left({\cal A}(T^{*}S^{1},D)\otimes_{R}S\right)_{e_{0}},\quad e_{0}={\rm id}_ {L_{0}}\,.\]
That is, the localisation of \(D^{b}({\cal Y}_{S})\) away from \({\cal O}_{Y_{S}}\) corresponds to localisation away from the Lagrangian \(L_{0}\) in the relative Fukaya category \({\cal W}(T^{*}S^{1},D)\otimes_{R}S\). In the remainder of this section, we will give an alternative, more geometric, description of the derived contraction algebra in terms of the relative Fukaya category of a disc.
**SS6.3** Theorem.: _Let \(\Delta\) be the disc obtained by excising \(L_{0}\) from \(T^{*}S^{1}\) (Figure 8). The derived contraction algebra of a 3-fold compound \(A_{n}\) singularity is quasi-equivalent to the endomorphism algebra of \(\bigoplus_{i=1}^{n}L_{i}\) in the relative Fukaya category \({\cal W}(\Delta,D)\otimes_{R}S\)._
Proof.: We can think of the annulus \(T^{*}S^{1}\) as the result of attaching a Weinstein 1-handle to the disc, with \(L_{0}\) as the cocore of the handle. By Ganatra, Pardon and Shende [14, Proposition 11.2], this means that the localisation
\[\left({\cal W}(T^{*}S^{1},D)\otimes_{R}S\right)/\langle L_{0}\rangle\]
is quasi-equivalent to the relative Fukaya category of the disc \(\Delta\) we get by excising \(L_{0}\) from \(T^{*}S^{1}\). This proves the theorem.
### A model for the derived contraction algebra.
We now give a model for the \(A_{\infty}\)-algebra \(\operatorname{End}_{\mathcal{W}(\Delta,D)}\left(\bigoplus_{i=1}^{n}L_{i}\right)\). This can be calculated directly. It is given by taking the \(R\)-linear path algebra of the following quiver
imposing the relations (coming from the quadrilaterals with boundary \(b_{i}\cup L_{i}\cup a_{i}\cup L_{i+1}\) in \(\Delta\)):
\[b_{i}a_{i}=t_{i}e_{i},\ a_{i}b_{i} =t_{i}e_{i+1},\text{ for }i=1,\dots,n-1,\] \[\alpha^{2} =0,\qquad\beta^{2}=0,\]
and defining the differential (coming from the bigons with boundary \(\alpha\cup L_{1}\) and \(L_{n}\cup\beta\)) by
\[da_{i}=db_{i}=0\text{ for }i=1,\dots,n-1,\qquad d\alpha=t_{0}e_{1},\quad d \beta=t_{n}e_{n},\]
extending to longer paths by the graded Leibniz rule. Note that \(a_{i},b_{i}\), \(i=1,\dots,n-1\), are in degree zero whilst \(\alpha\) and \(\beta\) are in degree \(-1\).
To see that there are no higher products, we appeal to a Maslov index calculation of Ozsvath and Szabo [29, Proposition 6.2] who studied these relative categories in the context of Heegaard-Floer theory (where it is called the _pong algebra_). A rigid \((k+1)\)-gon contributing to a \(\mu_{k}\) ioperation has Maslov index \(2-k\); Ozsvath and Szabo show that the Maslov index of a holomorphic disk \(u\) with boundaries on \(L_{1},\dots,L_{n}\) is given by
Figure 8: Relative Fukaya category of the disc as a localisation.
\(\mathrm{mult}(u,z_{1})+\mathrm{mult}(u,z_{n})\), which is non-negative since \(u\) is holomorphic. It follows that \(k\leq 2\). A similar argument appears in [3, Proposition 3.6].
SS6.5 Remark.The relative wrapped Fukaya category \(\mathcal{W}(\Delta,D)\) is acted on by its center given by its Hochschild cohomology which can be identified with the symplectic cohomology \(SH(\Delta,D)\). There is a closed orbit \(\eta\) that corresponds to the boundary of \(\Delta\) which has degree \(2\). Thus, \(\mathcal{W}(\Delta,D)\) can be seen as a category over \(k[\eta]\). This recovers the familiar structure of the derived contraction algebra studied in detail in [17, Section 6].
SS6.6 Example.We can compute the case where \(n=1\) and \(f_{0}=x,f_{1}=y\). This corresponds to the conifold singularity. We get that \(\Gamma=k[x,y]\langle\alpha,\beta\rangle\) with \(\alpha^{2}=\beta^{2}=0\), \(d\alpha=x\) and \(d\beta=y\). It is easy to determine that \(H^{*}(\Gamma)=k[\eta]\) with \(\eta=\alpha\beta+\beta\alpha\) of degree \(-2\). This coincides with Booth's calculation [5, Section 4.2].
SS6.7 Example.Consider the Pagoda flop \(f_{0}=y+x^{n}\), \(f_{1}=y-x^{n}\). Our model for the derived contraction algebra gives
\[k[x,y]\langle\alpha,\beta\rangle/(\alpha^{2},\beta^{2}),\qquad d\alpha=y+x^{n},\,d\beta=y-x^{n}.\]
Assuming we are not in characteristic \(2\), we can define
\[\zeta_{1}=(\alpha+\beta)/2,\qquad\zeta_{2}=(\alpha-\beta)/2\]
so that \(d\zeta_{1}=y\) and \(d\zeta_{2}=x^{n}\). This DG-algebra is isomorphic to the graded commutative algebra
\[k[x,y,\zeta_{1},\zeta_{2}]/(\zeta_{1}^{2}+\zeta_{2}^{2}),\qquad d\zeta_{1}=y, \,d\zeta_{2}=x^{n}\]
Now, it is easy to see that the map from
\[k[x,\zeta],\qquad d\zeta=x^{n}\]
sending \(\zeta\to\zeta_{2}\) and \(x\to x\) is a quasi-isomorphism. This latter model for the derived contraction algebra of the Pagoda flop is given by Booth in [5, Lemma 4.3.8]. Note that in characteristic \(2\), the class \(x^{n}\in H^{0}(\Gamma)\) is non-trivial, so the assumption on characteristic is important here.
SS6.8 Example.Consider the \(3\)-fold \(uv=xy(x^{2}+y^{3})\). This has six different partial resolutions corresponding to different permutations of
\[f_{1}=x,f_{2}=x^{2}+y^{3},f_{3}=y.\]
We just focus on this particular choice and compare the answer our model gives for \(\Lambda=H^{0}(\Gamma)\) with that computed by August [2, Example 4.5, Figure 2]. Our model gives an algebra over \(k[x,y]\) described by the following quiver:
\(\alpha\)\(\alpha\)\(\beta\)
Lagrangians \(L_{i}\) go to line bundles \({\cal L}_{i}\) on \(C\). In particular, one can arrange that \({\cal L}_{0}\) is the trivial bundle (i.e. the structure sheaf \({\cal O}_{C}\)).
In the case \(n=0\), the mirror curve \(C\) is simply the _affine_ curve \({\mathbb{A}}^{1}\cup_{pt}{\mathbb{A}}^{1}=\operatorname{Spec}k[x,y]/(xy)\), and the category \(D^{b}\operatorname{coh}(C)\) is quasi-equivalent to the derived category of modules over \(\operatorname{End}({\cal O}_{C})\). The subcategory of perfect objects is then generated by \(\operatorname{End}({\cal O}_{C})\) itself [36, Lemma 15.78.1].
For higher \(n\), there is an \(n+1\)-fold covering map \(\pi\colon T^{*}S^{1}\setminus D\to T^{*}S^{1}\setminus\{p\}\) which respects the grading. The graph of \(\pi\) is a Lagrangian submanifold of \((T^{*}S^{1}\setminus D)^{-}\times(T^{*}S^{1}\setminus\{p\})\) (where \({}^{-}\) indicates that we reverse the sign of the symplectic form on this factor). This induces triangulated \(A_{\infty}\) quilt functors
\[\pi_{*}\colon{\cal W}(T^{*}S^{1}\setminus D)\to{\cal W}(T^{*}S^{1}\setminus\{ p\})\quad\text{respectively}\quad\pi^{*}\colon{\cal W}(T^{*}S^{1}\setminus\{p\})\to{ \cal W}(T^{*}S^{1}\setminus D).\]
Geometrically, a Lagrangian brane is sent under \(\pi_{*}\), respectively \(\pi^{*}\), to its (possibly immersed) image, respectively preimage, under \(\pi\). These functors restrict to give functors
\[\pi_{*}\colon{\cal B}(D)\to{\cal B}(p)\quad\text{respectively}\quad\pi^{*} \colon{\cal B}(p)\to{\cal B}(D).\]
Given an object of \({\cal B}(D)\), it follows as in [32, Section 9] that the object \(\pi^{*}\pi_{*}(L)\) is the sum \(\bigoplus_{g\in G}g(L)\) where \(G\) is the deck group of the covering map \(\pi\).
Write \(L_{0},\ldots,L_{n}\) for the arcs in \(T^{*}S^{1}\setminus D\) and \(\bar{L}_{0}\) for the arc in \(T^{*}S^{1}\setminus\{p\}\). By the \(n=0\) case of the proposition, if \(L\in{\cal B}(D)\) then \(\pi_{*}(L)\) is generated by \(\bar{L}_{0}\subset T^{*}S^{1}\setminus\{p\}\). Therefore \(\bigoplus_{G}g(L)\) is generated by \(\pi^{*}\bar{L}_{0}=\bigoplus_{i=0}^{n}L_{i}\), and since \(L\) is a summand of \(\bigoplus_{g\in G}g(L)\), we see that \(L\) is split-generated by \(\bigoplus_{i=0}^{n}L_{i}\), as required.
S a.2 Remark.Obviously, the Lagrangians \(L_{0},\ldots,L_{n}\) do not generate \({\cal W}(T^{*}S^{1}\setminus D)\), since the Lagrangian branes that are allowed in \({\cal W}(T^{*}S^{1}\setminus D)\) can have ends near the punctures along \(D\).
S a.3 Proposition (Generation with coefficients)._Let \(L\) be an object of \({\cal W}(T^{*}S^{1},D)\). If \(L\) generates \({\cal W}_{0}(T^{*}S^{1},D):={\cal W}(T^{*}S^{1},D)\otimes_{R}R/{\mathfrak{m}}\) then it also generates the relative wrapped category with coefficients in \(\bar{R}\), that is \({\cal W}(T^{*}S^{1},D)\otimes_{R}\bar{R}\)._
As a corollary, the category \({\cal W}(T^{*}S^{1},D)\otimes_{R}\bar{R}\) is split-generated by the Lagrangian arcs \(L_{0},\ldots,L_{n}\). The proof of this proposition will take up the rest of the appendix.
S a.4 Proof.Let
\[\bar{\cal A}=\operatorname{End}_{{\cal W}(T^{*}S^{1},D)}(L)\otimes_{R}\bar{R},\qquad{\cal A}_{0}=\operatorname{End}_{{\cal W}_{0}(T^{*}S^{1},D)}(L)= \operatorname{End}_{{\cal W}(T^{*}S^{1},D)}(L)\otimes_{R}R/{\mathfrak{m}}.\]
We have Yoneda-type functors
\[\bar{Y}\colon{\cal W}(T^{*}S^{1},D)\otimes_{R}\bar{R}\to\operatorname{mod}( \bar{\cal A})\]
\[Y_{0}\colon\mathcal{W}_{0}(T^{*}S^{1},D)\to\operatorname{mod}(\mathcal{A}_{0}).\]
The module \(Y_{0}(L)=\mathcal{A}_{0}\) (respectively \(\bar{Y}(L)=\bar{\mathcal{A}}\)) generates the subcategory \(\operatorname{perf}(\mathcal{A}_{0})\) (respectively \(\operatorname{perf}(\bar{\mathcal{A}})\)) of perfect objects. Since \(L\) generates \(\mathcal{W}_{0}(T^{*}S^{1},D)\), the functor \(Y_{0}\) lands in \(\operatorname{perf}(\mathcal{A}_{0})\) and corestricts to give a quasi-equivalence
\[Y_{0}\colon\mathcal{W}_{0}(T^{*}S^{1},D)\to\operatorname{perf}(\mathcal{A}_{0})\]
(i.e. the induced functor on homotopy categories is fully faithful and essentially surjective). We want to show that
1. \(\bar{Y}\) lands in \(\operatorname{perf}(\bar{\mathcal{A}})\);
2. the induced functor \(H(\bar{Y})\) on homotopy categories is (i) essentially surjective and (ii) fully faithful.
SSA.5 Proof of (a):The subcategory \(\operatorname{perf}(\bar{\mathcal{A}})\subset\operatorname{mod}(\bar{ \mathcal{A}})\) is precisely the triangulated subcategory of compact objects (see for example [36, Proposition 15.78.3]). An object \(C\) in a pre-triangulated \(A_{\infty}\) category is compact if and only if the functor it corepresents \(\hom(C,\ \ )\) preserves coproducts, that is,
\[\oplus_{i}\hom(C,E_{i})=\hom(C,\oplus_{i}E_{i})\]
for arbitrary direct sums \(\oplus_{i}E_{i}\). So it suffices to show that if \(K\in\mathcal{W}(T^{*}S^{1},D)\otimes_{R}\bar{R}\) is an object then
\[\oplus_{i}\hom_{\operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),E_{i} \right)=\hom_{\operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),\oplus_ {i}E_{i}\right)\]
for arbitrary direct sums \(\oplus_{i}E_{i}\) in \(\operatorname{mod}(\bar{\mathcal{A}})\).
The complexes \(\oplus_{i}\hom_{\operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),E_{i}\right)\) and \(\hom_{\operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),\oplus_{i}E_{i}\right)\) are complete filtered \(\bar{R}\)-modules with the filtration coming from the action of powers of the maximal ideal; the canonical map
\[\oplus_{i}\hom_{\operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),E_{i} \right)\to\hom_{\operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),\oplus _{i}E_{i}\right)\] (A.1)
is a morphism of filtered complexes. There are therefore spectral sequences computing both sides, and a morphism of spectral sequences induced by (A.1). By the Eilenberg-Moore comparison theorem, it suffices to check that this morphism is an isomorphism on the \(E_{0}\) pages. Note that Eilenberg-Moore requires completeness of the filtration, which is why we are working over \(\bar{R}\) instead of \(R\).
The \(E_{0}\) pages are respectively
\[E_{0}^{pq}=\oplus_{i}\hom_{\operatorname{mod}(\mathcal{A}_{0})}^{p+q}\left(Y_ {0}(K),\operatorname{gr}^{p}(E_{i})\right)\text{ and }E_{0}^{pq}=\hom_{ \operatorname{mod}(\mathcal{A}_{0})}^{p+q}\left(Y_{0}(K),\oplus_{i} \operatorname{gr}^{p}(E_{i})\right)\]
where \(\operatorname{gr}^{p}\) denotes the \(p\)th graded piece of the associated graded module. The morphism on \(E_{0}\)-pages is induced by the canonical map
\[\oplus_{i}\hom_{\operatorname{mod}(\mathcal{A}_{0})}\left(Y_{0}(K), \operatorname{gr}(E_{i})\right)\to\hom_{\operatorname{mod}(\mathcal{A}_{0})} \left(Y_{0}(K),\oplus_{i}\operatorname{gr}(E_{i})\right).\]
Since \(Y_{0}(K)\) is perfect, this is an isomorphism, which proves (a).
### Proof of (b.i):
We have \(\bar{\mathcal{A}}=\bar{Y}(L)\), and since \(\bar{\mathcal{A}}\) generates \(\operatorname{perf}(\bar{\mathcal{A}})\), the essential image of \(\bar{Y}\) in \(\operatorname{mod}(\bar{\mathcal{A}})\) contains \(\operatorname{perf}(\bar{\mathcal{A}})\).
### Proof of (b.ii):
Given objects \(K,K^{\prime}\in\mathcal{W}(T^{*}S^{1},D)\otimes_{R}\bar{R}\), the complexes
\[CF(K,K^{\prime})\otimes_{R}\bar{R}\quad\text{ and }\quad\operatorname{hom}_{ \operatorname{mod}(\bar{\mathcal{A}})}\left(\bar{Y}(K),\bar{Y}(K^{\prime})\right)\]
are filtered by powers of the maximal ideal. These filtrations give us spectral sequences and the functor \(\bar{Y}\) gives a map of filtered complexes \(CF(K,K^{\prime})\otimes_{R}\bar{R}\to\operatorname{hom}_{\operatorname{mod}( \bar{\mathcal{A}})}\left(\bar{Y}(K),\bar{Y}(K^{\prime})\right)\) and hence a morphism of spectral sequences. On the \(E_{1}\) page this is just the map
\[H\left(\operatorname{hom}_{\mathcal{W}_{0}(T^{*}S^{1},D)}(K,K^{\prime}) \right)\otimes_{R}\operatorname{gr}(\bar{R})\to H\left(\operatorname{hom}_{ \operatorname{mod}(\mathcal{A}_{0})}(Y_{0}(K),Y_{0}(K^{\prime}))\right)\otimes _{R}\operatorname{gr}(\bar{R})\]
induced from \(H(Y_{0})\colon H\left(\operatorname{hom}_{\mathcal{W}_{0}(T^{*}S^{1},D)}(K,K^{ \prime})\right)\to H\left(\operatorname{hom}_{\operatorname{mod}(\mathcal{A}_ {0})}(Y_{0}(K),Y_{0}(K^{\prime}))\right)\) (because any polygons which pass through the marked points have their contributions weighted by an element of \(\mathfrak{m}\)). This is an isomorphism because \(Y_{0}\) is cohomologically full and faithful. The Eilenberg-Moore comparison theorem then implies that the map \(H(\bar{Y})\) is an isomorphism, which proves that \(\bar{Y}\) is cohomologically full and faithful.
|
2303.14461 | Indian Language Summarization using Pretrained Sequence-to-Sequence
Models | The ILSUM shared task focuses on text summarization for two major Indian
languages- Hindi and Gujarati, along with English. In this task, we experiment
with various pretrained sequence-to-sequence models to find out the best model
for each of the languages. We present a detailed overview of the models and our
approaches in this paper. We secure the first rank across all three sub-tasks
(English, Hindi and Gujarati). This paper also extensively analyzes the impact
of k-fold cross-validation while experimenting with limited data size, and we
also perform various experiments with a combination of the original and a
filtered version of the data to determine the efficacy of the pretrained
models. | Ashok Urlana, Sahil Manoj Bhatt, Nirmal Surange, Manish Shrivastava | 2023-03-25T13:05:54Z | http://arxiv.org/abs/2303.14461v1 | # Indian Language Summarization using Pretrained Sequence-to-Sequence Models
###### Abstract
The ILSUM shared task focuses on text summarization for two major Indian languages- Hindi and Gujarati, along with English. In this task, we experiment with various pretrained sequence-to-sequence models to find out the best model for each of the languages. We present a detailed overview of the models and our approaches in this paper. We secure the first rank across all three sub-tasks (English, Hindi and Gujarati). This paper also extensively analyzes the impact of k-fold cross-validation while experimenting with limited data size, and we also perform various experiments with a combination of the original and a filtered version of the data to determine the efficacy of the pretrained models.
Indian language summarization, Sequence-to-Sequence models, Multilingual models, 2022
languageresourceLanguage Resources Center, KCIS, IIIT Hyderabad, India
## 1 Introduction
Automatic text summarization is a technique for obtaining a condensed version of a long document while retaining its relevance. The NLP community has become more interested in text summarization for Indian languages in recent years. The progress of text summarization has, however, been hindered due to the lack of high-quality datasets. Nevertheless, the availability of large-scale multilingual datasets such as XL-Sum[1] and MassiveSumm[2] have led to substantial progress in natural language generation and summarization tasks. Even though quality-wise, these datasets are far from perfect[3], they do serve as a good starting point in terms of quantity. Additionally, recent advancements in neural-based pretrained models have transformed the field significantly.
The goal of the ILSUM shared task is to create reusable corpora for Indian language summarization. The dataset is created by scraping the news articles and corresponding descriptions from publicly available news websites. ILSUM data[4, 5] consists of a summarization corpus for two major Indian languages- Hindi and Gujarati, along with Indian English.
This paper provides a comprehensive overview of the existing sequence-to-sequence models for Indian language and English summarization. For Hindi and Gujarati, we used multilingual models such as MT5[6], MBart[7] and IndicBART[8] variants. We fine-tuned the PEGASUS[9], BART[10], T5[11] and ProphetNet[12] models on English data. Out of all the models, for English, PEGASUS outperformed others, while for Hindi, MT5 gave us the best results, and for
Gujarati, MBart performed the best. In order to avoid overfitting, we have performed k-fold cross-validation on the training dataset. We have observed that Hindi k-fold experiments had better scores than the experiments performed with the full version of the released data. We have applied several filters to assess the quality of the released datasets. Various combinations of filtered and original data were used to determine the efficacy of the pretrained generation models. We talk about our models, experiments and dataset filters later in this paper.
## 2 Related Work
Text summarization has been studied extensively, especially in the English language. Early research in summarization focused on extractive approaches, wherein summary sentences were chosen directly from the input text. On the other hand, abstractive approaches to summarization, such as neural attention models[13], Seq2Seq RNNs [14], Pointer-Generator networks [15] focus on generating summaries that capture the meaning of the input text without necessarily choosing sentences directly from the text. With the emergence of large neural language models for generation tasks, abstractive approaches have become more popular and generate high-quality summaries. While there have been various improvements in model architectures and summarization techniques, a large part of the progress in English text summarization can be attributed to the availability of large-scale datasets, such as CNN/DailyMail[14, 16], Gigaword[13, 17], XSum[18], etc.
This is in contrast to Indic languages, where little work has been done in summarization or related NLG tasks, such as headline generation. In recent times, however, there has been active research in this area, with the release of datasets such as XL-Sum[1], MassiveSumm[2], etc. These multilingual datasets consist of article-summary pairs from publicly available news domains, including Indian languages such as Hindi, Gujarati, Bengali, etc. The IndicNLG Suite[19] released datasets for several Indic language NLG tasks, such as sentence summarization and headline generation. More work needs to be done in this area to have models comparable to English summarization models in performance.
## 3 Corpus Description
The dataset released for this task has been collected from several leading Indian newspaper websites. The English and Hindi datasets were scraped from indiatvnews1, and the Gujarati data was created by scraping the divyabhaskar2 and gujarati.news183 websites. The Hindi and Gujarati datasets include articles/summaries which contain English words or phrases which have been code-mixed and script-mixed. Note that we have observed a few samples of English and Gujarati datasets, where the summaries consists of only one word. The ILSUM training data statistics are mentioned in Table 1. We have used the Indic[20] tokenizer to generate the counts in Table 1.
## 4 Model Description
The pretrained language models (PLMs) used for downstream tasks are pretrained using massive amounts of unlabeled text data. A PLM encodes extensive linguistic knowledge into a vast amount of parameters[21], which stimulates universal representations and improves generation quality. We have experimented with various pretrained generation models to find the optimal architecture.
**T5**[11] model proposes defining every NLP task in a text-to-text format. The model consists of an encoder-decoder Transformer architecture finetuned on the C4 corpus. In our experiments, we use both the T5-Base (220M parameters) and T5-Large (770M parameters) versions of the model. Since T5 is trained on an English-only dataset, we also look at the multilingual variants of the model for our experiments in Hindi and Gujarati. The MT5 model[6] uses an architecture very similar to T5, and is trained on 101 languages, as described in the mC4 dataset. Owing to the large size of the models, we only finetuned the base version (580M parameters) of the MT5 model (the large version has 1.2B parameters).
**BART**[10] is a denoising autoencoder for pretraining seq2seq models, which is similar to both BERT and GPT. Since it uses a bidirectional encoder like BERT, and an autoregressive decoder like GPT. The model was trained by corrupting the text using a noising function, and reconstructing the original text. We experiment with the BART-large model (406M parameters), and then also try out versions of the BART model finetuned on different datasets, namely the BART-Large-CNN and BART-Large-XSUM model, finetuned on the CNN-Daily Mail and XSUM datasets respectively. We try out multilingual variants[7] of the BART model for Hindi and Gujarati summarization experiments, namely the MBart-Large-50 (610M parameters) model[22], trained on 50 languages.
**PEGASUS**[9] uses the extracted gap sentences (GSG) self-supervised objective strategy to train the encoder-decoder model. Rather than masking a smaller text span as in BART and T5, PEGASUS masks the entire sentence. Later, it concatenates the gap sentences into pseudo summaries. It chooses the sentences based on importance. In the same way as T5, PEGASUS does not reconstruct full sequence of inputs but only masked sentences. The pretraining is performed with C4[11] and HugeNews corpus. We finetune the PEGASUS-large model on the ILSUM English corpus.
**BRIO**[23] is a novel training paradigm to achieve neural abstractive summarization, wherein a contrastive learning component is introduced to reinforce the abstractive model's ability to estimate the probability of system-generated summaries more precisely instead of using MLE
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{**English**} & \multicolumn{2}{|c|}{**Hindi**} & \multicolumn{2}{|c|}{**Gujarati**} \\ \hline \#Pairs & \multicolumn{2}{|c|}{12564} & \multicolumn{2}{|c|}{7957} & \multicolumn{2}{|c|}{8457} \\ \hline & Text & Summary & Text & Summary & Text & Summary \\ \hline \#Avg Words & 595 & 36.24 & 553 & 40.17 & 414.43 & 32.26 \\ \hline (Min, Max) Words & (1, 5717) & (1, 113) & (17, 5034) & (6, 113) & (25, 2839) & (1, 408) \\ \hline \#Avg Sentences & 10.29 & 1.26 & 18.1 & 1.7 & 21.28 & 1.57 \\ \hline (Min, Max) Sentences & (1, 169) & (1, 17) & (1, 157) & (1, 9) & (1, 187) & (1, 46) \\ \hline \end{tabular}
\end{table}
Table 1: ILSUM Train Data Statistics
training alone. Two stages are involved in this approach: the first stage generates the candidates using a pretrained sequence-to-sequence model, and next stage selects the best one.
**ProphetNet**[12] introduced a novel self-supervised objective, wherein the goal is to predict the next-\(n\) tokens, instead of just optimizing for one-step ahead predictions. We experiment with ProphetNet in our English summarization experiments.
**IndicBART**[8] is a pretrained sequence-to-sequence model trained on 11 Indic languages and English. It follows the masked span reconstruction objective similar to MBart. In contrast to available generation models, IndicBART utilizes the orthographic similarity between the Indian languages to achieve better cross-lingual transfer learning capabilities. This model size (244M) is much smaller than MBart and MT5 models with compact vocabulary. We finetune the IndicBART model on Hindi and Gujarati datasets.
**Adapters:** Recently proposed lightweight adapters[24] are effective at mitigating the overhead of pretrained language models for downstream tasks. We can update the adapters during finetuning and freezing most of the PLM parameters. In recent work[25], adapters were applied to perform Gujarati text summarization. Adapters can not only speed up training time but are also storage efficient since they require saving only adapter weights instead of entire finetuned model weights.
\begin{table}
\begin{tabular}{c|c|c|c c c} \hline & & & \multicolumn{3}{c}{**Validation Scores**} \\ \hline
**Lang** & **Model** & **Full Data / k-fold** & **R-1** & **R-2** & **R-4** \\ \hline \multirow{8}{*}{English} & PEGASUS & Full Data & **56.85** & **45.92** & **43.36** \\ & T5\({}_{large}\) & Full Data & 56.05 & 45.03 & 42.36 \\ & BART\({}_{large}\) & k-fold & 54.83 & 43.58 & 40.71 \\ & PEGASUS xsum & Full Data & 54.66 & 43.48 & 40.64 \\ & BRIO & Full Data & 53.57 & 41.86 & 38.81 \\ & BART\({}_{large}\) xsum & k-fold & 53.35 & 41.74 & 38.75 \\ & T5\({}_{base}\) + Adapter & k-fold & 51.91 & 40.07 & 37.1 \\ & ProphetNet & k-fold & 49.51 & 36.98 & 33.83 \\ \hline \multirow{8}{*}{Hindi} & IndicBART & k-fold & **60.73** & **51.26** & **47.57** \\ & MT5\({}_{base}\) & k-fold & 60.04 & 50.72 & 46.82 \\ & MT5\({}_{base}\)* & Full Data & 58.65 & 49.09 & 45.08 \\ & IndicBART-SentSumm & k-fold & 58.09 & 47.99 & 43.72 \\ & MBart\({}_{large}\)50 + Adapters & Full Data & 56.26 & 45.56 & 41.21 \\ & MBart\({}_{large}\)50 & Full Data & 55.76 & 44.96 & 40.59 \\ \hline \multirow{8}{*}{Gujarati} & MBart\({}_{large}\)50 & Full Data & **26.20** & **16.44** & **12.16** \\ & MT5\({}_{base}\) & Full Data & 25.11 & 15.81 & 11.68 \\ \cline{1-1} & MT5\({}_{base}\)* & Full Data & 24.16 & 14.68 & 10.79 \\ \cline{1-1} & IndicBART & k-fold & 23.38 & 13.34 & 9.35 \\ \cline{1-1} & MBart\({}_{large}\)50 + Adapter & Full Data & 21.63 & 13.04 & 9.56 \\ \hline \end{tabular}
\end{table}
Table 2: ILSUM Experiments on Validation Data. *Finetuned on the combination of Hindi and Gujarati Data
## 5 Experiments and Results
We have performed experiments under two different settings: the first is with the entire released dataset (full data), and the other is where we split the dataset into 10 folds and utilize 90% data (9 folds) for training and 10% data (1 fold) for validation. In both settings, the released data in the validation phase was used for testing purposes and we report these results in Table 2. Note that doing such k-fold cross validation experiments were also essential to evaluate our models' performance because validation summaries were not provided to us.
We use the standard ROUGE metric[26] to compute all the scores. We observed that PEGASUS yields the best results for English when finetuned on the full data version in the validation phase. We achieved the best results when we finetuned IndicBART and MBart using k-fold and full data during the validation phase. Finetuning a model on k-fold data might sometimes lead to better results than finetuning it on the entire dataset, which indicates that the dataset needs to be studied more and appropriate filters need to be applied, to see which examples in the dataset contribute to the model learning something useful. We discuss this in the next section. Based on the results of the validation phase, we submit results from the best models in the test phase. While PEGASUS and MBart still give us the best results for English and Gujarati respectively, MT5 performs better than IndicBART for Hindi when finetuned on k-fold data. Hyper-parameter settings are listed in Table 4.
The multilingual models have been pretrained on large amounts of data, and they are sufficiently capable of handling the presence of code-mixing in the dataset, which we observe in the outputs as well. The models generate good summaries and can add relevant English text in Hindi and Gujarati examples where appropriate. For instance, the average number of English words in Hindi and Gujarati training summaries is 0.25 and 1.91 respectively. For the test set released for Hindi and Gujarati, the summaries generated by our models have an average of 0.23 and 1.44 English words per summary. Note that the average number of English words in Hindi summaries is less because a large number of training samples are purely in Hindi and do not contain any English words or characters.
## 6 Data Quality Assessment
To verify the quality of the data, we have applied some of the filters mentioned in TeSum[3]. Filters were applied include checking whether there are:
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline & & & \multicolumn{3}{c}{**Test Scores**} \\ \hline
**Lang** & **Model** & **Full Data / k-fold** & **R-1** & **R-2** & **R-4** \\ \hline \multirow{2}{*}{English} & PEGASUS & Full Data & **55.83** & **44.58** & **41.8** \\ & T5\({}_{large}\) & Full Data & 54.73 & 43.08 & 40.12 \\ \hline \multirow{2}{*}{Hindi} & MT5\({}_{base}\) & k-fold & **60.72** & **51.02** & **47.11** \\ & IndicBART & k-fold & 58.38 & 48.31 & 44.25 \\ \hline \multirow{2}{*}{Gujarati} & MBart\({}_{large}\)50 & Full Data & **26.11** & 16.51 & 12.41 \\ & MBart\({}_{large}\)50 & Full Data (dropout=0.2) & 26.07 & **16.60** & **12.58** \\ \hline \end{tabular}
\end{table}
Table 3: ILSUM scores on Test Data
1. Empty instances
2. Duplicate pairs and summaries within the dataset
3. Cases where the first few sentences of the article itself are taken as the summary
4. Check whether the summary is 'compressed enough', i.e., we should not have summaries comparable in size to the text that has to be summarized. Compression is a good measure of telling us if the summary provided is a shortened version of the input document/text or not.
Filters counts for all the languages can be found in Table 5. It is important to note that, based on our filters, only about 68% of the Hindi summaries are valid since many are simply the first few sentences of the article. It could also be one of the reasons for models giving better results on k-fold data. Some of the folds in the training data might contain a large percentage of high-quality, valid summaries while leaving out a significant number of summaries which we consider invalid. Note that for Gujarati and English, the number of final valid article-summary pairs is comparable to the original dataset size, which is why the top-performing models give better results when finetuned on the whole dataset as compared to k-fold subsets.
### Data Variation Experiments
The unavailability of large datasets is one of the main bottlenecks for neural models for text generation. The existing summarization datasets for Indian languages are quite small. To improve the model generation capabilities on limited dataset, we did k-fold cross-validation
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline
**Parameters** & **BART** & **T5** & **ProphetNet** & **PEGASUS** & **BRIO** & **MBart** & **MT5** & **IndicBART** \\ \hline Max source length & 512 & 512 & 512 & 512 & 512 & 512 & 512 \\ \hline Max target length & 75 & 75 & 75 & 75 & 75 & 75 & 100 & 75 \\ \hline Batch Size & 2 & 1 & 1 & 2 & 2 & 4 & 2 & 2 \\ \hline Epochs & 5 & 5 & 5 & 5 & 5 & 5 & 10 & 10 \\ \hline Vocab Size & 50265 & 32128 & 30522 & 96103 & 50264 & 250054 & 250112 & 64015 \\ \hline Beam Size & 4 & 4 & 5 & 4 & 4 & 4 & 4 \\ \hline Learning Rate & 5e-5 & 5e-5 & 5e-5 & 5e-4 & 5e-5 & 5e-5 & 5e-5 & 5e-5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental setup and parameters settings
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Filters** & **Hindi** & **Gujarati** & **English** \\ \hline
**Dataset Size** & 7957 & 8457 & 12565 \\ \hline
**Empty** & 0 & 0 & 1 \\ \hline
**Duplicate Pairs** & 23 & 0 & 0 \\ \hline
**Duplicate Summary** & 15 & 113 & 117 \\ \hline
**Prefixes** & 2518 & 135 & 486 \\ \hline
**Compression \(<\)50\%** & 11 & 37 & 182 \\ \hline
**Final Valid** & 5390 & 8172 & 11779 \\ \hline
**Valid \%** & **67.74\%** & **96.63\%** & **93.74\%** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Filtration counts of ILSUM data
on the best performing models (see Table 2). The mean ROUGE scores and standard deviation scores over 10 runs are reported in Table 6. We did 10-fold cross-validation using the released training dataset with the following combinations:
1. **Original data:** Fine-tuned for 5 epochs with released training dataset
2. **Original + Filtered data:** Finetuned for 3 epochs with original + 2 epochs with Filtered data
3. **Filtered data:** Fine-tuned for 5 epochs with only filtered dataset
4. **Filtered + Original data:** Finetuned for 3 epochs with filtered data + 2 epochs with original data
To perform all the experiments, we used the 'filtered data' obtained after applying filters mentioned in Table 5. To compare the models' performance on different variations of the training dataset, we have not made any changes in the validation data. As observed in Table 6,
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Lang** & **Model** & **Data composition** & **R-1** & **R-2** & **R-L** \\ \hline \hline \multirow{8}{*}{**English**} & \multirow{8}{*}{PEGASUS} & Original Data & 52.51 \(\pm\) 1.1 & 40.91 \(\pm\) 1.36 & 47.81 \(\pm\) 1.16 \\ & & Original + Filtered Data & 51.65 \(\pm\) 1.14 & 40.07 \(\pm\) 1.25 & 46 \(\pm\) 3.67 \\ & & Filtered Data & 51.88 \(\pm\) 1.25 & 40.37 \(\pm\) 1.39 & 47.32 \(\pm\) 1.31 \\ & & Filtered + Original Data & 53.28 \(\pm\) 1.18 & 41.82 \(\pm\) 1.3 & 48.67 \(\pm\) 1.2 \\ \cline{2-6} & \multirow{8}{*}{T5-large} & Original Data & **53.45 \(\pm\) 0.95** & **42.16 \(\pm\) 1.13** & **48.97 \(\pm\) 1.05** \\ & & Original + Filtered Data & 53.22 \(\pm\) 1.23 & 42.04 \(\pm\) 1.41 & 48.85 \(\pm\) 1.31 \\ & & Filtered Data & 51.9 \(\pm\) 1.37 & 40.49 \(\pm\) 1.53 & 47.38 \(\pm\) 1.46 \\ & & Filtered + Original Data & 53.33 \(\pm\) 0.83 & 42.1 \(\pm\) 0.96 & 48.92 \(\pm\) 0.86 \\ \cline{2-6} & \multirow{8}{*}{BART-large} & Original Data & 50.25 \(\pm\) 1.52 & 38.15 \(\pm\) 1.85 & 45.46 \(\pm\) 1.63 \\ & & Original + Filtered Data & 51.42 \(\pm\) 0.88 & 39.85 \(\pm\) 1.11 & 46.93 \(\pm\) 1 \\ & & Filtered Data & 51.21 \(\pm\) 1.3 & 39.83 \(\pm\) 1.57 & 46.79 \(\pm\) 1.38 \\ & & Filtered + Original Data & 52.45 \(\pm\) 1.05 & 40.98 \(\pm\) 1.29 & 48 \(\pm\) 1.17 \\ \hline \hline \multirow{8}{*}{**Hindi**} & \multirow{8}{*}{IndicBART} & Original Data & 26.36 \(\pm\) 1.02 & 12.66 \(\pm\) 0.73 & 26.28 \(\pm\) 0.98 \\ & & Original + Filtered Data & 21.58 \(\pm\) 0.66 & 9.84 \(\pm\) 0.76 & 21.45 \(\pm\) 0.6 \\ & & Filtered Data & 21.27 \(\pm\) 0.88 & 9.75 \(\pm\) 0.56 & 21.12 \(\pm\) 0.86 \\ & & Filtered + Original Data & 25.67 \(\pm\) 1.04 & 12.16 \(\pm\) 0.82 & 25.57 \(\pm\) 1 \\ \cline{2-6} & \multirow{8}{*}{MT5-base} & Original Data & **27.04 \(\pm\) 1.22** & **13.21 \(\pm\) 0.61** & **26.96 \(\pm\) 1.22** \\ & & Original + Filtered Data & 20.33 \(\pm\) 0.91 & 9.26 \(\pm\) 0.8 & 20.2 \(\pm\) 0.92 \\ & & Filtered Data & 20.61 \(\pm\) 1.55 & 9.47 \(\pm\) 0.67 & 20.51 \(\pm\) 1.53 \\ & & Filtered + Original Data & 26.73 \(\pm\) 1.11 & 12.83 \(\pm\) 0.61 & 26.64 \(\pm\) 1.1 \\ \hline \hline \multirow{8}{*}{**Gujarati**} & \multirow{8}{*}{MBart Large 50} & Original Data & 20.36 \(\pm\) 0.67 & 11.65 \(\pm\) 1.13 & 20.01 \(\pm\) 0.72 \\ & & Original + Filtered Data & 16.04 \(\pm\) 1.12 & 9.23 \(\pm\) 0.76 & 15.83 \(\pm\) 1.15 \\ \cline{1-1} & & Filtered Data & 12.82 \(\pm\) 2.28 & 6.6 \(\pm\) 1.54 & 12.38 \(\pm\) 2.36 \\ \cline{1-1} & & Filtered + Original Data & 19.55 \(\pm\) 0.74 & 11.42 \(\pm\) 0.43 & 19.2 \(\pm\) 0.72 \\ \cline{1-1} \cline{2-6} & \multirow{8}{*}{MT5-base} & Original Data & **21.55 \(\pm\) 0.77** & **11.81 \(\pm\) 0.78** & **21.19 \(\pm\) 0.83** \\ \cline{1-1} & & Original + Filtered Data & 18.63 \(\pm\) 0.93 & 9.23 \(\pm\) 0.5 & 18.19 \(\pm\) 0.92 \\ \cline{1-1} & & Filtered Data & 9.66 \(\pm\) 0.97 & 4.84 \(\pm\) 0.56 & 9.53 \(\pm\) 0.92 \\ \cline{1-1} & & Filtered + Original Data & 20.29 \(\pm\) 0.62 & 10.7 \(\pm\) 0.52 & 19.84 \(\pm\) 0.56 \\ \hline \end{tabular}
\end{table}
Table 6: Validation set ROUGE scores on ILSUM corpus. This table reports the mean ROUGE scores and its standard deviation over 10 runs
the experiments performed with 'original' data produce better scores than the 'filtered' data. Also, the models finetuned on the combination of the 'filtered + original' dataset performed better compared to the 'original+filtered' combination.
## 7 Discussion and Conclusions
While having better models finetuned exclusively on Indian languages might benefit research in the area of Indian Language Summarization, creating larger, high-quality datasets for such languages will surely lead to progress in this field. It might be interesting to look at sources other than news websites as well, and to keep in mind the filters discussed earlier while creating the dataset.
For the ILSUM task, PEGASUS, MT5 and MBart give us the best results for English, Hindi and Gujarati respectively. We conclude that the transformer-based pretrained seq2seq models are capable of generating high-quality summaries for the ILSUM shared task.
## Acknowledgements
We thank the organizers of the ILSUM shared task for their help and support.
|
2310.14559 | Branch-and-Price for Prescriptive Contagion Analytics | Predictive contagion models are ubiquitous in epidemiology, social sciences,
engineering, and management. This paper formulates a prescriptive contagion
analytics model where a decision-maker allocates shared resources across
multiple segments of a population, each governed by continuous-time dynamics.
We define four real-world problems under this umbrella: vaccine distribution,
vaccination centers deployment, content promotion, and congestion mitigation.
These problems feature a large-scale mixed-integer non-convex optimization
structure with constraints governed by ordinary differential equations,
combining the challenges of discrete optimization, non-linear optimization, and
continuous-time system dynamics. This paper develops a branch-and-price
methodology for prescriptive contagion analytics based on: (i) a set
partitioning reformulation; (ii) a column generation decomposition; (iii) a
state-clustering algorithm for discrete-decision continuous-state dynamic
programming; and (iv) a tri-partite branching scheme to circumvent
non-linearities. Extensive experiments show that the algorithm scales to very
large and otherwise-intractable instances, outperforming state-of-the-art
benchmarks. Our methodology provides practical benefits in contagion systems;
in particular, it can increase the effectiveness of a vaccination campaign by
an estimated 12-70%, resulting in 7,000 to 12,000 extra saved lives over a
three-month horizon mirroring the COVID-19 pandemic. We provide an open-source
implementation of the methodology in an online repository to enable
replication. | Alexandre Jacquillat, Michael Lingzhi Li, Martin Ramé, Kai Wang | 2023-10-23T04:23:36Z | http://arxiv.org/abs/2310.14559v1 | # Branch-and-price for prescriptive contagion analytics
###### Abstract
Predictive contagion models are ubiquitous in epidemiology, social sciences, engineering, and management. This paper formulates a prescriptive contagion analytics model where a decision-maker allocates shared resources across multiple segments of a population, each governed by continuous-time dynamics. We define four real-world problems under this umbrella: vaccine distribution, vaccination centers deployment, content promotion, and congestion mitigation. These problems feature a large-scale mixed-integer non-convex optimization structure with constraints governed by ordinary differential equations, combining the challenges of discrete optimization, non-linear optimization, and continuous-time system dynamics. This paper develops a branch-and-price methodology for prescriptive contagion analytics based on: (i) a set partitioning reformulation; (ii) a column generation decomposition; (iii) a state-clustering algorithm for discrete-decision continuous-state dynamic programming; and (iv) a tri-partite branching scheme to circumvent non-linearities. Extensive experiments show that the algorithm scales to very large and otherwise-intractable instances, outperforming state-of-the-art benchmarks. Our methodology provides practical benefits in contagion systems; in particular, it can increase the effectiveness of a vaccination campaign by an estimated 12-70%, resulting in 7,000 to 12,000 extra saved lives over a three-month horizon mirroring the COVID-19 pandemic. We provide an open-source implementation of the methodology in an online repository to enable replication.
contagion analytics, column generation, branch and price, dynamic programming, COVID-19
## 1 Introduction
Epidemiological models have played a central role throughout the COVID-19 pandemic. In the United States for instance, the Center for Disease Control maintained an ensemble forecast based mainly on compartmental contagion models (Ray et al., 2020). At their core, these models rely on susceptible-infected (SI) dynamics, which express the number of new infections proportionally to infected and susceptible individuals. These dynamics have been embedded into more complex models to capture immunization upon recovery (SIR models), time lags from exposure to infection (SEIR models), immunization from vaccinations, asymptomatic cases, quarantine, hospitalization, mortality, etc. Collectively, these models have been instrumental to guide governmental and societal response to the COVID-19 pandemic (see Adam, 2020; Hsiang et al., 2020; Dehning et al., 2020; Walker et al., 2020; Li et al., 2022; Bennouna et al., 2022, among many others).
Contagion models have a long history that predates the COVID-19 pandemic. Starting from Kermack and McKendrick (1927), SI models have been used to model infectious epidemics, such as
influenza (Casagrandi et al. 2006) and Ebola (Berge et al. 2017). In marketing, the seminal model of product adoption from Bass (1969) relies on similar dynamics to capture network externalities between existing "infected" users and potential "susceptible" adopters. Recent applications of this model span multi-generational products (Li et al. 2013), box office revenues (Chance et al. 2008), online content diffusion (Susarla et al. 2012), etc. Altogether, SI models--and broader dynamical systems--are ubiquitous in natural sciences, social sciences, engineering and management, with applications to drug addiction (Behrens et al. 2000), urban congestion (Saberi et al. 2020), online rumors (Shah and Zaman 2016), climate change mitigation (Sterman et al. 2012), firm performance (Rahmandad et al. 2018), employee compensation (Rahmandad and Ton 2020), etc.
Motivated by the success of _predictive_ contagion models, this paper tackles a _prescriptive_ contagion analytics problem to optimize spatial-temporal resource allocation over dynamical systems. Specifically, a centralized decision-maker allocates shared resources across multiple segments of a population, each governed by continuous-time dynamics. This paper focuses on a deterministic setting; we show the robustness of the solution under model misspecification but leave prescriptive contagion analytics problems under uncertainty beyond the scope of this work. To demonstrate the broad applicability of the model, we define four problems under its umbrella. The first one involves distributing a vaccine stockpile to combat an epidemic, inspired by the COVID-19 pandemic. The second one adds a discrete facility location structure to deploy mass vaccination centers (Bertsimas et al. 2022). The third one optimizes online content promotion to maximize product adoption (Lin et al. 2021). The last one deploys emergency vehicles to mitigate urban congestion, based on a new contagion model of traffic congestion developed in this paper using real-world data from Singapore.
Prescriptive dynamical systems--in particular, prescriptive contagion models--involve large-scale mixed-integer non-convex optimization with constraints governed by ordinary differential equations (ODE). As such, they are challenging to even formulate, let alone to solve to optimality. The spatial-temporal resource allocation component, by itself, involves a mixed-integer optimization structure, but the problem is further complicated by complexities of dynamical systems:
1. Continuous time dynamics: dynamical systems are governed by ODEs of the form \(\frac{dM_{i}(t)}{dt}=f_{i}(\mathbf{M}_{i}(t),\mathbf{x}_{i}(t))\), where \(\mathbf{M}_{i}(t)\) denotes a multi-dimensional state variable in segment \(i=1,\cdots,n\); \(\mathbf{x}_{i}(t)\) denotes a multi-dimensional control variable; and \(f_{i}(\cdot,\cdot)\) refers to a continuous-time transition function. An easy workaround involves time discretization to approximate the dynamics by \(\mathbf{M}_{i}(t+\Delta t)-\mathbf{M}_{i}(t)=f_{i}(\mathbf{M}_{i}(t),\mathbf{x}_{i}(t))\cdot \Delta t\). This approach, however, can lead to extensive computational requirements if the time increment \(\Delta t\) is too small, or to large approximation errors if it is too large--or both. This challenge is particularly salient in contagion systems, which are highly non-linear and therefore sensitive to even small perturbations in initial conditions. A well-known incarnation of these non-linearities lies in a disease spreading in a population if the basic reproduction number \(R_{0}\) satisfies \(R_{0}>1\) but not if \(R_{0}<1\).
2. Non-linear interactions: contagion systems are driven by non-convex bilinear interactions of the form \(\frac{dS_{i}(t)}{dt}=-\alpha S_{i}(t)I_{i}(t)\), where \(S_{i}(t)\) and \(I_{i}(t)\) refer to the susceptible and infected populations. More broadly, dynamical systems may involve a non-convex transition function \(f_{i}(\cdot,\mathbf{x}_{i}(t))\) for a given control variable \(\mathbf{x}_{i}(t)\). So, even with time discretization, the optimization problem would exhibit a mixed-integer non-convex structure. Accordingly, prescriptive contagion models have typically been solved via approximations and heuristics (see Section 2), but no exact method has been devised for resource allocation in non-linear dynamical systems.
3. Other non-linearities: system dynamics are endogenous to resource allocation decisions. This dependency can be linear: in epidemiology, for example, immunizations can be safely assumed to be proportional to vaccination rates. But interventions can also induce a non-linear transition function \(f_{i}(\mathbf{M}_{i}(t),\cdot)\) for a state variable \(\mathbf{M}_{i}(t)\)(see, e.g., Behrens et al. 2000, in drug addiction). Similarly, dynamical systems may involve non-convex cost functions. Again, these non-linearities create significant complexities in the resulting optimization problem.
#### 2.0.1 Contributions and outline
This paper develops a branch-and-price methodology for prescriptive contagion analytics. This approach separates the coupled discrete resource allocation decisions in a master problem formulated via mixed-integer linear optimization, and the segment-specific system dynamics in a pricing problem formulated via continuous-state dynamic programming. To our knowledge, it provides the first exact methodology to solve spatial-temporal resource allocation problems in contagion systems--and, more generally, in non-linear dynamical systems.
Specifically, we define a spatial-temporal resource allocation model with endogenous continuous-time non-linear dynamics (Section 3). We consider a finite-horizon setting with multiple segments of a population, each constituting a dynamical system governed by non-linear ODEs. The decision-maker optimizes the discrete allocation of a shared resource in each period, which impacts the system dynamics in each segment. This model allows a variety of objective functions and constraints in each segment--even non-linear ones--along with coupling polyhedral constraints across segments (e.g., a shared budget). We apply this formulation to define our four problems: vaccine distribution, deployment of vaccination centers, content promotion, and congestion mitigation. In particular, a byproduct of this research is a new data-driven contagion model of traffic congestion that yields significant improvements in predictive performance against state-of-the-art benchmarks.
Next, this paper develops a branch-and-price algorithm to solve the mixed-integer non-linear optimization problems with ODE constraints (Section 4). The algorithm relies on four components:
1. _A set partitioning reformulation_. This formulation uses composite variables to select a resource allocation plan in each segment for the full planning horizon, as opposed to optimizing natural resource allocation decisions for each period. By pre-processing the system dynamics into
plan-based variables, this formulation eliminates the three complexities of prescriptive contagion analytics: continuous time dynamics, non-linear interactions, and other non-linearities. However, it comes at the cost of an exponential number of plan-based variables.
2. _A scalable column generation scheme to solve its linear relaxation._ A master problem solves a coupled resource allocation problem based on a subset of plan-based variables, via mixed-integer linear optimization. A pricing problem adds new plans of negative reduced cost or proves that none exists. Thanks to the problem's structure, the pricing problem can be decomposed into independent segment-specific dynamic programming models. Yet, the system dynamics lead to a continuous state space--a notorious challenge in dynamic programming.
3. _A state-clustering algorithm for discrete-decision continuous-state dynamic programming._ Forward-enumeration backward-induction algorithms can solve the pricing problem but remain intractable in even small instances. Thus, we propose a linear-time clustering algorithm that exploits the concentration of states in dynamical systems without using the value function or the policy function (which varies from iteration to iteration in column generation). The algorithm provides guarantees on the \(\ell_{\infty}\)-diameter of each cluster, which, we prove, controls the global approximation error. The reduction in the size of the state space considerably enhances the scalability of the pricing problem at limited costs in terms of approximation errors.
4. _A novel tri-partite branching scheme to circumvent the non-linearities of the system._ We embed the column generation procedure into a branch-and-price structure to restore the integrality of resource allocation decisions. We adopt the typical approach of branching on natural resource allocation variables, as opposed to composite plan-based variables. Because of the non-linear dynamics, however, integral resource allocation decisions may not map into equivalent integral plan-based variables. We therefore develop a tri-partite branching scheme that retains a natural branching structure, while guaranteeing finite convergence and optimality.
Finally, this paper demonstrates the scalability of our methodology to otherwise-intractable prescriptive contagion analytics problems (Section 5). For resource allocation problems (vaccine allocation, content promotion, and congestion mitigation), the algorithm returns provably optimal or near-optimal solutions in manageable computational times, significantly outperforming state-of-the-art benchmarks. For instance, our algorithm solves vaccine allocation instances involving 21 decisions in each of 51 regions and 12 weeks, with \(\mathcal{O}(21^{612})\) possible decisions. The vaccinations centers problem features an even more challenging facility location structure with linking constraints; still, the branch-and-price methodology generates high-quality solutions and strong bounds in manageable computational times. From a technical standpoint, the methodology significantly outperforms state-of-the-art optimization benchmarks,. Specifically, off-the-shelf implementation based on mixed-integer quadratic optimization or discretization-based linearization do not scale to even
small instances; moreover, our methodology provides significant benefits as compared to a tailored coordinate descent heuristic used to circumvent the bilinearities of the problem. From a practical standpoint, the methodology can have a significant impact on the management of contagion-based systems. For instance, the optimized solution can increase the effectiveness of a vaccination campaign by 12-70% compared to epidemiological benchmarks, resulting in 7,000 to 12,000 extra saved lives over a three-month horizon mirroring the COVID-19 pandemic. These benefits are found to be consistent across contagion analytics problems and robust to model misspecification. Ultimately, our prescriptive contagion analytics approach can deliver significant practical benefits in a variety of domains, by fine-tuning resource allocations based on spatial-temporal system dynamics.
## 2 Literature review
**Prescriptive contagion analytics.** The prevalence of contagion models has motivated prescriptive methods to optimize interventions in contagion-based systems. In epidemiology for instance, Goldman and Lightwood (2002) showed the benefits of social planning interventions against user equilibrium behaviors. Rowthorn et al. (2009) found, using a two-region SIS model, that medical interventions should target regions with fewer infections, whereas Ndeffo Mbah and Gilligan (2011) prescribed, using a SIRS model, to prioritize regions with more infections. Yamin and Gavious (2013) designed incentivization schemes for influenza vaccinations by embedding SI-based dynamics into an equilibrium model of vaccination behaviors. In the context of COVID-19, several models traded off health impacts and economic costs to design differentiated lockdowns across age tranches (Acemoglu et al., 2021), and to optimize the timing, duration and intensity of lockdowns (Caulkins et al., 2020, Alvarez et al., 2021, Balderrama et al., 2022). Similarly, the Bass model has been used to guide operational and marketing decisions surrounding product innovation, such as sales planning and time-to-market (Ho et al., 2002); pricing, production and inventory (Shen et al., 2014); multi-product pricing (Li, 2020); free access and premium subscriptions (Mai and Hu, 2022); dynamic pricing (Cosguner and Seetharaman, 2022, Zhang et al., 2022, Agrawal et al., 2021); etc. The Bass model has also been used to design drug prevention and treatment policies (Behrens et al., 2000), advertising campaigns (Krishnan and Jain, 2006), crowdfunding campaigns (Zhang et al., 2022), etc. Methodologically, this literature relies on dynamic programming and control to characterize optimal interventions in a single dynamical system (or two systems).
In contrast, our paper considers spatial-temporal resource allocation decisions with coupling constraints across multiple contagion systems. A seemingly related problem is the ventilator sharing problem from Mehrotra et al. (2020) and Bertsimas et al. (2021). However, ventilator availability does not impact the dynamics of the pandemic, so this problem could be formulated via mixed-integer linear optimization by decoupling upstream epidemiological predictions from downstream
- ventilator allocation decisions. In sharp contrast, vaccinations impact contagion dynamics, which require to integrate epidemiological dynamics into prescriptive resource allocation models.
Thus, our problem involves discrete optimization with endogenous contagion dynamics, combining the mixed-integer optimization difficulties of resource allocation and the non-convex dynamics of contagion models. Existing methods for this class of problems rely on heuristics and approximations. In product adoption, Alban et al. (2022) used a heuristic to optimize the deployment of mobile healthcare units; and Lin et al. (2021) designed an approximation algorithm with a \(1-1/e\) guarantee. In epidemiology, Long et al. (2018) used myopic linear optimization and approximate dynamic programming to allocate Ebola treatment units. Bertsimas et al. (2022) optimized where to open COVID-19 mass vaccination facilities and how to allocate vaccines in the United States, using a coordinate descent heuristic. Fu et al. (2021) solved a robust vaccine allocation problem with uncertainty in epidemiological forecasts, using linear approximations based on discretized numbers of infections or McCormick reformulations (Fu et al. 2021). Our paper tackles a deterministic prescriptive contagion analytics problem, and contributes an exact optimization approach that does not rely on finite difference approximations of continuous-time ODE dynamics and that does not involve approximations and heuristics to handle the non-convexities of the problem.
**Mixed-integer non-linear optimization.** Our problem exhibits a mixed-integer non-linear optimization (MINLO) structure with ODE constraints,. It therefore remains highly challenging even with finite difference approximations of ODEs (see Burer and Letchford 2012, for a detailed review of MINLO). Bilinear SI-based models fall into mixed-integer quadratic optimization. Methods include the reformulation-linearization technique (Sherali and Adams 2013), semi-definite relaxations (Fujie and Kojima 1997, Anstreicher 2009), disjunctive programming (Saxena et al. 2010, 2011), perspective reformulations (Gunluk and Linderoth 2010, Anstreicher and Burer 2021), augmented Lagrangian duality (Gu et al. 2020), etc. General-purpose MINLO methods include spatial branch-and-bound (Lee and Grossmann 2001), branch-and-reduce (Ryoo and Sahinidis 1996, Tawarmalani and Sahinidis 2004), and \(\alpha\)-branch-and-bound (Androulakis et al. 1995). Recent versions of Gurobi Optimization (2020) tackle mixed-integer quadratic optimization problems by combining RLT and spatial branch-and-bound, which we use as a benchmark in Section 5.
The ODE constraints also link to the mixed-integer optimal control literature. When the feasible region can be enumerated, partial outer approximation and sum-up rounding can converge to the optimal solution as time discretization becomes infinitely granular (Sager et al. 2012, Hante and Sager 2013, Manns and Kirches 2020). In practice, however, very granular time discretization may be required, and optimality is not guaranteed in the presence of control costs or combinatorial constraints. Jung et al. (2015) proposed a continuous optimal control problem and a combinatorial
integral approximation problem to restore integer feasibility; Gottlich et al. (2021) separated a mixed-integer optimal control problem from time-coupling combinatorial constraints; and Bestehorn et al. (2021) restored integral feasibility via a shortest path formulation. Our paper departs from this literature by considering large-scale problems with spatial coupling across multiple dynamical systems, and by proposing a new branch-and-price methodology.
Since its introduction by Barnhart et al. (1998), branch-and-price has been widely applied to mixed-integer linear optimization. In non-linear optimization, Andreas et al. (2008) solved a reliable \(h\)-paths problem with non-linearities stemming from failure probabilities. Nowak et al. (2018) proposed an inner-and-outer-approximation of MINLO problems by combining column generation with non-linear optimization. Allman and Zhang (2021) synthesized a generic branch-and-price algorithm for non-convex mixed-integer optimization. Our paper proposes a tailored branch-and-price decomposition for spatial-temporal resource allocation problems over continuous-time dynamical systems--including, notably, a novel tri-partite branching disjunction to handle non-linearities.
Finally, one of the main bottlenecks of our methodology lies in the discrete-decision continuous-state dynamic programming structure of the pricing problem (see, e.g., Bertsekas 2015, Powell 2022, for reviews of approximate dynamic programming and reinforcement learning). Continuous-state problems are typically handled via state discretization. Bertsekas (1975) showed that static state discretization converges to the optimum as the grid becomes increasingly granular. To manage the growth in the number of states, adaptive discretization methods build a finer grid where the cost function changes rapidly (Grune and Semmler 2004, Borraz-Sanchez and Haugland 2011). Several studies combined adaptive discretization with reinforcement learning to aggregate states with similar cost-to-go functions (see, e.g. Bertsekas et al. 1988, Pyeatt et al. 2001, Lolos et al. 2017, Sinclair et al. 2022). Bennouna et al. (2021) used transition data to learn a discrete partition of a continuous state space. In our problem, however, these methods are not readily applicable because the cost function changes at each column generation iteration. Instead, our state-clustering approach exploits stability in the state space--as opposed to stability in the cost-to-go function.
## 3 Prescriptive contagion analytics: definition and formulation
### Model formulation
We consider a general problem of spatial-temporal resource allocation in dynamical systems, depicted in Figure 1. The "spatial" component refers to resource allocation across \(n\) segments of a population (e.g., across regions in vaccine allocation, across products in content promotion). The temporal component refers to \(S\) decision epochs throughout a planning horizon of length \(T\). We denote by \(\tau_{s}\) the time stamp of epoch \(s=1,\cdots,S\), with \(\tau_{1}=0\) and \(\tau_{S+1}=T\).
We characterize centralized resource allocation decisions via variables \(\boldsymbol{x}_{is}\in\mathcal{F}_{is}\subseteq\mathbb{R}^{d_{is}}\) for each segment \(i=1,\cdots,n\) and each epoch \(s=1,\cdots,S\). These variables are defined as \(d_{is}\)-dimensional
vectors to allow for the allocation of multiple resources (e.g., treatment vs. prevention vehicles in congestion mitigation). We also introduce a variable \(\mathbf{y}\in\mathbb{Z}^{q}\times\mathbb{R}^{r}\) to capture other decisions (e.g., facility location in our vaccination centers problem). Let \(\Delta_{s}=\sum_{i=1}^{n}d_{is}\); let \(\mathbf{X}_{s}\in\mathbb{R}^{\Delta_{s}}\) denote the concatenated resource allocation variable at epoch \(s=1,\cdots,S\). Resource allocation in segment \(i=1,\cdots,n\) at epoch \(s=1,\cdots,S\) comes at a cost \(\Gamma_{is}(\mathbf{x}_{is})\), and other decisions come at a linear cost \(\mathbf{d}^{\top}\mathbf{y}\). We make no restriction on the feasible regions \(\mathcal{F}_{is}\) and the cost functions \(\Gamma_{is}(\cdot)\) governing segment-specific resource allocations, thus allowing non-linear decision problems. Importantly, we focus on discrete resource allocation decisions, where each region \(\mathcal{F}_{is}\) has finite cardinality \(D_{is}=|\mathcal{F}_{is}|<\infty\). In many cases, resources are naturally restricted to discrete quantities (e.g., emergency vehicles, in our congestion mitigation problem). Even when the decision space is infinite, operational constraints often lead to a discretized set of possible decisions. For example, Moderna (2020) shipped COVID-19 vaccines in pallets of around 20,000 vaccines. Still, the resource allocation problem remains high-dimensional due to coupled spatial-temporal resource allocations--if \(D_{is}=\mathcal{O}(D)\) for all \(i\), \(s\), the decision space grows in \(\mathcal{O}(D^{n\times S})\), which quickly becomes very large.
Besides segment-specific constraints captured in \(\mathcal{F}_{is}\), global constraints are characterized by a polyhedral set \(\{(\mathbf{X},\mathbf{y})\in\mathbb{R}^{\Delta_{s}+q+r}:\mathbf{U}_{s}\mathbf{X}+\mathbf{V}_{s}\bm {y}\geq\mathbf{w}_{s}\}\), where \(\mathbf{U}_{s}\in\mathbb{R}^{m_{s}\times\Delta_{s}}\), \(\mathbf{V}_{s}\in\mathbb{R}^{m_{s}\times(q+r)}\) and \(\mathbf{w}_{s}\in\mathbb{R}^{m_{s}}\). In other words, resource allocation decisions are subject to the following linear constraints, where \(\mathbf{u}_{sji}\in\mathbb{R}^{d_{is}}\) denotes the vector corresponding to the \(j^{\text{th}}\) constraint and the \(i^{\text{th}}\) segment in \(\mathbf{U}_{s}\), and \(\mathbf{v}_{sj}\in\mathbb{R}^{q+r}\) denotes the vector corresponding to the \(j^{\text{th}}\) constraint in \(\mathbf{V}_{s}\):
\[\left(\sum_{i=1}^{n}\mathbf{u}_{sji}^{\top}\mathbf{x}_{is}\right)+\mathbf{v}_{sj}^{\top}\bm {y}\geq w_{sj},\ \forall s=1,\cdots,S,\ \forall j=1,\cdots,m_{s}\]
Next, each population segment \(i=1,\cdots,n\) constitutes a dynamical system, governed by a continuous-time state variable \(\mathbf{M}_{i}(t)\in\mathbb{R}^{r_{i}}\). We assume that the dynamics are independent across the \(n\) segments. In the epidemiological context, for instance, this captures intra-region interactions (e.g., within each state or each country) but not inter-region interactions (see, e.g., Hsiang et al.
Figure 1: Schematic representation of the spatial-temporal resource allocation problem in dynamical systems.
2020, Walker et al. 2020, Li et al. 2022, Bennouna et al. 2022, for similar assumptions). Specifically, in each segment \(i=1,\cdots,n\), the state variable \(\mathbf{M}_{i}(t)\) is determined by an initial condition \(\mathbf{M}_{i}^{0}\), and varies according to the following system of ODEs, where \(f_{i}(\cdot,\cdot)\) denotes the transition function:
\[\frac{d\mathbf{M}_{i}(t)}{dt}=f_{i}(\mathbf{M}_{i}(t),\mathbf{x}_{is}),\ \forall s=1,\cdots,S,\ \forall t \in\left[\tau_{s},\tau_{s+1}\right].\]
The system is associated with a continuous-time cost function \(g_{it}(\mathbf{M}_{i}(t))\) and a terminal cost function \(h_{i}(\mathbf{M}_{i}(T))\). We impose no restriction on the transition function \(f_{i}(\cdot,\cdot)\) and the cost functions \(g_{it}(\cdot)\) and \(h_{i}(\cdot)\) so, by design, our model encompasses non-convex dynamical systems.
The spatial-temporal resource allocation problem, referred to as Problem (\(\mathcal{P}\)), minimizes total costs subject to resource allocation constraints and the system's dynamics. It is written as follows:
\[(\mathcal{P}) \min \sum_{i=1}^{n}\left(\int_{0}^{T}g_{it}(\mathbf{M}_{i}(t))dt+h_{i}( \mathbf{M}_{i}(T))\right)+\sum_{i=1}^{n}\sum_{s=1}^{S}\Gamma_{is}(\mathbf{x}_{is})+\bm {d}^{\top}\mathbf{y}\] (1) s.t. \[\left(\sum_{i=1}^{n}\mathbf{u}_{sji}^{\top}\mathbf{x}_{is}\right)+\mathbf{v}_ {sj}^{\top}\mathbf{y}\geq w_{sj},\ \forall s=1,\cdots,S,\ \forall j=1,\cdots,m_{s} \tag{2}\] \[\frac{d\mathbf{M}_{i}(t)}{dt}=f_{i}(\mathbf{M}_{i}(t),\mathbf{x}_{is}),\ \forall i=1,\cdots,n,\ \forall s=1,\cdots,S,\ \forall t\in\left[\tau_{s},\tau_{s+1}\right]\] (3) \[\mathbf{M}_{i}(\tau_{1})=\mathbf{M}_{i}^{0},\ \forall i=1,\cdots,n\] (4) \[\mathbf{x}_{is}\in\mathcal{F}_{is},\ \forall i=1,\cdots,n,\ \forall s=1, \cdots,S\] (5) \[\mathbf{y}\in\mathbb{Z}^{q}\times\mathbb{R}^{r} \tag{6}\]
This formulation, however, is intractable due to continuous time dynamics (Equation (3)), non-convex system dynamics (functions \(f_{i}(\cdot,\cdot)\), \(g_{it}(\cdot)\) and \(h_{i}(\cdot)\)), and non-convex segment-wise decisions (functions \(\Gamma_{is}(\cdot)\), regions \(\mathcal{F}_{is}\)). One workaround would be to model (\(\mathcal{P}\)) via dynamic programming, by defining a state variable that encompasses _all_ variables \(\mathbf{M}_{1}(\tau_{s}),\cdots,\mathbf{M}_{n}(\tau_{s})\) as well as all required information to enforce resource allocation constraints (Equation (2)). This approach, however, would scale in \(\mathcal{O}(D^{n\times S})\) if \(D_{is}=\mathcal{O}(D)\) for all \(i\), \(s\), quickly growing into the curse of dimensionality. Instead, we propose in Section 4 a branch-and-price methodology that separates coupled resource allocation decisions across segments and continuous-time dynamics in each segment. We still use dynamic programming in the pricing problem, but we leverage segment-specific state variables \(\mathbf{M}_{i}(t)\in\mathbb{R}^{r_{i}}\) rather than a higher-dimensional state variable of the form \((\mathbf{M}_{1}(\tau_{s}),\cdots,\mathbf{M}_{n}(\tau_{s}))\in\mathbb{R}^{r_{1}}\times \cdots\times\mathbb{R}^{r_{n}}\). Thus, the pricing problem scales in \(\mathcal{O}(D^{S})\), instead of \(\mathcal{O}(D^{n\times S})\).
Finally, let us underscore that, following Powell (2022) we define the state variable \(\mathbf{M}_{i}(t)\) as a necessary and sufficient function of history to compute the cost function, the constraints, and the transition function in segment \(i=1,\cdots,n\). For example, an inter-temporal budget constraint of the form \(\sum_{s=1}^{S}\mathbf{x}_{is}\leq\mathbf{\gamma}_{i}\) would require state augmentation to capture the budget used up to
epoch \(s\). In contrast, our model can accommodate budget constraints across segments of the form \(\sum_{i=1}^{n}\mathbf{x}_{is}\leq\mathbf{\beta}_{s}\). Although the problem is not directly modeled as a dynamic program, this structure enables segment-wise dynamic programming decomposition in our algorithm.
### Prescriptive contagion analytics model, and applications
The special case of contagion systems is derived from the general formulation by defining, in each segment \(i=1,\cdots,n\), a susceptible state \(S_{i}(t)\in\mathbb{R}\), an infected state \(I_{i}(t)\in\mathbb{R}\), and other states represented via a vector \(\mathbf{R}_{i}(t)\in\mathbb{R}^{V}\). We capture bilinear interactions between susceptible and infected populations as follows, with infection rate \(\alpha\) and transitions \(f_{i}^{S}(\cdot)\), \(f_{i}^{I}(\cdot)\), \(f_{i}^{R}(\cdot)\):
\[\frac{dS_{i}(t)}{dt}=-\alpha S_{i}(t)I_{i}(t)+f_{i}^{S}(S_{i}(t), I_{i}(t),\mathbf{R}_{i}(t)) \tag{7}\] \[\frac{dI_{i}(t)}{dt}=+\alpha S_{i}(t)I_{i}(t)+f_{i}^{I}(S_{i}(t), I_{i}(t),\mathbf{R}_{i}(t))\] (8) \[\frac{dR_{iv}(t)}{dt}=f_{i}^{R}(S_{i}(t),I_{i}(t),\mathbf{R}_{i}(t)), \quad\forall v=1,\cdots,V \tag{9}\]
Similarly, we define functions \(g_{it}^{SIR}(\cdot)\) and \(h_{i}^{SIR}(\cdot)\) to characterize the costs of the system dynamics in each segment. We then formulate the prescriptive contagion model by minimizing the following objective function, subject to the resource allocation constraints (Equation (2) and Equations (5)-(6)) and the contagion dynamics (initial conditions, and Equations (7)-(9)):
\[\sum_{i=1}^{n}\left(\int_{0}^{T}g_{it}^{SIR}(S_{i}(t),I_{i}(t), \mathbf{R}_{i}(t))dt+h_{i}^{SIR}(S_{i}(T),I_{i}(T),\mathbf{R}_{i}(T))\right)+\sum_{i=1} ^{n}\sum_{s=1}^{S}\Gamma_{is}(\mathbf{x}_{is})+\mathbf{d}^{\top}\mathbf{y} \tag{10}\]
We define our four problems from this generic formulation, all inspired from real-world problems and data-driven contagion models. We defer details and their full formulation to Appendix A.1.
_Vaccine allocation:_ The problem optimizes the number of vaccines (\(x\)) received in 51 regions (50 US states plus Washington, DC) to maximize the number of saved lives. We use the DELPHI-V model of COVID-19 from Li et al. (2022) and Bertsimas et al. (2022), with 7 non-vaccinated states (susceptible (S), exposed (E), infected (I), undetected (U), hospitalizations (H), quarantine (Q), death (D)) and 4 vaccinated states (susceptible (S'), exposed (E'), infected (I'), immunized (M)). Our experiments consider allocating a weekly stockpile of 2.5 to 7 million vaccines across 51 states and 12 weeks. We discretize decisions into up to 21 decisions per epoch, with \(\mathcal{O}\left(21^{612}\right)\) possible decisions. This setup mirrors the situation from February to April 2021 in the United States.
_Vaccination centers._ The problem extends the vaccine allocation problem by jointly optimizing the location of mass vaccination centers and the subsequent distribution of vaccines within a geographical radius (Bertsimas et al., 2022). Due to the complexity of the problem, we break down the United States into 10 groups defined by the Centers for Disease Control, with 4-7 states and 20 candidate facilities in each (Appendix C.1). Each group receives a budget of vaccines proportionally
to its share of the US population, discretized into 25,000-vaccine pallets, plus some flexibility buffer. We set a budget of 20 and 30 facilities across the country, again mirroring the situation in 2021. With 20 (resp. 30) facilities, each group needs to select 2 (resp. 3) facilities, and residents can travel up to 150 miles (resp. 100 miles), possibly across states, to access a vaccination center.
_Content promotion._ This problem optimizes product promotion to maximize adoption, subject to a sparsity constraint to promote up to \(K\) products per period, a cover constraint, a linking constraint, and contagion dynamics (Lin et al., 2021). The decision variables includes which products should be promoted (\(x^{1}\) variables) and the share of the population to which each product shall be promoted (\(x^{2}\) variables). Each segment corresponds to a product, and adoption follows a Bass model with susceptible users (A) and adopters (B). In our experiments, we consider up to 20 products, 10 decision epochs, 21 decisions per epoch, and 2-6 products promoted per epoch.
_Congestion mitigation._ This problem deploys \(B^{1}\) prevention vehicles and \(B^{2}\) treatment vehicles per hour to mitigate congestion costs. Prevention resources (\(x^{1}\)) respond to minor accidents and prevent congestion, whereas treatment resources (\(x^{2}\)) respond to major accidents and ease recovery. Contagion models reflect the propagation of congestion from "infected" roads to free-flow "susceptible" roads (Saberi et al., 2020). We leverage new data sources from Singapore on traffic speeds,
Figure 2: Contagion models for the four problems. Thick red lines indicate transitions that are endogenous to resource allocation. Dotted lines indicate bilinear susceptible-infected interactions.
road work, and traffic accidents to propose a richer six-state model, (Figure 2c): susceptible roads (S), road work (W), accidents (A), road work and accidents (\(A^{\prime}\)), congested (I), and recovered (R). We calibrate this model using a three-peak structure for morning, afternoon, and evening traffic. A byproduct of this paper is a new contagion model of urban congestion, which improves predictive performance against state-of-the-art benchmarks (Appendix A.2). In our experiments, we consider a setting with 5 neighborhoods, up to 8 hours and up to 12 vehicles to allocate, developed in collaboration with city officials to capture real-world situations in Singapore.
_Discussion._ The vaccine allocation problem has the largest state space. The vaccination centers problem adds a discrete structure with a switching variable \(\mathbf{y}\) (facility location) and linking constraints governing \(\mathbf{x}_{is}\) (vaccine allocation). The other two problems include multi-dimensional variables \(\mathbf{x}_{is}\), and the congestion mitigation problem involves a novel contagion model. Together, they cover a range of applications and optimization structures in prescriptive contagion analytics.
## 4 The method: branch-and-price for prescriptive contagion analytics
We propose a branch-and-price algorithm to solve Problem (\(\mathcal{P}\)), using a set partitioning reformulation (Section 4.1), a column generation procedure (Section 4.2), a state-clustering algorithm for the continuous-state pricing problem (Section 4.3), and a tri-partite branching scheme to handle non-convexities (Section 4.4). Section 4.5 summarizes the algorithm and establishes its exactness.
### Set partitioning reformulation
The reformulation optimizes over composite plan-based variables in each segment (which characterize resource allocations over the full planning horizon), as opposed to natural variables (which characterize resource allocation in each period). Specifically, we define the set of feasible plans in each segment \(i=1,\cdots,n\) as the combination of feasible resource allocation decisions, as follows:
\[\mathcal{P}_{i}=\mathcal{F}_{i1}\times\cdots\times\mathcal{F}_{iS},\ \forall i=1,\cdots,n\]
In each segment \(i=1,\cdots,n\), a plan \(p\in\mathcal{P}_{i}\) defines a sequence of resource allocations denoted by \((\mathbf{\alpha}_{i1}^{p},\cdots,\mathbf{\alpha}_{iS}^{p})\in\mathcal{F}_{i1}\times \cdots\times\mathcal{F}_{iS}\), as well as a cost parameter \(C_{i}^{p}\) governed by the system's dynamics:
\[C_{i}^{p}= \int_{0}^{T}g_{it}(\mathbf{M}_{i}(t))dt+h_{i}(\mathbf{M}_{i}(T))+\sum_{s= 1}^{S}\Gamma_{is}(\mathbf{x}_{is})\] (11) s.t. \[\frac{d\mathbf{M}_{i}(t)}{dt}=f_{i}(\mathbf{M}_{i}(t),\mathbf{\alpha}_{is}^{ p}),\ \forall s=1,\cdots,S,\ \forall t\in[\tau_{s},\tau_{s+1}] \tag{12}\] \[\mathbf{M}_{i}(\tau_{1})=\mathbf{M}_{i}^{0} \tag{13}\]
We define the following plan-based decision variables:
\[z_{i}^{p}=\begin{cases}1&\text{if plan $p\in\mathcal{P}_{i}$ is selected for segment $i=1,\cdots,n$,}\\ 0&\text{otherwise.}\end{cases}\]
The set partitioning formulation, referred to as (\(\mathcal{SP}\)), minimizes total costs (Equation (14)), while enforcing the coupling constraints (Equation (15)) and ensuring that one plan is selected in each segment (Equation (16)). For completeness, we reformulate our four problems in Appendix A.3.
\[(\mathcal{SP})\qquad\min \sum_{i=1}^{n}\sum_{p\in\mathcal{P}_{i}}C_{i}^{p}z_{i}^{p}+\mathbf{d} ^{\top}\mathbf{y}\] (14) s.t. \[\left(\sum_{i=1}^{n}\sum_{p\in\mathcal{P}_{i}}\mathbf{u}_{sji}^{\top} \mathbf{\alpha}_{is}^{p}z_{i}^{p}\right)+\mathbf{v}_{sj}^{\top}\mathbf{y}\geq w_{sj},\ \forall s=1,\cdots,S,\ \forall j=1,\cdots,m_{s} \tag{15}\] \[\sum_{p\in\mathcal{P}_{i}}z_{i}^{p}=1\quad\forall i=1,\cdots,n\] (16) \[z_{i}^{p}\in\{0,1\}\quad\forall i=1,\cdots,n,\ \forall p\in \mathcal{P}_{i}\] (17) \[\mathbf{y}\in\mathbb{Z}^{q}\times\mathbb{R}^{r} \tag{18}\]
The set partitioning formulation (\(\mathcal{SP}\)) is equivalent to Problem (\(\mathcal{P}\)).
This reformulation eliminates structural complexities of Problem (\(\mathcal{P}\)) by pre-processing the continuous-time dynamics and the non-linear functions into plan-based variables and the corresponding input parameters. The (\(\mathcal{SP}\)) formulation therefore exhibits a mixed-integer linear optimization structure, but involves an exponential number of composite plan-based variables. In an instance with \(\mathcal{O}(D)\) decisions in each segment and at each epoch, the number of plans scales in \(\mathcal{O}(D^{n\times S})\). It is therefore intractable to even enumerate the full set of plans, let alone to estimate the cost parameters by running a dynamical model for each one (Equations (11)-(13)).
### Solving the linear optimization relaxation via column generation
To address this challenge, we generate plans iteratively via column generation. A restricted master problem (RMP) solves the linear relaxation of (\(\mathcal{SP}\)) over subsets of plans \(\mathcal{P}_{i}^{0}\subseteq\mathcal{P}_{i}\):
\[(RMP)\qquad\min \sum_{i=1}^{n}\sum_{p\in\mathcal{P}_{i}^{0}}C_{i}^{p}z_{i}^{p}+ \mathbf{d}^{\top}\mathbf{y}\] (19) s.t. \[\left(\sum_{i=1}^{n}\sum_{p\in\mathcal{P}_{i}^{0}}\mathbf{u}_{sji}^{ \top}\mathbf{\alpha}_{is}^{p}z_{i}^{p}\right)+\mathbf{v}_{sj}^{\top}\mathbf{y}\geq w_{sj}, \ \forall s=1,\cdots,S,\ \forall j=1,\cdots,m_{s} \tag{20}\] \[\sum_{p\in\mathcal{P}_{i}^{0}}z_{i}^{p}=1\quad\forall i=1,\cdots,n\] (21) \[z_{i}^{p}\geq 0\quad\forall i=1,\cdots,n,\ \forall p\in \mathcal{P}_{i}^{0}\] (22) \[\mathbf{y}\in\mathbb{R}^{q+r} \tag{23}\]
Let \(\lambda_{sj}\in\mathbb{R}_{+}\) be the dual variable of the coupling constraint (Equation (20)) and \(\mu_{i}\in\mathbb{R}\) the dual variable of the set partitioning constraint (Equation (21)). The pricing problem exploits
segment-wise decomposition to generate new plan-based variables or a certificate that none exists. Specifically, it seeks the plan in \(\mathcal{P}_{i}\) with the minimal reduced cost, for each segment \(i=1,\cdots,n\):
\[(PP_{i}) \min \left(\int_{0}^{T}g_{it}(\mathbf{M}_{i}(t))dt+h_{i}(\mathbf{M}_{i}(T))+ \sum_{s=1}^{S}\Gamma_{is}(\mathbf{x}_{is})-\sum_{s=1}^{S}\sum_{j=1}^{m_{s}}\lambda_{ sj}\mathbf{u}_{sji}^{\top}\mathbf{x}_{is}-\mu_{i}\right)\] (24) s.t. \[\mathbf{x}_{is}\in\mathcal{F}_{is},\ \forall s=1,\cdots,S \tag{25}\] \[\frac{d\mathbf{M}_{i}(t)}{dt}=f_{i}(\mathbf{M}_{i}(t),\mathbf{x}_{is}),\ \forall s=1,\cdots,S,\ \forall t\in[\tau_{s},\tau_{s+1}]\] (26) \[\mathbf{M}_{i}(\tau_{1})=\mathbf{M}_{i}^{0} \tag{27}\]
The pricing problem exhibits a discrete non-linear optimization structure with continuous-time dynamics. However, we can leverage temporal decomposition to formulate it via dynamic programming. By definition, the continuous-time state variable \(\mathbf{M}_{i}(t)\) encapsulates all necessary system history to compute the cost function, the constraints in \(\mathcal{F}_{is}\), and the transition function \(f_{i}(\cdot,\cdot)\), so the system follows Markovian dynamics. At each epoch \(s=1,\cdots,S\), the dynamic programming state variable is denoted by \(\mathbf{N}_{s}^{i}\), equal to \(\mathbf{M}_{i}(\tau_{s})\). The decision is \(\mathbf{x}_{is}\in\mathcal{F}_{is}\), and the transition function is governed by the continuous-time dynamics between \(\tau_{s}\) and \(\tau_{s+1}\), with initial condition determined by the state variable. The problem involves a per-period cost characterizing the continuous-time cost \(g_{it}(\cdot)\) accrued between \(\tau_{s}\) and \(\tau_{s+1}\), the cost of resource allocation \(\Gamma_{is}(\cdot)\), and the dual price of the coupling constraints. In addition, the problem involves a terminal cost characterizing the end-state cost \(h_{i}(\cdot)\) and the dual price of the set partitioning constraints. The Bellman equation is given as follows, using \(J_{s}(\cdot)\) to define the cost-to-go function at epoch \(s=1,\cdots,S\):
\[J_{s}\left(\mathbf{N}_{s}^{i}\right)= \min_{\mathbf{x}_{is}\in\mathcal{F}_{is}}\left\{\int_{\tau_{s}}^{ \tau_{s+1}}g_{it}(\mathbf{M}_{i}(t))dt+\Gamma_{is}(\mathbf{x}_{is})-\sum_{j=1}^{m_{s}} \lambda_{sj}\mathbf{u}_{sji}^{\top}\mathbf{x}_{is}+J_{s+1}\left(\mathbf{N}_{s+1}^{i} \right)\right\} \tag{28}\] \[\text{where }\mathbf{M}_{i}(\tau_{s})\leftarrow\mathbf{N}_{s}^{i};\ \frac{d\mathbf{M}_{i}(t)}{dt}=f_{i}(\mathbf{M}_{i}(t),\mathbf{x}_{is}),\ \forall t\in[\tau_{s},\tau_{s+1}];\ \text{and}\ \mathbf{N}_{s+1}^{i} \leftarrow\mathbf{M}_{i}(\tau_{s+1})\] \[J_{S+1}\left(\mathbf{N}_{S+1}^{i}\right)= h_{i}\left(\mathbf{N}_{S+1}^{i}\right)-\mu_{i} \tag{29}\]
The column generation procedure solves the linear relaxation of the \(\mathcal{SP}\) formulation to optimality. Starting with initial sets \(\mathcal{P}_{i}^{0}\) of plan-based variables, the RMP provides a feasible primal solution at each iteration, along with the dual variables \(\lambda_{sj}\) and \(\mu_{i}\). We then solve \((PP_{i})\) for each segment \(i=1,\cdots,n\); if its optimal value is negative, we expand the set \(\mathcal{P}_{i}^{0}\) with the new solution and proceed. Otherwise, the incumbent RMP solution is optimal for the \((\mathcal{SP})\) relaxation.
This column generation scheme separates the two complexities of the problem: the RMP handles the coupled resource allocation decisions via mixed-integer optimization (Equations (19)-(22)), and the pricing problem handles the continuous-time system dynamics via dynamic programming (Equations (28)-(29)). However, the pricing problem exhibits a continuous-state dynamic programming structure--a notoriously challenging class of problems. This motivates our state-clustering dynamic programming algorithm to solve it efficiently at each column generation iteration.
### A state-clustering dynamic programming algorithm for the pricing problem
Since the pricing problem is separable across segments \(i=1,\cdots,n\), we ignore the index \(i\) in this section; for instance, \(\mathbf{N}_{s}\) refers to the state, and \(\mathbf{x}_{s}\in\mathcal{F}_{s}\) to the decision with \(D_{s}=|\mathcal{F}_{s}|\).
**Exact algorithm.** Any finite-horizon discrete-decision continuous-state dynamic programming model can be solved via forward enumeration and backward induction, as follows:
1. _Forward enumeration_: From the initial state \(\mathbf{M}^{0}\), evaluate all possible decisions \(\mathbf{x}_{1},\cdots,\mathbf{x}_{S}\), and store all possible states at each epoch \(s=1,\cdots,S+1\) in an exhaustive set \(\mathcal{N}_{s}^{*}\).
2. _Backward induction_: Starting from \(s=S\), derive the optimal policy \(\mathbf{\pi}_{s}^{*}(\cdot)\) at epoch \(s=S,\cdots,1\) and update the cost-to-go function \(J_{s}(\cdot)\), as follows: \[\mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right) =\operatorname*{arg\,min}_{\mathbf{x}_{s}\in\mathcal{F}_{s}}\left\{ \int_{\tau_{s}}^{\tau_{s+1}}g_{t}(\mathbf{M}(t))dt+\Gamma_{s}(\mathbf{x}_{s})-\sum_{j =1}^{m_{s}}\lambda_{sj}\mathbf{u}_{sj}^{\top}\mathbf{x}_{s}+J_{s+1}\left(\mathbf{N}_{s+1} \right)\right\}\] (30) \[J_{s}\left(\mathbf{N}_{s}\right) =\int_{\tau_{s}}^{\tau_{s+1}}g_{t}(\mathbf{M}(t))dt+\Gamma_{s}\left( \mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right)\right)-\sum_{j=1}^{m_{s}}\lambda_{sj} \mathbf{u}_{sj}^{\top}\mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right)+J_{s+1}\left(\mathbf{N}_ {s+1}\right)\] (31) \[\text{where }\mathbf{M}(\tau_{s})\leftarrow\mathbf{N}_{s};\,\frac{d\mathbf{M}(t) }{dt}=f(\mathbf{M}(t),\mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right)),\,\,\forall t\in[ \tau_{s},\tau_{s+1}];\,\text{and }\mathbf{N}_{s+1}\leftarrow\mathbf{M}(\tau_{s+1})\]
This approach is detailed in Algorithm 2 in Appendix B.2. Per the Bellman principle of optimality, the optimal path is optimal at every decision point (Proposition 2).
**Proposition 2**: _Algorithm 2 returns a feasible and optimal solution to Equations (28)-(29)._
However, Algorithm 2 is highly computationally expensive due to exhaustive state enumeration in the forward-enumeration and backward-induction loops. Complexity grows in \(\mathcal{O}\left(\prod_{s=1}^{S}D_{s}\right)\), or \(\mathcal{O}(D^{S})\) if \(D_{s}=\mathcal{O}(D),\,\,\forall s\), which can quickly become very large as the planning horizon and the decision space increase. Even for a simple eight-period problem with two decisions that can each take four values, \(\prod_{s=1}^{S}D_{s}=16^{8}\sim 1\) billion, which severely hinders exact dynamic programming.
As noted earlier, most approximate dynamic programming and reinforcement learning methods depend on learning approximations of the cost-to-go function \(J_{s}(\mathbf{N}_{s})\) and/or the optimal policy function \(\mathbf{\pi}_{s}^{*}(\mathbf{N}_{s})\). In our problem, however, both functions change from one column generation iteration to the next, due to the updates in the dual values \(\lambda_{sj}\) from the RMP. Thus, standard approaches would need be to re-learned at every iteration, resulting in significant inefficiencies.
Instead, we propose a state-clustering algorithm that exploits the concentration of states, but not the cost function. This approach proceeds by aggregation to reduce the state space during the forward-enumeration step of dynamic programming. Note that forward enumeration does not depend on the cost function, so the state clustering algorithm needs to be applied only once at the beginning of the column generation algorithm. Then, we leverage the backward-induction algorithm with the clustered state space to speed up the pricing problem at each column generation iteration.
**A state-clustering acceleration.** Many dynamical systems exhibit a natural concentration of states. In vaccine allocation, for example, there are only mild differences between a no-vaccine baseline and an allocation of just a few vaccines; similarly, two allocations that merely swap decisions between epochs \(s\) and \(s+1\) may yield similar future outcomes. We leverage this observation to approximate the full set of states \(\mathcal{N}_{s}^{*}\) by a clustered state space \(\mathcal{N}_{s}\) at each epoch \(s=1,\cdots,S\).
Specifically, at epoch \(s=1,\cdots,S\), we enumerate all pairs of states in \(\mathcal{N}_{s}\) and all decisions in \(\mathcal{F}_{s}\); for each one, we compute the next state at period \(s+1\) and the corresponding cost accrued between times \(\tau_{s}\) and \(\tau_{s+1}\). This yields \(Q=|\mathcal{N}_{s}|\times|\mathcal{F}_{s}|\) state-decision pairs \((\boldsymbol{N}_{s}^{1},\boldsymbol{x}_{s}^{1}),\cdots,(\boldsymbol{N}_{s}^{Q},\boldsymbol{x}_{s}^{Q})\) with \(Q\) subsequent states \(\boldsymbol{N}_{s+1}^{1},\cdots,\boldsymbol{N}_{s+1}^{Q}\) and corresponding cost parameters \(\chi_{s}^{1},\cdots,\chi_{s}^{Q}\). These are defined as follows:
\[\boldsymbol{M}^{q}(\tau_{s})\leftarrow\boldsymbol{N}_{s}^{q}, \quad\frac{d\boldsymbol{M}^{q}(t)}{dt}=f(\boldsymbol{M}^{q}(t),\boldsymbol{x}_ {s}^{q}),\quad\boldsymbol{N}_{s+1}^{q}\leftarrow\boldsymbol{M}^{q}(\tau_{s+1} ),\quad\forall q=1,\cdots,Q\] \[\chi_{s}^{q}=\int_{\tau_{s}}^{\tau_{s+1}}g_{t}(\boldsymbol{M}^{q} (t))dt+\boldsymbol{1}\{s=S\}(h(\boldsymbol{M}^{q}(T))),\quad\forall q=1, \cdots,Q\]
We group states \(\boldsymbol{N}_{s+1}^{1},\cdots,\boldsymbol{N}_{s+1}^{Q}\) into \(\Omega_{s+1}\) clusters, and treat the centroids as representative states in \(\mathcal{N}_{s+1}\) at epoch \(s+1\). We then apply the backward-induction algorithm in the clustered state space. By design, this procedure can reduce the state space significantly as long as \(\Omega_{s+1}\ll|\mathcal{N}_{s}|\times|\mathcal{F}_{s}|\). Obviously, the resulting dynamic program is an approximation of the full dynamic program, but the optimality loss can be small if the full state space is indeed concentrated.
The next question lies in the design of the clustering algorithm. Algorithms based on global similarity measures have at least quadratic complexity to compute the entire distance matrix. In our setup, \(|\mathcal{N}_{s}|\times|\mathcal{F}_{s}|\) is already very large, so a complexity of \(\mathcal{O}\left(|\mathcal{N}_{s}|^{2}\times|\mathcal{F}_{s}|^{2}\right)\) would be intractable. Most linear-time clustering algorithms, such as \(k\)-means, cannot effectively bound the diameter of each cluster, potentially leading to significant error propagation in non-linear dynamical systems. Instead, we propose a linear-time clustering algorithm with guarantees on cluster diameter in an \(\ell_{\infty}\)-space. Unlike \(k\)-means, the algorithm creates clusters dynamically without a pre-specified number of clusters. The benefits of our approach are showed theoretically in Proposition 4 (namely, the bound on cluster diameter bounds the global approximation error) and numerically in Section 5 (namely, the smaller global approximation error induces stronger solutions than a \(k\)-means benchmark).
Specifically, for each cluster \(\omega\in\{1,\cdots,\Omega_{s+1}\}\), we store: (i) the number of data points \(\eta(\omega)\) and the sum of all states \(\boldsymbol{N}^{\Sigma}(\omega)\) (to define the centroid); (ii) the element-wise minimum and maximum states, \(\underline{\boldsymbol{N}}(\omega)\) and \(\overline{\boldsymbol{N}}(\omega)\) (to bound the cluster diameter); and (iii) the set of state-decision pairs \(\mathcal{X}(\omega)\) that lead to the cluster and the total cost \(c(\omega)\) (to define cost functions in the clustered state space). For a diameter tolerance \(\varepsilon\), we cluster the states \(\boldsymbol{N}_{s+1}^{1},\cdots,\boldsymbol{N}_{s+1}^{Q}\) as follows
1. We assign the first state \(\boldsymbol{N}_{s+1}^{1}\) to a new cluster \(\omega_{1}\). We initialize: \(\eta(\omega_{1})\gets 1;\;\boldsymbol{N}^{\Sigma}(\omega_{1})\gets \boldsymbol{N}_{s+1}^{1};\;\underline{\boldsymbol{N}}(\omega_{1}) \gets\boldsymbol{N}_{s+1}^{1};\;\overline{\boldsymbol{N}}(\omega_{1}) \gets\boldsymbol{N}_{s+1}^{1};\;\mathcal{X}(\omega_{1})\leftarrow\{( \boldsymbol{N}_{s}^{1},\boldsymbol{x}_{s}^{1})\};\;c^{\Sigma}(\omega_{1}) \leftarrow\chi_{s}^{1}\)
2. We assign \(\mathbf{N}_{s+1}^{q}\) to a cluster based on its \(\ell_{\infty}\) distance to the minimum and maximum states: * If \(\min_{\ell=1,\cdots,k}\max(\|\mathbf{N}_{s+1}^{q}-\underline{\mathbf{N}}(\omega_{\ell})\| _{\infty},\|\mathbf{N}_{s+1}^{q}-\overline{\mathbf{N}}(\omega_{\ell})\|_{\infty})\leq\varepsilon\), then \(\mathbf{N}^{q}\) is assigned to \(\omega_{l^{*}}\) where \(\ell^{*}\) denotes the index of the cluster that attains the minimum: \[\eta(\omega_{l^{*}})\leftarrow\eta(\omega_{l^{*}})+1;\ \mathbf{N}^{\Sigma}( \omega_{l^{*}})\leftarrow\mathbf{N}^{\Sigma}(\omega_{l^{*}})+\mathbf{N}_{s+1}^{q};\ \underline{\mathbf{N}}(\omega_{l^{*}})\leftarrow\min( \underline{\mathbf{N}}(\omega_{l^{*}}),\mathbf{N}_{s+1}^{q})\] \[\overline{\mathbf{N}}(\omega_{l^{*}})\leftarrow\max(\overline{\mathbf{N}} (\omega_{l^{*}}),\mathbf{N}_{s+1}^{q});\ \mathcal{X}(\omega_{l^{*}})\leftarrow\mathcal{X}(\omega_{l^{*}})\cup\{(\mathbf{N} _{s}^{q},\mathbf{x}_{s}^{q})\};\ c^{\Sigma}(\omega_{l^{*}})\gets c^{\Sigma}( \omega_{l^{*}})+\chi_{s}^{q},\] * Otherwise, \(\mathbf{N}_{s+1}^{q}\) is assigned to a new cluster \(\omega_{k+1}\): \(\eta(\omega_{k+1})\gets 1\); \(\mathbf{N}^{\Sigma}(\omega_{k+1})\leftarrow\mathbf{N}_{s+1}^{q}\); \(\underline{\mathbf{N}}(\omega_{k+1})\leftarrow\mathbf{N}_{s+1}^{q}\); \(\overline{\mathbf{N}}(\omega_{k+1})\leftarrow\mathbf{N}_{s+1}^{q}\); \(\mathcal{X}(\omega_{k+1})\leftarrow\{(\mathbf{N}_{s}^{q},\mathbf{x}_{s}^{q})\}\); \(\ c^{\Sigma}(\omega_{k+1})\leftarrow\chi_{s}^{q}\)
3. We retrieve each cluster's centroid \(\mathbf{N}_{s+1}(\omega_{\ell})=\mathbf{N}^{\Sigma}(\omega_{\ell})/\eta(\omega_{\ell})\) and define the cost function in the clustered state space, as follows: \(c(\mathbf{N}_{s},\mathbf{x}_{s})=\frac{c^{\Sigma}(\omega_{\ell})}{\eta(\omega_{\ell})}+ \Gamma_{s}(\mathbf{x}_{s})\) for all \((\mathbf{N}_{s},\mathbf{x}_{s})\in\mathcal{X}(\omega_{\ell})\).
We solve a pricing problem approximation in the clustered state space, via backward induction:
\[\mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right) =\operatorname*{arg\,min}_{\mathbf{x}_{s}\in\mathcal{F}_{s}}\left\{c( \mathbf{N}_{s},\mathbf{x}_{s})-\sum_{j=1}^{m_{s}}\lambda_{sj}\mathbf{u}_{sj}^{\top}\mathbf{x}_ {s}+J_{s+1}\left(\mathbf{N}_{s+1}\right)\right\} \tag{32}\] \[J_{s}\left(\mathbf{N}_{s}\right) =c(\mathbf{N}_{s},\mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right))-\sum_{j=1} ^{m_{s}}\lambda_{sj}\mathbf{u}_{sj}^{\top}\mathbf{\pi}_{s}^{*}\left(\mathbf{N}_{s}\right) +J_{s+1}\left(\mathbf{N}_{s+1}\right)\] (33) \[\text{where }\mathbf{M}(\tau_{s})\leftarrow\mathbf{N}_{s};\ \frac{d\mathbf{M}(t)}{dt}=f(\mathbf{M}(t),\mathbf{\pi}_{s}^{*} \left(\mathbf{N}_{s}\right)),\ \forall t\in[\tau_{s},\tau_{s+1}];\ \text{and}\ \mathbf{N}_{s+1} \leftarrow\mathbf{M}(\tau_{s+1})\]
Pseudo-code is given in Algorithm 3 in Appendix 0.B.2. The algorithm makes a single pass through all the states, thus terminating in linear time (as long as the number of clusters is much smaller than the number of states). Still, the algorithm controls the maximum pair-wise distance within each cluster, guaranteeing an \(\ell_{\infty}\)-diameter within \(\varepsilon\). This is formalized in Proposition 3.
**Proposition 3**: _At each decision epoch \(s=1,\cdots,S+1\), each cluster \(\omega_{1},\cdots,\omega_{k}\) satisfies:_
\[\|\overline{\mathbf{N}}(\omega_{\ell})-\underline{\mathbf{N}}(\omega_{\ell})\|_{\infty} \leq\varepsilon. \tag{34}\]
This guarantee allows us to bound the global approximation error proportionally to \(\varepsilon\), in Proposition 4. Note that the error grows exponentially in time, which is typical in dynamical systems (Sager et al. 2012). Nonetheless, by controlling the cluster diameter, the state-clustering algorithm can control the global error. Our results in Section 5.1 show that, in practice, our state-clustering algorithm induces moderate errors in small instances and can scale to large instances. Moreover, by controlling the cluster diameter, it leads to stronger optimization solutions than \(k\)-means.
**Proposition 4**: _Assume that the transition function \(f(.,\mathbf{x}_{s})\) is \(L_{s}\)-Lipschitz in the state variables in the \(l_{\infty}\) metric, for \(s\in\{1,\cdots,S+1\}\). For each true state \(\mathbf{N}_{s}^{*}\in\mathcal{N}_{s}^{*}\) (from Algorithm 2), there exists a clustered state \(\mathbf{N}_{s}\in\mathcal{N}_{s}\) (from Algorithm 3) such that:_
\[\|\mathbf{N}_{s}^{*}-\mathbf{N}_{s}\|_{\infty}\leq \begin{cases}0&\text{if }s=1\\ \varepsilon\sum\limits_{\sigma=3}^{s+1}\exp\left(\sum\limits_{\nu=\sigma}^{s}L _{\nu-1}(\tau_{\nu}-\tau_{\nu-1})\right),&\text{for all }s\geq 2\end{cases} \tag{35}\]
### A tri-partite branching disjunction
We embed the column generation procedure into a branch-and-bound structure to retrieve an integral solution to the (\(\mathcal{SP}\)) formulation. We proceed via multi-phase branching: we first branch on the mixed-integer variables \(\boldsymbol{y}\); we then branch on the natural resource allocation variables \(\boldsymbol{x}_{is}\); and we finally restore integrality of the plan-based variables \(z_{i}^{p}\) via a tri-partite branching disjunction.
_Branching on mixed-integer variables._ In the case where the formulation involves discrete variables \(\boldsymbol{y}\), we first branch on those variables whenever one of its components is fractional:
\[\underbrace{(y_{\ell}\leq\lfloor\widehat{y}_{\ell}\rfloor)}_{\text{left branch}}\vee \underbrace{(y_{\ell}\geq\lceil\widehat{y}_{\ell}\rceil)}_{\text{right branch}},\text{ where }\widehat{y}_{\ell}\text{ denotes the }\ell^{\text{th}}\text{ component of the incumbent solution} \tag{36}\]
_Branching on the natural variables._ To avoid building deep and one-sided trees and to maintain the structure of the pricing problem, a typical approach in branch-and-price involves branching on the natural variables (i.e., the resource allocation decisions \(\boldsymbol{x}_{is}\) in our case) as opposed to branching on the composite plan-based variables (i.e., the variables \(z_{i}^{p}\)). Let \(\boldsymbol{x}_{is}(\boldsymbol{z})\in\mathbb{R}^{d_{is}}\) denote the resource allocation variable in segment \(i=1,\cdots,n\) and epoch \(s=1,\cdots,S\) for a plan-based solution \(\boldsymbol{z}\); and let \(x_{is}^{k}(\boldsymbol{z})\in\mathbb{R}\) be its \(k^{\text{th}}\) component, for \(k=1,\cdots,d_{is}\). A column generation solution \(\widehat{\boldsymbol{z}}\) leads to infeasible resource allocations if \(\boldsymbol{x}_{is}(\widehat{\boldsymbol{z}})\notin\mathcal{F}_{is}\). In that case, we exploit the discreteness of the feasible region \(\mathcal{F}_{is}\) to create a valid disjunction. Since \(\mathcal{F}_{is}\) does not necessarily comprise contiguous integers, we introduce dedicated floor and ceiling functions: \(\lfloor\boldsymbol{a}\rfloor_{is}^{k}=\max\{\beta\leq a^{k}:\exists\boldsymbol{ b}\in\mathcal{F}_{is},b^{k}=\beta\}\) and \(\lceil\boldsymbol{a}\rceil_{is}^{k}=\min\{\beta\geq a^{k}:\exists\boldsymbol{ b}\in\mathcal{F}_{is}:b^{k}=\beta\}\) (where \(a^{k}\) and \(b^{k}\) are the \(k^{\text{th}}\) components of \(\boldsymbol{a}\) and \(\boldsymbol{b}\)).
Armed with these notations, we can then define the following valid branching disjunction:
\[\underbrace{\left(x_{is}^{k}(\boldsymbol{z})\leq\lfloor\boldsymbol{x}_{is}( \widehat{\boldsymbol{z}})\rfloor_{is}^{k}\right)}_{\text{left branch}}\vee \underbrace{\left(x_{is}^{k}(\boldsymbol{z})\geq\lceil\boldsymbol{x}_{is}( \widehat{\boldsymbol{z}})\rceil_{is}^{k}\right)}_{\text{right branch}},\text{ with }\boldsymbol{x}_{is}(\widehat{\boldsymbol{z}})=\sum_{p\in\mathcal{P}_{i}} \boldsymbol{\alpha}_{is}^{p}\widehat{z}_{i}^{p} \tag{37}\]
Out of these disjunctions, we select one that corresponds to a variable \(x_{is}^{k}(\widehat{\boldsymbol{z}})\) with the largest value of \(\min\left\{x_{is}^{k}(\widehat{\boldsymbol{z}})-\lfloor\boldsymbol{x}_{is}( \widehat{\boldsymbol{z}})\rfloor_{is}^{k},\lceil\boldsymbol{x}_{is}(\widehat{ \boldsymbol{z}})\rceil_{is}^{k}-x_{is}^{k}(\widehat{\boldsymbol{z}})\right\}\). This branching strategy is an analog to the _most fractional_ branching strategy in integer optimization. Importantly, this disjunction preserves the structure of the pricing problem, by merely restricting the search from the full feasible regions \(\mathcal{F}_{is}\) to their subregions defined by the corresponding lower-bound and upper-bound constraints.
_Tri-partite branching._ By design, the above branching schemes enforce integrality of the \(\boldsymbol{y}\) variables and ensure that the plan-based variables define feasible resource allocation decisions \(\boldsymbol{x}_{is}\). However, they may not guarantee integral plan-based variables \(z_{i}^{p}\); for instance, we may obtain \(\alpha_{is}^{p_{1}}=4\), \(\alpha_{is}^{p_{2}}=8\), \(z_{i}^{p_{1}}=z_{i}^{p_{2}}=0.5\), and \(6\in\mathcal{F}_{is}\). In linear problems, this solution can be brought into an equivalent feasible solution by considering the "average plan" \(p^{*}\) with \(\alpha_{is}^{p^{*}}=6\). In our problem, however, the non-convex system dynamics break this equivalence because \(C_{i}^{p^{*}}\neq 0.5C_{i}^{p_{1}}+0.5C_{i}^{p_{2}}\) in general. Thus, the plan \(p^{*}\) is no longer guaranteed to form an optimal solution to Problem (\(\mathcal{P}\)).
A direct way to enforce the integrality of the plan-based variables would be to create disjunctions of the form \((z_{i}^{p}=0)\vee(z_{i}^{p}=1)\). However, as noted earlier, this disjunction can lead to weak and imbalanced tree structures. Moreover, it breaks the structure of the pricing problem--notably, it is difficult to seek the "second-best" solution in the branch associated with the \(z_{i}^{p}=0\) disjunction.
Instead, we devise a novel tri-partite branching disjunction to handle the non-linearities. Consider a node with a solution \(\widehat{\mathbf{z}}\). Assume that \(\mathbf{x}_{is}(\widehat{\mathbf{z}})\in\mathcal{F}_{is}\) for all \(i,s\) but that there exists a fractional plan-based variable \(z_{i}^{p}\in(0,1)\). Let \(\alpha_{is}^{pk}(\mathbf{z})\in\mathbb{R}\) denote the \(k^{\text{th}}\) component of \(\mathbf{\alpha}_{is}^{p}\) for \(k=1,\cdots d_{is}\). There must exist a segment \(i\), an epoch \(s\), a component \(k\) and a plan \(p_{0}\) such that:
\[\mathbf{x}_{is}(\widehat{\mathbf{z}})=\sum_{p\in\mathcal{P}_{i}}\mathbf{\alpha}_{is}^{p} \widehat{z}_{i}^{p}\in\mathcal{F}_{is}\text{ and }\alpha_{is}^{k,p_{0}}\neq x_{is}^{k}(\widehat{\mathbf{z}}) \text{ and }\widehat{z}_{i}^{p_{0}}>0. \tag{38}\]
Intuitively, \(\mathbf{x}_{is}(\widehat{\mathbf{z}})\) defines a "promising" resource allocation decision for segment \(i\) at epoch \(s\). Accordingly, we create a corresponding branch in the tree. To retain a mutually exclusive and collectively exhaustive tree, we create two additional branches with lower and higher resource allocations. Formally, our tri-partite branching disjunction is defined via the following disjunction:
\[\underbrace{\left(x_{is}^{k}(\mathbf{z})<x_{is}^{k}(\widehat{\mathbf{z}})\right)}_{ \text{left branch}}\vee\underbrace{\left(x_{is}^{k}(\mathbf{z})=x_{is}^{k}( \widehat{\mathbf{z}})\right)}_{\text{middle branch}}\vee\underbrace{\left(x_{is}^{ k}(\mathbf{z})>x_{is}^{k}(\widehat{\mathbf{z}})\right)}_{\text{right branch}},\text{ with }\mathbf{x}_{is}(\widehat{\mathbf{z}})=\sum_{p\in\mathcal{P}_{i}}\mathbf{\alpha}_{is}^{p} \widehat{z}_{i}^{p} \tag{39}\]
A final question lies in variable selection to determine which \(x_{is}^{k}(\widehat{\mathbf{z}})\) to branch upon. We extend the _most fractional_ principle, by selecting the variable with the largest weighted difference between the plan-based allocations and the resource allocations from the column generation solution. Formally, we select \(x_{is}^{k}(\widehat{\mathbf{z}})\) such that the \((i,k,s)\) tuple maximizes \(\sum_{p\in\mathcal{P}_{i}}\widehat{z}_{i}^{p}|\alpha_{is}^{pk}-x_{is}^{k}( \widehat{\mathbf{z}})|\).
Again, this branching disjunction does not impact the structure of the pricing problem, which can easily accommodate lower-bounding and upper-bounding constraints on the decision variables. As we shall establish in Theorem 3.1, this procedure eliminates any infeasible solution satisfying the conditions in Equation (38) and yields integer plan-based variables \(z_{i}^{p}\) upon termination.
### Summary of the branch-and-price algorithm
Algorithm 3.1 summarizes our solution approach. In each node, the algorithm solves the linear relaxation of \(\mathcal{SP}\) via column generation (Step 1), iterating between the RMP (Step 1.1) and the dynamic programming algorithm for the pricing problem (Step 1.2), until convergence. We also apply in Step 2 an optional upper-bounding scheme for acceleration: in our resource allocation problems for instance, we solve a restricted master problem with integrality constraints; in the vaccination centers problem, this becomes much more cumbersome, so we fix facility construction variables based on the linear relaxation solution and solve the subsequent vaccine allocation problem. The algorithm then proceeds to bi-partite branching when the variables \(\mathbf{y}\) are not integral or when the
natural resource allocation variables \(\mathbf{x}\) are infeasible (Steps 3 and 4), and to tri-partite branching to restore integrality in plan-based variables (Step 5). This overall branching strategy is illustrated in Figure 3. Step 6 corresponds to the node selection step--we implemented breadth-first and depth-first search strategies, and found comparable performance. The algorithm uses bounding and feasibility rules to prune leaves (Step 2) and terminates when no remaining leaf is active (Step 6).
Theorem 1 establishes the convergence and exactness of the algorithm, as long as the pricing problem is solved to optimality (Step 1.2). Exactness stems from the facts that the algorithm maintains valid lower and upper bounds. Finiteness follows from the fact that the branching disjunction eliminates any infeasible solution and from the finiteness of the decision space. Note that the pricing problem only needs to be solved to optimality upon convergence; therefore, an exact acceleration involves applying the state-clustering dynamic programming approximation in initial iterations and then turning to the exact algorithm. In our experiments, we merely apply the state-clustering approximation to retain tractability, but we evaluate the performance of the optimized solution in view of the full dynamical system--not the clustered approximation (Section 5.3).
**Theorem 1**: _The branch-and-price algorithm with the tri-partite branching disjunction converges in a finite number of iterations and returns an optimal solution to the \(\mathcal{SP}\) formulation._
Figure 3: Illustration of the branching structure, combining bi-partite branching and tri-partite branching. Red nodes are pruned by bound; blue nodes are pruned by feasibility; white nodes trigger bi-partite branching; blue squares trigger tri-partite branching; the green node indicates the optimal solution.
## 5 Experimental results
We evaluate our methodology on the four problems described in Section 3. All experiments were run on Julia v1.5.2 with the JuMP package (Dunning et al., 2017) on 10-core i9-10900k CPU, with a 3-hour limit and a 0.1% tolerance. All instances and code are made available online.1
Footnote 1: [https://github.com/martinrame24/Prescriptive-Contagion-Analytics](https://github.com/martinrame24/Prescriptive-Contagion-Analytics)
### Benefits of the state-clustering dynamic programming algorithm
Our methodology hinges on the ability to solve the pricing problem efficiently. We first compare our state-clustering dynamic programming algorithm (Algorithm 3) to the exact dynamic programming algorithm based on state space enumeration (Algorithm 2). Table 1 reports computational times, broken down into initialization times to create the state space ("init.") and dynamic programming times ("DP"). For all instances solved by the exact algorithm, we report the Median Average Percentage Error (MAPE) and Median Absolute Error (MAE) between clustered and true states.
Note, first, that the number of states grows very quickly, reflecting the "curse of dimensionality" in dynamic programming (Powell, 2022). Even by exploiting the segment-based decomposition in the pricing problem, the state space scales in \(\mathcal{O}(n\times D^{S})\), with trillions of states in our largest instances.
Exhaustive enumeration does not scale to even small instances, requiring minutes to converge with 4 segments and 4 decision epochs. The state-clustering algorithm considerably reduces the state space--by up to a factor of \(10^{10}\) to \(10^{11}\). Thus, it accelerates convergence by two orders of magnitude in small instances (seconds versus minutes), and scales to the largest instance with 51 states, 10 epochs and 21 decisions in minutes when the exact algorithm fails to terminate.
Moreover, the state-clustering algorithm provides a high-quality approximation of the full state space. With \(\varepsilon=0.01\), the algorithm results in a median absolute error up to \(4.9\times 10^{-4}\) and a median absolute percentage error up to \(13.6\%\). A tolerance of \(\varepsilon=0.002\) further reduces the MAE within \(6\times 10^{-5}\) and the MAPE within \(1.1\%\). As expected, the quality of the approximation deteriorates over time due to the propagation of errors. Whereas we cannot estimate the error over longer time horizons because of the limitations of the enumerative algorithm, this observation could motivate dynamic implementations of our methodology, for instance, by re-evaluating the system's dynamics and re-optimizing resource allocations at each epoch. Nonetheless, these results suggest that the drift in state approximation remains moderate, underscoring the critical role of the state-clustering algorithm to enhance the tractability of the pricing problem at small costs in terms of accuracy.
Finally, we provide in Appendix D a detailed comparison of our state-clustering algorithm (Algorithm 3) and a \(k\)-means benchmark. By design, the \(k\)-means algorithm leads to a smaller average
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline & & & & & & \multicolumn{3}{c}{**Time (sec.)**} & \multicolumn{3}{c}{**MAPE (\%)**} & \multicolumn{3}{c}{**MAE (\(\times 10^{-4}\))**} \\ \cline{4-13} \(n\) & \(S\) & \(D\) & Method & Tolerance & \(|S|\) & Init. & DP & \(s=1\) & \(s=2\) & \(s=3\) & \(s=4\) & \(s=1\) & \(s=2\) & \(s=3\) & \(s=4\) \\ \hline
51 & 4 & 6 & Enumeration & — & 79,305 & 20.01 & 1.33 & — & — & — & — & — & — & — \\ & & & Clustering & \(\varepsilon=0.002\) & 2,350 & 3.33 & 0.08 & 0.00 & 0.00 & 0.48 & 0.69 & 0.00 & 0.00 & 0.29 & 0.27 \\ & & & Clustering & \(\varepsilon=0.005\) & 1,458 & 2.20 & 0.04 & 0.00 & 0.00 & 1.32 & 2.08 & 0.00 & 0.00 & 1.04 & 0.95 \\ & & & Clustering & \(\varepsilon=0.01\) & 982 & 1.57 & 0.03 & 0.00 & 0.01 & 7.22 & 9.38 & 0.00 & 0.01 & 3.72 & 4.02 \\ \hline
51 & 4 & 11 & Enumeration & — & 821K & 194 & 17.1 & — & — & — & — & — & — & — \\ & & & Clustering & \(\varepsilon=0.002\) & 4,971 & 13.3 & 0.33 & 0.00 & 0.08 & 0.57 & 0.86 & 0.00 & 0.08 & 0.37 & 0.36 \\ & & & Clustering & \(\varepsilon=0.005\) & 2,533 & 6.74 & 0.19 & 0.00 & 0.12 & 2.86 & 3.67 & 0.00 & 0.29 & 1.59 & 1.42 \\ & & & Clustering & \(\varepsilon=0.01\) & 1,521 & 4.29 & 0.10 & 0.00 & 0.11 & 6.27 & 10.02 & 0.00 & 0.29 & 3.58 & 4.23 \\ \hline
51 & 4 & 21 & Enumeration & — & 10.4M & 2,950 & 234 & — & — & — & — & — & — & — \\ & & & Clustering & \(\varepsilon=0.002\) & 10,615 & 74.6 & 1.09 & 0.00 & 0.08 & 0.87 & 1.08 & 0.00 & 0.08 & 0.52 & 0.41 \\ & & & Clustering & \(\varepsilon=0.005\) & 4,477 & 26.9 & 0.58 & 0.00 & 0.09 & 3.51 & 5.12 & 0.00 & 0.01 & 1.86 & 1.57 \\ & & & Clustering & \(\varepsilon=0.01\) & 2,473 & 14.8 & 0.29 & 0.00 & 0.22 & 7.17 & 13.64 & 0.00 & 0.28 & 3.78 & 4.91 \\ \hline
51 & 6 & 21 & Enumeration & — & 4.59B & n/a & n/a & — & — & — & — & — & — & — \\ & & & Clustering & \(\varepsilon=0.002\) & 29,020 & 370 & 7.41 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ & & & Clustering & \(\varepsilon=0.005\) & 9,212 & 75.0 & 1.48 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ & & & Clustering & \(\varepsilon=0.01\) & 4,328 & 31.3 & 0.55 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ \hline
51 & 8 & 21 & Enumeration & — & 2.06T & n/a & n/a & — & — & — & — & — & — & — \\ & & & Clustering & \(\varepsilon=0.002\) & 56,635 & 1,057 & 13.6 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ & & & Clustering & \(\varepsilon=0.005\) & 15,076 & 147 & 3.50 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ & & & Clustering & \(\varepsilon=0.01\) & 6,341 & 52.4 & 1.47 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ \hline
51 & 10 & 21 & Enumeration & — & 893T & n/a & n/a & — & — & — & — & — & — & — \\ & & & Clustering & \(\varepsilon=0.002\) & 89,685 & 2,586 & 32.8 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ & & & Clustering & \(\varepsilon=0.005\) & 21,308 & 238 & 5.50 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ & & & Clustering & \(\varepsilon=0.01\) & 8,353 & 77.9 & 2.13 & n/a & n/a & n/a & n/a & n/a & n/a & n/a \\ \hline \multicolumn{13}{l}{“n/a” means that the exact dynamic programming algorithm does not terminate due to memory limitations.} \\ \end{tabular}
\end{table}
Table 1: State-clustering dynamic programming against exhaustive enumeration (vaccine allocation problem).
distance but a larger worst-case distance within each cluster. Per Proposition 4, this worst-case error can propagate over time in dynamical systems. In turn, our algorithm results in much higher fidelity across the time horizon, with significant benefits in the downstream optimization--achieving a 1.4% to 25% cost reduction against the \(k\)-means benchmark for the vaccine allocation problem.
### Benefits of the branch-and-price algorithm
Armed with the state-clustering algorithm, we now solve our four prescriptive contagion analytics problems. Table 2 reports computational times, broken down into the state-clustering initialization, the restricted master problems, and the pricing problems. It also reports the solution and the lower bound obtained via column generation, via bi-partite branching (i.e., Algorithm 1 without Step 5), and with the full branch-and-price algorithm with tri-partite branching. The column generation and bi-partite branching solutions are obtained by solving the restricted master problem with integrality constraints upon convergence. For brevity, the table reports results for "easy", "medium" and "hard" instances. Full results in Appendix C.2 show the robustness of these findings.
Note, first, that the column generation algorithm is highly scalable for the three resource allocation problems: vaccine allocation, content promotion, and congestion mitigation. Indeed, column generation solves the set partitioning relaxation in seconds even in the harder instances. The restricted master problem is virtually instantaneous but the pricing problem is more time-consuming due to the non-linear system dynamics--reinforcing the need for an efficient dynamic programming algorithm (Section 5.1). In fact, the state-clustering algorithm shifts the complexity away from the dynamic programming algorithm itself--enabling the online column generation algorithm to terminate in seconds--toward the offline state space clustering--which can take up to minutes.
Moreover, the column generation algorithm can derive high-quality solutions, thanks to the tight set partitioning formulation for these problems. In the vaccine allocation problem in particular, the column generation algorithm yields solutions within a 1-2% optimality gap. For the content promotion and congestion mitigation problems, column generation generally performs well but can also leave a larger optimality gap--due to the sparsity constraint in content promotion and to the coordination of prevention and treatment resources in congestion mitigation. This limitation motivates the branch-and-price algorithm to further improve solutions and tighten optimality gaps.
Turning to our main observation, the branch-and-price algorithm solves every instance of the vaccine allocation, content promotion and congestion mitigation problems to near-optimality. In fact, it reaches optimality within a 0.1% tolerance in most instances, and leaves a small optimality gap otherwise. The number of nodes generally remains moderate but can grow larger for harder instances, leading to higher computational requirements. As a result, the branch-and-price algorithm terminates in minutes to hours, but can still lead to strong solution improvements from column
generation. These benefits can go up to 6-12% in the content promotion and congestion mitigation cases. Even in vaccine allocation, where column generation leaves a small gap, the branch-and-price solution yields a 1% improvement, amounting to an extra 2,000 lives saved over three months.
Finally, note that the standard bi-partite branching scheme is not sufficient to guarantee convergence to an optimal solution. Notably, it leaves a 1% optimality gap in content promotion instances
and a 0.2% gap in congestion mitigation instances. This result underscores the impact of system non-linearities on the branch-and-price algorithm. Instead, the tri-partite branching scheme developed in this paper can be instrumental to ensure convergence to an optimal solution, while retaining an efficient branching tree and an efficient pricing problem structure.
Next, recall that the vaccination centers problem features a joint facility location and resource allocation structure, leading to a looser set-partitioning relaxation. As a result, the column generation heuristic leaves large optimality gaps up to 30%. Restoring integrality via branch-and-price is much more challenging. Interestingly, the restricted master problem ends up being more time-consuming than the pricing problem, thus highlighting the joint complexities from coupled resource allocation and from non-linear system dynamics. Yet, the branch-and-price algorithm leads to significant solution improvements and gap reductions: it returns an optimal solution in the "easy" two-facility instance, a near-optimal solution in the "hard" two-facility instances, and moderate optimality gaps in the three-facility cases (0-4% versus 3-30% with column generation).
To shed more light into these findings, Table 10 reports results for the all ten groups constituting the full United States. Despite variations in the problem's complexity, our core observations are highly robust: the column generation algorithm leaves consistently large optimality gaps, and the branch-and-price algorithm significantly improves the solution and tightens the gap. Ultimately, the branch-and-price algorithm results in an estimated 5,000 extra lives saved across the country over a 6-week period--a 17% improvement over the column generation solution.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & & & & & \multicolumn{3}{c}{CPU times (s)} & \multicolumn{3}{c}{Solution quality} \\ \cline{4-13} Group & n & S & D & F & Method & Nodes & Init. & RMP & PP & Total & Upper bound & Solution & Gap \\ \hline A & 7 & 6 & 12 & 3 & Column Generation & 1 & 47 & 0.27 & 0.31 & 1.11 & 8,200 & 6,822 & 19.07\% \\ & & & & & Tri-partite B\&P & 16,358 & 47 & 363 & 209 & 10,955 & 8,048 & 7,716 & 4.13\% \\ \hline B & 7 & 6 & 10 & 3 & Column Generation & 1 & 43 & 0.06 & 0.34 & 0.88 & 3,659 & 3,322 & 9.20\% \\ & & & & & Tri-partite B\&P & 11,965 & 43 & 329 & 134 & 10,806 & 3,569 & 3,467 & 2.87\% \\ \hline C & 4 & 6 & 13 & 3 & Column Generation & 1 & 38 & 0.04 & 0.11 & 0.55 & 3,880 & 3,171 & 18.28\% \\ & & & & & Tri-partite B\&P & 10,101 & 38 & 369 & 98 & 10,812 & 3,880 & 3,692 & 4.84\% \\ \hline D & 4 & 6 & 5 & 3 & Column Generation & 1 & 38 & 0.04 & 0.04 & 0.48 & 1,056 & 733 & 30.65\% \\ & & & & & Tri-partite B\&P & 22,600 & 38 & 539 & 45 & 10,824 & 1,052 & 1,000 & 4.95\% \\ \hline E & 6 & 6 & 16 & 3 & Column Generation & 1 & 55 & 0.08 & 0.51 & 1.02 & 8,331 & 6,927 & 16.86\% \\ & & & & & Tri-partite B\&P & 7,824 & 55 & 214 & 199 & 10,832 & 8,316 & 8,053 & 3.16\% \\ \hline F & 5 & 6 & 13 & 3 & Column Generation & 1 & 47 & 0.07 & 0.25 & 0.83 & 3,593 & 2,466 & 45.25\% \\ & & & & & Tri-partite B\&P & 16,882 & 47 & 698 & 181 & 10,822 & 3,530 & 3,328 & 5.74\% \\ \hline G & 4 & 6 & 4 & 3 & Column Generation & 1 & 42 & 0.02 & 0.02 & 0.50 & 1,985 & 1,679 & 15.39\% \\ & & & & & Tri-partite B\&P & 39,651 & 42 & 609 & 72 & 10,802 & 1,955 & 1,921 & 1.76\% \\ \hline H & 6 & 6 & 4 & 3 & Column Generation & 1 & 43 & 0.02 & 0.03 & 0.57 & 678 & 652 & 3.75\% \\ & & & & & Tri-partite B\&P & 58,496 & 43 & 1,382 & 137 & 10,801 & 675 & 674 & 0.23\% \\ \hline I & 4 & 6 & 20 & 3 & Column Generation & 1 & 39 & 0.07 & 0.23 & 0.68 & 3,573 & 2,587 & 34.89\% \\ & & & & & Tri-partite B\&P & 55,864 & 39 & 4,357 & 507 & 10,871 & 3,423 & 3,399 & 0.72\% \\ \hline J & 4 & 6 & 5 & 3 & Column Generation & 1 & 36 & 0.03 & 0.03 & 0.47 & 626 & 565 & 9.73\% \\ & & & & & Tri-partite B\&P & 25,016 & 36 & 327 & 41 & 10,803 & 624 & 565 & 9.46\% \\ \hline Full & 51 & 6 — & 3 & Column Generation & — & — & — & — & — & 35,581 & 28,924 & 18.7\% \\ & & & & & Tri-partite B\&P & — & — & — & — & 35,072 & 33,815 & 3.58\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Detailed results for the vaccination centers problem (full United States, 3 facilities per group)
### Practical impact of the methodology
Tables 4 and 5 evaluate the practical benefits of our branch-and-price methodology, as compared to a do-nothing baseline, practical benchmarks (i.e., easily-implementable heuristics) and optimization benchmarks (i.e., existing methods from the literature). To provide a fair assessment, we evaluate all solutions with the full continuous-state contagion models, as opposed to the state-clustering approximation used in our algorithm--thus ensuring an apples-to-apples comparison.
We define two practical benchmarks for the vaccine allocation and the congestion mitigation problems, as well as slight variants for the content promotion problem due to the sparsity constraint.
* **uniform allocation**: each segment receives the same amount of resources of at each epoch (in content promotion, we randomize the \(K\) products at each epoch, repeated 100 times).
* **cost-based allocation**: a domain-based benchmark in which each segment receives a constant share of resources proportionally to the total cost under the do-nothing baseline (in content promotion, we select the \(K\) products for which promotions have the strongest impact).
For the vaccination centers problem, we consider a demographic-based **"top-K"** benchmark that selects the \(K\) facilities that can serve the most people within access distance restrictions. We test it both with uniform vaccine allocations across the resulting \(K\) facilities, and with optimized vaccine allocations (obtained by fixing the facility variables and optimizing subsequent resource allocation).
We also define three optimization benchmarks for the vaccine allocation and vaccination centers problems, from the recent prescriptive contagion analytics literature in epidemiology applications:
* **MIQO implementation**: mixed-integer bilinear optimization implementation of Problem (\(\mathcal{P}\)) in Gurobi 9.5 (Gurobi 2019), based on a time-discretization approximation of the ODEs.
* **discretization** to approximate Problem (\(\mathcal{P}\)) via mixed-integer linear optimization, using time discretization to eliminate continuous-time dynamics and a staircase approximation of the infected population to handle bilinearities. This benchmark mirrors the approach from Fu et al. (2021) in a robust optimization setting. To optimize its performance, we divide the \([0,2\%]\) interval into sub-intervals of length \(\delta\) (since the infected population never exceeded \(2\%\)). We consider a coarse and a granular discretization, with \(\delta=0.001\) and \(\delta=0.002\).
* **coordinate descent heuristic**, which circumvents bilinearities by iterating between optimizing vaccine allocations over discretized time increments with fixed numbers of infections (via mixed-integer linear optimization) and re-estimating infections (Bertsimas et al. 2022).
Results show that our solution provides significant benefits against all benchmarks. Table 4 reports results for the vaccine allocation problem, with weekly budgets of 2.5 million and 7 million vaccines--corresponding to the supplies originally planned and actually available in the United States in 2021, respectively. Without vaccinations, the pandemic would lead to an estimated 650,000
fatalities over three months (accounting for undetected deaths). Under a uniform allocation strategy, the vaccination campaign can save around 21,000 of these fatalities (or 3.3%) with the smaller vaccine budget, and 43,000 fatalities (or 6.6%) with the larger budget. The cost-based benchmark saves up to 30% of additional lives, by capturing epidemiological information. But then, our solution saves 20-70% additional lives over six weeks and 12-40% additional lives over three months, as compared to the cost-based benchmark. In other words, optimized stockpile management can increase the effectiveness of the vaccination campaign by a factor of 1.12 to 1.7 without increasing vaccine capacity or vaccine efficacy. In absolute terms, these improvements represent 7,000 to 12,000 extra lives saved over three months, demonstrating the importance of vaccine distribution and the edge of optimization to guide resource allocation in complex contagion systems.
We obtain similar findings for the other three problems (Table 5). In the content promotion and congestion mitigation problems, the relative gains of the optimized solution are even more signifi
\begin{table}
\begin{tabular}{c l c c c c c c} \hline \hline & & \multicolumn{2}{c}{\(S=6\)} & \multicolumn{2}{c}{\(S=8\)} & \multicolumn{2}{c}{\(S=12\)} \\ \cline{3-8} Budget & Method & Time (sec.) & Deaths & Time (sec.) & Deaths & Time (sec.) & Deaths \\ \hline — & Do nothing & — & 573.37K & — & 604.22K & — & 649.75K \\ \hline
2.5M & Uniform allocation & — & -5.91K & — & -11.02K & — & -21.42K \\ & Cost-based allocation & — & -6.59K & — & -12.92K & — & -27.64K \\ \hline & MIQO implementation & n/a & n/a & n/a & n/a & n/a & n/a \\ & Discretization (\(\delta=0.002\)) & 1,000\({}^{*}\) & -5.03K & n/a & n/a & n/a & n/a \\ & Discretization (\(\delta=0.001\)) & 1,000\({}^{*}\) & -5.56K & n/a & n/a & n/a & n/a \\ & Coordinate Descent & 5.43 & **-11.36K** & 4.73 & -20.02K & 6.03 & -39.01K \\ \hline & Branch-and-price (\(\varepsilon=0.002\)) & 426.2 & **-11.28K** & 1234.2 & **-20.43K** & 3671.4 & **-39.41K** \\ \hline
7M & Uniform allocation & — & -13.22K & — & -23.67K & — & -43.60K \\ & Cost-based allocation & — & -17.44K & — & -31.77K & — & -58.71K \\ \hline & MIQO implementation & n/a & n/a & n/a & n/a & n/a & n/a \\ & Discretization (\(\delta=0.002\)) & 1,000\({}^{*}\) & -5.88K & n/a & n/a & n/a & n/a \\ & Discretization (\(\delta=0.001\)) & 1,000\({}^{*}\) & -2.88K & n/a & n/a & n/a & n/a \\ & Coordinate Descent & 5.46 & -20.61K & 12.7 & -36.22K & 118.5 & -62.55K \\ \hline & Branch-and-price (\(\varepsilon=0.002\)) & 67.5 & **-21.46K** & 156.8 & **-37.38K** & 708.7 & **-65.65K** \\ \hline \hline \end{tabular} \(*\) and “n/a”: no optimal and feasible solution, respectively. Bold font: solutions within 1% of the best-found solution.
\end{table}
Table 4: **Death toll comparison for the vaccine allocation problem (full country, \(D=21\)).**
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{Content Promotion} & \multicolumn{2}{c}{Congestion Mitigation} \\ \cline{3-8} Method & Time (sec.) & Deaths & Method & Time (sec.) & Market share & Method & Time (sec.) & Cost \\ \hline Do nothing & — & 573K & Do nothing & — & 2.1 p.p. & Do nothing & — & 18.98 \\ \hline Top-K, Uniform & — & -6.98K & Uniform (random) & — & +5.1 p.p. & Uniform & — & -1.80\% \\ Top-K, Optimized & — & -7K & Cost-based & — & +7.3 p.p. & Cost-based & — & -1.79\% \\ \hline MIQO & n/a & n/a & — & — & — & — & — \\ Discretization & 10,275\({}^{*}\) & -4.3K & — & — & — & — & — \\ Coordinate descent & 254 & -7.7K & — & — & — & — & — \\ \hline Branch-and-price & 10,978 & **-8.6K** & Branch-and-price & 11,864 & **+10.0 p.p.** & Branch-and-price & 416 & **-3.32\%** \\ \hline \hline \end{tabular}
* \(*\) and “n/a”: no optimal and feasible solution, respectively. Bold font: solutions within 1% of the best-found solution. Vaccination centers: \(S=6\), \(K=3\), results for the full country across 10 groups.
Content promotion: \(n=20\)\(K=2\), \(S=10\), \(D=21\) (“hard” instance); market share averaged across 20 products.
Congestion mitigation: \(n=5\); \(B^{1}=6\); \(B^{2}=4\); \(S=6\) (“hard” instance).
\end{table}
Table 5: **Performance comparison: vaccination centers, congestion mitigation and content promotion problems.**
cant, estimated at 84-96% against uniform allocation and at 37-85% against cost-based allocation. Simple benchmarks perform comparatively worse due to the sparsity constraint in the content promotion problem and the interdependencies between prevention and treatment vehicles in the congestion mitigation problem--leading to stronger benefits of optimization. In the vaccination centers problem, the optimized solution also provides significant improvements against the simple demographic-based top-K benchmark. Specifically, the benefit of optimization amounts to 23-31% under uniform vaccine allocations and to 3-23% under optimized vaccine allocations. Exactly like epidemiological proxies are insufficient to guide tactical resource allocation, simple demographic proxies are insufficient to guide strategic planning in dynamical and non-linear contagion systems.
Finally, our branch-and-price methodology is instrumental to reap the benefits of optimization, with significant impact against simpler optimization benchmarks. First, Tables 4 and 5 show the strong limitations of simple workarounds to circumvent the non-convex contagion dynamics in the optimization problem. Direct implementation using mixed-integer quadratic optimization solvers fails to even provide a feasible solution in small six-week instances. Similarly, mixed-integer linear optimization implementation (using a discretization-based staircase approximation of the number of infections) performs worse than even uniform allocation in small instances and does not return a feasible solution in larger instances. In comparison, our algorithm achieves a Pareto improvement: faster computational times, stronger scalability, and much higher-quality solutions. Our methodology can also outperform the tailored coordinate descent heuristic, albeit in longer computational times. For the vaccine allocation problem, our solution yields a 1-5% improvement, resulting in up to 3,000 extra lives saved over a three-month horizon. In fact, the benefits of our methodology increases as we increase the number of available vaccines, because the interaction between the vaccine allocation decisions and the infection dynamics become stronger, favoring our global optimization approach. In the vaccination centers problem, the benefits can also be significant when coordinate descent leads to local optima that induce a different set of facilities. In the \(F=3\) instance, our methodology results in an extra 900 lives saved over six weeks--a 12% improvement.
We conclude by illustrating vaccine allocations in Figure 4 and vehicle allocations in Figure 5. In vaccine allocation, the states where the pandemic is more costly (e.g., those with a larger population, more infections) generally receive more vaccines, but the relationship is not monotonic. Moreover, vaccine allocations exhibit different temporal patterns; for instance, New York and Illinois receive most vaccines early on; California and Texas receive most of their vaccines later on; and Florida stands in between. In the congestion mitigation problem, treatment vehicles are mainly sent to dense urban centers and near the airport, to clear large accidents and ease recovery; in contrast, prevention vehicles are primarily sent to suburban regions to prevent the spread of congestion. In both cases, there is no one-size-fits-all allocation strategy. Instead, the benefits of
optimization stem from fine-tuning resource allocations to mitigate near-term impact via treatment interventions and to manage network propagation via prevention interventions.
In summary, our branch-and-price methodology generates consistently high-quality solutions in large-scale practical instances of our prescriptive contagion analytics problems, with significant
Figure 4: Spatial-temporal vaccine allocation over a 12 weeks. States are ordered from top to bottom in decreasing order of the total cost under do-nothing—a proxy for the prevalence of the pandemic.
Figure 5: Allocation of treatment vehicles (top) and prevention vehicles (bottom) in Singapore over three hours.
benefits as compared to all practical and optimization benchmarks. In Appendix C.3, we report additional results showing the robustness of these findings to parameter estimation errors.
## 6 Conclusion
Predictive contagion systems have shown considerable success in epidemiology and other domains of science, engineering and management. This paper developed a prescriptive algorithm to optimize spatial-temporal resource allocation decisions in systems governed by contagion dynamics--and in more general dynamical systems. By combining the difficulties of combinatorial optimization and those of dynamical systems, however, this class of problems exhibits a large-scale and complex mixed-integer non-convex optimization structure with continuous-time ODE constraints. In response, this paper has developed a branch-and-price algorithm, using (i) a set partitioning reformulation to eliminate non-linearities; (ii) column generation to separate combinatorial optimization in a master problem from non-linear dynamics in a pricing problem; (iii) a novel state-clustering algorithm for discrete-decision continuous-state dynamic programming to solve the pricing problem; and (iv) a novel tri-partite branching scheme on natural variables to circumvent the non-linearities.
We implemented the methodology on four prescriptive analytics contagion problems: vaccine allocation, mass vaccination centers deployment, content promotion, and congestion mitigation. The branch-and-price algorithm scales to realistic and otherwise-intractable instances, with up to 21 decisions, 51 regions and 12 decision epochs in vaccine allocation, hence \(\mathcal{O}(21^{612})\) possible decisions overall. From a practical standpoint, the methodology significantly outperforms easily-implementable benchmarks based on demographic or epidemiological proxies, as well as state-of-the-art optimization algorithms. In the vaccine allocation example, the methodology can improve the effectiveness of a vaccination campaign by 12-70%, resulting in 7,000 to 12,000 extra lives saves in a situation mirroring the midst of the COVID-19 pandemic in the United States. These results are robust across problem instances and robust to parameter estimation errors. Ultimately, our prescriptive contagion analytics methodology can deliver significant benefits across contagion-based domains, by fine-tuning resource allocations based on spatial-temporal system dynamics.
The promising results reported in this paper also motivate extensions of our methodology to tackle prescriptive contagion analytics problems with different decision-making structures (e.g., non-discrete resource allocation decisions, dynamic resource allocations) and more complex system dynamics (e.g., inter-population mixing). Perhaps the main limitation of this paper is the focus on deterministic resource allocation problems; although our robustness tests in Appendix C.3 showed the benefits of our solutions under model misspecification, our optimization methodology could be combined in future work with the robust epidemiological approach from Fu et al. (2021). Yet, this paper provides methodological foundations to support complex resource allocation decisions in a broad class of contagion-based systems, and of dynamical systems more generally. |
2302.03234 | Leibniz Cohomology with Adjoint Coefficients | With the Poincar\'e group $\mathbf{R}^{3,\,1}\rtimes O(3,\,1)$ as the model
of departure, we focus, for $n=p+q$, $p+q\ge 4$, on the affine indefinite
orthogonal group $\mathbf{R}^{p,q}\rtimes SO(p,\,q)$. Denote by
$\mathfrak{h}_{p,\,q}$ the Lie algebra of the affine indefinite orthogonal
group. We compute the Leibniz cohomology of $\mathfrak{h}_{p,\,q}$ with adjoint
coefficients, written $HL^*(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})$. We
calculate several indefinite orthogonal invariants, and $\mathfrak{h}_{p,\,q}
$-invariants and provide the Leibniz cohomology in terms of these invariants. | Shitu Fawaz Jimoh | 2023-02-07T03:30:27Z | http://arxiv.org/abs/2302.03234v1 | # Leibniz cohomology with adjoint coefficients
###### Abstract.
With the Poincare group \({\bf R}^{3,\,1}\rtimes O(3,\,1)\) as the model of departure, we focus, for \(n=p+q\), \(p+q\geq 4\), on the affine indefinite orthogonal group \({\bf R}^{p,q}\rtimes SO(p,\,q)\). Denote by \(\mathfrak{h}_{p,\,q}\) the Lie algebra of the affine indefinite orthogonal group. We compute the Leibniz cohomology of \(\mathfrak{h}_{p,\,q}\) with adjoint coefficients, written \(HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\). We calculate several indefinite orthogonal invariants, and \(\mathfrak{h}_{p,\,q}\)-invariants and provide the Leibniz cohomology
## 1. Introduction
In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics. The indefinite orthogonal group \(O(p,\,q)\) is among the most important groups that are broadly used in physics. Of particular interest in physics is the Lorentz group \(O(3,1)\)- the group of all Lorentz transformations of Minkowski spacetime, the classical and quantum setting for all (non-gravitational) physical phenomena [11] and the Poincare group-the affine group of the Lorentz group \(O(3,1)\), namely \({\bf R}^{3,1}\rtimes O(3,1)\). The Lorentz group is the setting for electromagnetism and special relativity.
Classification of differentiable structures can be done with connections. This method involves calculation which sometimes can be unpleasant to work with. Connections on manifolds occur naturally as a cochain in the complex for Leibniz cohomology of vector fields with coefficients in the adjoint representation [14]. In [6] a definition of Leibniz cohomology, \(HL^{*}\), for differentiable manifolds was proposed. Consequently, we can reduce the problem of classifying differentiable structures using connections to computing Leibniz cohomology. This is one of many motivations for studying Leibniz algebras and Leibniz (co)homology. Leibniz (co)homolgy, \(HL^{*}(\mathfrak{h},\,\mathfrak{h})\), is difficult to compute for an arbitrary Leibniz algebra \(\mathfrak{h}\).
Leibniz cohomology with coefficients in the adjoint representation for semi-simple Leibniz algebras has been studied in [9], while \(HL^{*}(\mathfrak{h};\,\mathfrak{h})\) has been used to study deformation theory in the category of Leibniz algebras [3], [4].
We offer a calculation of \(HL(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\), where \(\mathfrak{h}_{p,\,q}\) is the affine indefinite orthogonal Lie algebra, \(p+q\geq 4\). We compute this cohomology by identifying some indefinite orthogonal invariants in terms of balanced tensors. We then provide the Leibniz cohomology in terms of these invariants. The main tools used in this computation are the Hochschild-Serre spectral sequence and the Pirashvili spectral sequence. In low dimension \(HL^{0}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=0\), \(HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=\langle I\rangle\) and \(HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=\langle\rho\rangle\). Higher dimensions of \(HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) contain echoes of these classes against a tensor algebra. We prove in section 7 that there is an isomorphism of graded vector spaces
\[HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq\langle I,\rho \rangle\otimes T(\gamma^{*}_{pq}),\]
where \(\langle I,\rho\rangle\) is the real vector space with basis \(\{I,\,\rho\}\) and \(T(\gamma^{*}_{pq})\) is the tensor algebra on the class of \(\gamma^{*}_{pq}\). We prove in section 5 that \(\gamma^{*}_{pq}\) and \(\rho\) are \(\mathfrak{so}(p,\,q)\)-invariant. Also,
\(I\) is \(\mathfrak{h}_{p,\,q}\)-invariant.
(1.1) \[\begin{split}& I(\alpha_{ij})=0,\ \ 1\leq i<j\leq p,\ \ p+1\leq i<j\leq n,\\ & I(\beta_{ij})=0,\ \ 1\leq i\leq p,\ \ p+1\leq j\leq n,\\ & I(\frac{\partial}{\partial x^{i}})=\frac{\partial}{\partial x^{ i}},\ \ i=1,2,\cdots,n.\\ &\rho(\alpha_{ij}\,\wedge g)=0,\ \ 1\leq i<j\leq p,\ \ p+1\leq i<j\leq n,\ \ \mathrm{for\,all}\,g\in\mathfrak{h}_{p,\,q},\\ &\rho(\beta_{ij}\,\wedge g)=0,\ \ 1\leq i\leq p,\ \ p+1\leq j\leq n,\ \ \mathrm{for\,all}\,g\in\mathfrak{h}_{p,\,q},\\ &\rho(\frac{\partial}{\partial x^{i}}\wedge\frac{\partial}{ \partial x^{j}})=\alpha_{ij},\ \ 1\leq i\leq j\leq p,\\ &\rho(\frac{\partial}{\partial x^{i}}\wedge\frac{\partial}{ \partial x^{j}})=-\alpha_{ij},\ \ p+1\leq i\leq j\leq n,\\ &\rho(\frac{\partial}{\partial x^{i}}\wedge\frac{\partial}{ \partial x^{j}})=\beta_{ij},\ \ 1\leq i\leq p,\ \ p+1\leq j\leq n.\\ &\gamma^{*}_{pq}=\sum_{1\leq i<j\leq p}(-1)^{i+j}dx^{1}\wedge \cdots\wedge\widehat{dx^{i}}\wedge\cdots\wedge\widehat{dx^{j}}\wedge\cdots \wedge dx^{n}\otimes\alpha^{*}_{ij}\\ &\
## 2. The Indefinite Orthogonal Lie Algebra
Let
\[I_{p,\,q}=\begin{pmatrix}I_{p}&0\\ 0&-I_{q}\end{pmatrix} \tag{2.1}\]
with \(I_{k}\) denoting the \(k\times k\) identity matrix. Then we define the indefinite orthogonal Lie algebra,
\[\mathfrak{so}(p,\,q)=\{X\in M_{n}(\mathbf{R})\,|\,X^{T}I_{p,\,q}=-I_{p,\,q}X\}\]
with \(\,n=p+q\,\) and \(\,p,\,q\in\,\)\(\mathbf{N}\).
Consider the standard coordinates on \(\mathbf{R^{n}}\) given by \((x_{1},x_{2},...,x_{n})\) with unit vector fields \(\frac{\partial}{\partial x_{i}}\) parallel to the \(x_{i}\) axes.
Let
\[\begin{split}\alpha_{ij}&:=x_{i}\frac{\partial}{ \partial x^{j}}-x_{j}\frac{\partial}{\partial x^{i}},\quad 1\leq i<j\leq p,\quad\,p+1\leq i<j \leq n,\\ \beta_{ij}&:=x_{i}\frac{\partial}{\partial x^{j}}+x_{j} \frac{\partial}{\partial x^{i}},\quad 1\leq i\leq p,\quad p+1\leq j\leq n.\end{split} \tag{2.2}\]
Then \(\{\alpha_{ij}\}\), \(\{\beta_{ij}\}\), is a vector space basis for a lie algebra isomorphic to \(\mathfrak{so}(p,\,q)\). Let \(\mathfrak{I}_{n}\) be the \(\mathbf{R}\) vector space spanned by
\[\{\frac{\partial}{\partial x^{i}}\},\quad i=1,2,\cdots,n.\]
Then \(\mathfrak{I}_{n}\) is an Abelian Lie algebra. Let \(\mathfrak{h}_{p,\,q}\) be the Lie algebra with basis given by the union of \(\{\alpha_{ij}\},\{\beta_{ij}\},\{\frac{\partial}{\partial x^{i}}\}\). Then \(\mathfrak{I}_{n}\) is an ideal of \(\mathfrak{h}_{p,\,q}\) and there is a short exact sequence of Lie algebras [12, Page 217]
\[0\longrightarrow\mathfrak{I}_{n}\longrightarrow\mathfrak{h}_{p,\,q} \longrightarrow\mathfrak{so}(p,\,q)\longrightarrow 0\]
with \(\mathfrak{h}_{p,\,q}/\mathfrak{I}_{n}\simeq\mathfrak{so}(p,\,q)\) and \(\mathfrak{h}_{p,\,q}\) is an affine extension of \(\mathfrak{so}(p,\,q)\).
Let \(\alpha_{ij}^{*},\ \beta_{ij}^{*}\) be the dual of \(\alpha_{ij},\ \beta_{ij}\) respectively with respect to the basis \(\{\alpha_{ij}\}\ \cup\ \{\beta_{ij}\}\ \cup\ \{\frac{\partial}{\partial x_{i}}\}\) of \(h_{p,\,q}\), and let \(dx^{i}\) be the dual of \(\frac{\partial}{\partial x^{i}}\).
## 3. Lie algebra cohomology with coefficients
Let \(\mathfrak{g}\) be a Lie algebra over a ring \(k\) and \(V\) be any \(\mathfrak{g}\)-module. The Lie algebra cohomology of \(\mathfrak{g}\) with coefficients in the module \(V\), written \(H^{*}_{\text{Lie}}(\mathfrak{g};\,V)\), is the cohomology of the cochain complex (Chevalley-Eilenberg complex),
\[\text{Hom}_{k}(\mathbf{R},\,V)\xrightarrow{\delta}\text{Hom}_{k}(\mathfrak{g },\,V)\xrightarrow{\delta}\text{Hom}_{k}(\mathfrak{g}^{\wedge 2},\,V) \xrightarrow{\delta}\text{Hom}_{k}(\mathfrak{g}^{\wedge 3},\,V)\xrightarrow{ \delta}\cdots,\]
where \(\mathfrak{g}^{\wedge n}\) is the nth exterior power of \(\mathfrak{g}\) over \(k\) and where the coboundary \(\delta f\) of such an \(n\)-cochain is the \((n+1)\)-cochain
\[\begin{split}&\delta f(g_{1}\wedge...\wedge g_{n+1})=\sum_{i=1}^{n+ 1}(-1)^{i}g_{i}\cdot f(g_{1}\wedge...\hat{g_{i}}...\wedge g_{n+1})\\ &+\sum_{1\leq i<j\leq n+1}(-1)^{j}f(g_{1}\wedge...\wedge g_{i-1} \wedge[g_{i},g_{j}]\wedge...\hat{g_{j}}...\wedge g_{n+1}).\end{split} \tag{3.1}\]
In particular when \(V=\mathfrak{g}\), we obtain the Lie algebra cohomology with coefficients in the adjoint representation, written \(H^{*}_{\text{Lie}}(\mathfrak{g};\,\mathfrak{g})\) and when \(V=g^{\prime}:=\text{Hom}_{k}(\mathfrak{g},\,k)\), we obtain the Lie algebra cohomology with coefficients in the co-adjoint representation, written \(H^{*}_{\text{Lie}}(\mathfrak{g};\,\mathfrak{g}^{\prime})\). We consider \(k\) as a trivial \(\mathfrak{g}\)-module, so \(H^{*}_{\text{Lie}}(\mathfrak{g};\,k)\) is the Lie algebra cohomology with trivial coefficients.
The (left) invariant submodule \(V^{\mathfrak{g}}\) of a \(\mathfrak{g}\)-module \(V\) is defined as:
\[V^{\mathfrak{g}}=\{v\in V:\ xv=0\text{ for all }x\in\mathfrak{g}\}\]
The action of \(\mathfrak{g}\) on \(k\) is trivial as mentioned above. This means \(g\cdot c=0\) for all \(c\in k\), for all \(g\in\mathfrak{g}\). Let \(\mathfrak{g}\) act on itself via the adjoint representation. For \(g\in\mathfrak{g}\), the linear map \(\text{ad}_{g}\): \(\mathfrak{g}\rightarrow\mathfrak{g}\) is given by
\[g\cdot x:=\text{ad}_{g}(x)=[g,x],\text{ for all }x\in\mathfrak{g}.\]
For \(g\in\mathfrak{g}\) and \(f\in\text{Hom}_{k}(\mathfrak{g}^{\wedge n},\,V)\), we define the action of \(\mathfrak{g}\) on \(\text{Hom}_{k}(\mathfrak{g}^{\wedge n},\,V)\) by
\[\begin{split}&(gf)(x_{1}\wedge x_{2}\wedge...\wedge x_{n})=g \cdot f(x_{1}\wedge x_{2}\wedge...\wedge x_{n})\\ &+\sum_{i=1}^{n}f(x_{1}\wedge...\wedge x_{i-1}\wedge[x_{i},g] \wedge x_{i+1}\wedge...\wedge g_{n})\end{split} \tag{3.2}\]
The action of \(\mathfrak{g}\) on itself extends to \(\mathfrak{g}^{\wedge n}\) by
\[[g,\,x_{1}\wedge x_{2}\wedge...\wedge x_{n}]=\sum_{i=1}^{n}x_{1}\wedge_{2} \wedge...\wedge[g,\,x_{i}]\wedge...\wedge x_{n},\text{ for }g,\,x_{i}\in\mathfrak{g}\text{ for all i.} \tag{3.3}\]
## 4. Leibniz Cohomology with Coefficients
Let \(\mathfrak{g}\) be a Leibniz algebra over a ring \(k\) and \(V\) be a representation \(\mathfrak{g}\). Let
\[CL^{n}(\mathfrak{g},\,V):=\text{Hom}_{k}(\mathfrak{g}^{\otimes n},\,V),\,\,\, \text{ where }\,\,n\geq 0.\]
The Leibniz cohomology of \(\mathfrak{g}\) with coefficients in the representation \(V\), written \(HL^{*}(\mathfrak{g};\,V)\), is the cohomology of the cochain complex \(CL^{*}(\mathfrak{g},\,V)\),
\[\text{Hom}_{k}(\textbf{R},\,V)\xrightarrow{\delta}\text{Hom}_{k}(\mathfrak{g },\,V)\xrightarrow{\delta}\text{Hom}_{k}(\mathfrak{g}^{\otimes 2},\,V) \xrightarrow{\delta}\text{Hom}_{k}(\mathfrak{g}^{\otimes 3},\,V) \xrightarrow{\delta}\cdots,\]
where \(\mathfrak{g}^{\otimes n}\) is the nth tensor power of \(\mathfrak{g}\) over \(k\) and where the coboundary \(\delta f\) of such an \(n\)-cochain is the \((n+1)\)-cochain
\[\begin{split}&\delta f(g_{1}\otimes\cdots\otimes g_{n+1}):=[g_{1}, \,f(g_{2}\otimes\cdots\otimes g_{n+1})]\\ &+\sum_{i=2}^{n+1}(-1)^{i}[f(g_{1}\otimes\cdots\hat{g_{i}}\cdots \otimes g_{n+1}),\,g_{i}]\\ &+\sum_{1\leq i<j\leq n+1}(-1)^{j+1}f(g_{1}\otimes\cdots\otimes g _{i-1}\otimes[g_{i},g_{j}]\otimes\cdots\hat{g_{j}}\cdots\otimes g_{n+1}).\end{split}\]
It is easy to compute \(HL^{*}(\mathfrak{g},\,V)\) in low dimension. For higher dimensions of \(HL^{*}(\mathfrak{g},\,V)\), we use the Pirashvili spectral sequence [8] and a long exact sequence induced by the canonical projection map \(\pi_{\rm rel}:\mathfrak{g}^{\otimes(n+2)}\rightarrow\mathfrak{g}^{\wedge(n+2)}\), \(\pi_{\rm rel}(g_{1}\otimes g_{2}\otimes...\otimes g_{n+2})=g_{1}\wedge g_{2} \wedge...\wedge g_{n+2}\).
## 5. Invariants for Indefinite Orthogonal Lie Algebra
We present the invariants of \(\wedge^{*}\mathfrak{I}_{n}\), \({\rm Hom}(\wedge^{*}\mathfrak{I}_{n},\mathfrak{h}_{p,\,q})\) and \(\wedge^{*}\mathfrak{I}_{n}\otimes\mathfrak{h}_{p,\,q}\) under the action of \(\mathfrak{so}(p,\,q)\) where \(n=p+q\).
**Lemma 5.1**.: _There is a vector space isomorphism_
\[[\wedge^{*}\mathfrak{I}_{n}]^{\mathfrak{so}(p,\,q)}\simeq\mathbb{R}\oplus \langle v\rangle,where\ \ v=\frac{\partial}{\partial x^{1}}\wedge\frac{\partial}{\partial x^{2}}\wedge \cdots\wedge\frac{\partial}{\partial x^{n}}. \tag{5.1}\]
Proof.: Lemma 5.1 is proved in [5] and can easily be found by direct calculations.
\[[\mathfrak{I}_{n}^{\wedge 0}]^{so(p,\,q)}=\mathbf{R},\ \ \ [\mathfrak{I}_{n}^{ \wedge n}]^{so(p,q)}=\langle v\rangle\ {\rm and}\]
\[[\mathfrak{I}_{n}^{\wedge k}]^{so(p,q)}=\{0\}\ \ {\rm whenever}\ \ k\notin\{0,n\}.\]
The following lemma is proved in [13] and can also be found by direct calculations.
**Lemma 5.2**.: _There is a vector space isomorphism_
\[[\mathfrak{I}_{n}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)}\simeq \langle I_{pq}\rangle\ \text{where}\]
\[I_{pq}=\sum_{i=1}^{p}\tfrac{\partial}{\partial x^{i}}\otimes\tfrac{\partial}{ \partial x^{i}}-\sum_{i=p+1}^{n}\tfrac{\partial}{\partial x^{i}}\otimes\tfrac{ \partial}{\partial x^{i}},\]
\[[\mathfrak{I}_{n}^{\wedge 2}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)} \simeq\langle\rho_{pq}\rangle\ \ where\]
\[\rho_{pq}=\sum_{1\leq i<j\leq p}\tfrac{\partial}{\partial x^{i}}\wedge\tfrac{ \partial}{\partial x^{j}}\otimes\alpha_{ij}-\sum_{p+1\leq i<j\leq n}\tfrac{ \partial}{\partial x^{i}}\wedge\tfrac{\partial}{\partial x^{j}}\otimes\alpha_ {ij}\]
\[-\sum_{\begin{subarray}{c}1\leq i\leq p\\ p+1\leq j\leq n\end{subarray}}\tfrac{\partial}{\partial x^{i}}\wedge\tfrac{ \partial}{\partial x^{j}}\otimes\beta_{ij},\]
\[[\mathfrak{I}_{n}^{\wedge(n-1)}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}( p,\,q)}\simeq\langle\beta_{pq}\rangle\ \ where\]
\[\beta_{pq}= \sum_{i=1}^{p}(-1)^{i+1}\tfrac{\partial}{\partial x^{1}}\wedge \cdots\wedge\tfrac{\partial}{\partial x^{i}}\wedge\cdots\wedge\tfrac{\partial }{\partial x^{n}}\otimes\tfrac{\partial}{\partial x^{i}}\] \[- \sum_{i=p+1}^{n}(-1)^{i}\tfrac{\partial}{\partial x^{1}}\wedge \cdots\wedge\tfrac{\partial}{\partial x^{i}}\wedge\cdots\wedge\tfrac{\partial }{\partial x^{n}}\otimes\tfrac{\partial}{\partial x^{i}},\]
\[[\mathfrak{I}_{n}^{\wedge(n-2)}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p, \,q)}\simeq\langle\gamma_{pq}\rangle\ \ where\]
\[\gamma_{pq}= \sum_{1\leq i<j\leq p}(-1)^{i+j}\tfrac{\partial}{\partial x^{1}} \wedge...\wedge\tfrac{\partial}{\partial x^{i}}\wedge...\wedge\tfrac{\partial }{\partial x^{j}}\wedge...\wedge\tfrac{\partial}{\partial x^{n}}\otimes\alpha _{ij}\] \[- \sum_{p+1\leq i<j\leq n}(-1)^{i+j+1}\tfrac{\partial}{\partial x^{1 }}\wedge...\wedge\tfrac{\partial}{\partial x^{i}}\wedge...\wedge\tfrac{ \partial}{\partial x^{j}}\wedge...\wedge\tfrac{\partial}{\partial x^{n}} \otimes\alpha_{ij}\] \[- \sum_{\begin{subarray}{c}1\leq i\leq p\\ p+1\leq j\leq n\end{subarray}}(-1)^{i+j+1}\tfrac{\partial}{\partial x^{1}} \wedge...\wedge\tfrac{\partial}{\partial x^{i}}\wedge...\wedge\tfrac{ \partial}{\partial x^{j}}\wedge...\wedge\tfrac{\partial}{\partial x^{n}} \otimes\beta_{ij}.\]
\[[\mathfrak{I}_{n}^{\wedge k}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q )}\simeq\{0\}\ \text{for}\ k\notin\{1,2,n-1,n-2\}\text{,}\]
**Lemma 5.3**.: _There is a vector space isomorphism_
\[\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge k},\,\mathfrak{h}_{p,\,q})\simeq \mathfrak{I}_{n}^{\wedge k}\otimes\mathfrak{h}_{p,\,q}\,\ for\ \ k=0,1,2,\cdots. \tag{5.2}\]
Proof.: We define \(\psi:\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge k},\,\mathfrak{h}_{p,\,q}) \longrightarrow\mathfrak{I}_{n}^{\wedge k}\otimes\mathfrak{h}_{p,\,q}\),
\[\psi(\phi):=\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots< i_{k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}z\otimes \phi(z), \tag{5.3}\]
for all \(z=\frac{\partial}{\partial x^{i_{1}}}\wedge\frac{\partial}{\partial x^{i_{2}} }\wedge\ldots\wedge\frac{\partial}{\partial x^{i_{k}}}\in\mathfrak{I}_{n}^{ \wedge k}\) and \(\phi\in\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge k},\mathfrak{h}_{p,\,q})\).
\(\psi\) is an isomorphism and \(\mathfrak{so}(p,\,q)\)-equivariant. The proof of isomorphism is straightforward. To show the equivariant property, we start by looking at the action of \(\mathfrak{so}(p,\,q)\) on \(\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge k},\,\mathfrak{h}_{p,\,q})\) and \(\mathfrak{I}_{n}^{\wedge k}\otimes\mathfrak{h}_{p,\,q}\). For \(g\in\mathfrak{so}(p,\,q)\) and \(\phi\in\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge k},\,\mathfrak{h}_{p,\,q})\), the action of \(g\) on \(\phi\) is given by;
\[(g\cdot\phi)(z)=[g,\,\phi(z)]+\phi([z,\,g])=[g,\,\phi(z)]-\phi([g,\,z]),\ \text{for all}\ z\in\mathfrak{I}_{n}^{\wedge k}. \tag{5.4}\]
By defintion, we have
\[\psi(g\cdot\phi) =\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots <i_{k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}z\otimes (g\cdot\phi)(z),\] \[=\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots <i_{k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}z\otimes [g,\,\phi(z)]\] \[-\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots <i_{k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}z\otimes \phi([g,\,z])\]
\[g\cdot\psi(\phi) =\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots<i_ {k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}[g,\,z] \otimes\phi(z)\] \[+\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots<i_ {k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}z\otimes[g,\, \phi(z)]\]
We show for all \(g\in\mathfrak{so}(p,\,q)\) and \(z\in\mathfrak{I}_{n}^{\wedge k}\)
\[\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots<i_ {k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}[g,\,z] \otimes\phi(z)\] \[= -\sum_{s=0}^{k}\sum_{\begin{subarray}{c}1\leq i_{1}<i_{2}<\cdots <i_{k-s}\leq p\\ p+1\leq i_{k-s+1}<i_{k-s+2}<\cdots<i_{k}\leq n\end{subarray}}(-1)^{s}z\otimes \phi([g,\,z])\]
and then conclude \(\psi(g\cdot\phi)=g\cdot\psi(\phi)\).
The details of Lemma 5.3 are in my dissertation.
**Corollary 5.4**.: _There is a vector space isomorphism_
\[[\mathrm{Hom}(\mathfrak{I}_{n}^{\wedge k},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{ so}(p,\,q)}\simeq[\mathfrak{I}_{n}^{\wedge k}\otimes\mathfrak{h}_{p,\,q}]^{ \mathfrak{so}(p,\,q)},\ for\ all\ k. \tag{5.5}\]
Proof.: The following results follow because \(\psi\) in Lemma 5.3 is \(\mathfrak{so}(p,\,q)\)-equivariant. The invariants \([\mathrm{Hom}(\mathfrak{I}_{n}^{\wedge*},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{ so}(p,\,q)}\) can be found by direct calculations although this is difficult.
For \(k=1\), \([\mathrm{Hom}(\mathfrak{I}_{n},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{ so}(p,\,q)}=\langle I\rangle\),
\[\psi(I):=\sum_{i=1}^{p}\tfrac{\partial}{\partial x^{i}}\otimes I(\tfrac{ \partial}{\partial x^{i}})-\sum_{i=p+1}^{n}\tfrac{\partial}{\partial x^{i}} \otimes I(\tfrac{\partial}{\partial x^{i}})=I_{pq}. \tag{5.6}\]
where
\[I(\tfrac{\partial}{\partial x^{i}})=\tfrac{\partial}{\partial x^{i}}\,,i=1,2, \cdots,n.\]
Note that \(\langle I\rangle=\langle-I\rangle\).
For \(k=2\), \([\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge 2},\,\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}=\langle\rho\rangle\),
\[\begin{split}\psi(\rho)&:=\sum_{1\leq i<j\leq p} \tfrac{\partial}{\partial x^{i}}\wedge\tfrac{\partial}{\partial x^{j}}\otimes \rho(\tfrac{\partial}{\partial x^{i}}\wedge\tfrac{\partial}{\partial x^{j}}) \\ &+\sum_{p+1\leq i<j\leq n}\tfrac{\partial}{\partial x^{i}}\wedge \tfrac{\partial}{\partial x^{j}}\otimes\rho(\tfrac{\partial}{\partial x^{i}} \wedge\tfrac{\partial}{\partial x^{j}})\\ &-\sum_{\begin{subarray}{c}1\leq i\leq p\\ p+1\leq j\leq n\end{subarray}}\tfrac{\partial}{\partial x^{i}}\wedge\tfrac{ \partial}{\partial x^{j}}\otimes\rho(\tfrac{\partial}{\partial x^{i}}\wedge \tfrac{\partial}{\partial x^{j}})=\rho_{pq}\end{split} \tag{5.7}\]
where
\[\begin{split}\rho(\tfrac{\partial}{\partial x^{i}}\wedge\tfrac{ \partial}{\partial x^{j}})&=\beta_{ij},\,1\leq i\leq p,\,p+1\leq j \leq n.\end{split} \tag{5.8}\]
For \(k=n-1\), \([\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge(n-1)},\,\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}=\langle\beta\rangle\),
\[\begin{split}\psi(\beta):&=\sum_{i=1}^{p}\tfrac{ \partial}{\partial x^{1}}\wedge\cdots\wedge\tfrac{\partial}{\partial x^{i}} \wedge\cdots\wedge\tfrac{\partial}{\partial x^{n}}\otimes\beta(\tfrac{ \partial}{\partial x^{1}}\wedge\cdots\wedge\tfrac{\partial}{\partial x^{i}} \wedge\cdots\wedge\tfrac{\partial}{\partial x^{n}})\\ &-\sum_{i=p+1}^{n}\tfrac{\partial}{\partial x^{1}}\wedge\cdots \wedge\tfrac{\partial}{\partial x^{i}}\wedge\cdots\wedge\tfrac{\partial}{ \partial x^{n}}\otimes\beta(\tfrac{\partial}{\partial x^{1}}\wedge\cdots \wedge\tfrac{\partial}{\partial x^{i}}\wedge\cdots\wedge\tfrac{\partial}{ \partial x^{n}})\\ &=\beta_{pq}\end{split} \tag{5.9}\]
where
\[\beta(\tfrac{\partial}{\partial x^{1}}\wedge\cdots\wedge\tfrac{\partial}{ \partial x^{i}}\wedge\cdots\wedge\tfrac{\partial}{\partial x^{n}})=(-1)^{i+1} \tfrac{\partial}{\partial x^{i}},i=1,2,\cdots,p\]
\[\beta(\frac{\partial}{\partial x^{1}}\wedge\cdots\wedge\frac{\hat{\partial}}{ \partial x^{i}}\wedge\cdots\wedge\frac{\partial}{\partial x^{n}})=(-1)^{i} \frac{\partial}{\partial x^{i}},i=p+1,\cdots,n.\]
For \(k=n-2\), \([\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge(n-2)},\,\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}=\langle\gamma\rangle\),
\[\psi(\gamma):=\] \[\sum_{1\leq i<j\leq p}\frac{\partial}{\partial x^{1}}\wedge\cdots \frac{\hat{\partial}}{\partial x^{i}}\cdots\frac{\hat{\partial}}{\partial x^{ j}}\cdots\wedge\frac{\partial}{\partial x^{n}}\otimes\gamma(\frac{\partial}{ \partial x^{1}}\wedge\cdots\frac{\hat{\partial}}{\partial x^{i}}\cdots\frac{ \hat{\partial}}{\partial x^{j}}\cdots\wedge\frac{\partial}{\partial x^{n}})\] \[-\sum_{p+1\leq i<j\leq n}\frac{\partial}{\partial x^{1}}\wedge \cdots\frac{\hat{\partial}}{\partial x^{i}}\cdots\frac{\hat{\partial}}{ \partial x^{j}}\cdots\wedge\frac{\partial}{\partial x^{n}}\otimes\gamma(\frac{ \partial}{\partial x^{1}}\wedge\cdots\frac{\hat{\partial}}{\partial x^{i}} \cdots\frac{\hat{\partial}}{\partial x^{j}}\cdots\wedge\frac{\partial}{ \partial x^{n}})\] \[-\sum_{\begin{subarray}{c}1\leq i\leq p\\ p+1\leq j\leq n\end{subarray}}\frac{\partial}{\partial x^{1}}\wedge\cdots \frac{\hat{\partial}}{\partial x^{i}}\cdots\frac{\hat{\partial}}{\partial x^ {j}}\cdots\wedge\frac{\partial}{\partial x^{n}}\otimes\gamma(\frac{\partial}{ \partial x^{1}}\wedge\cdots\frac{\hat{\partial}}{\partial x^{i}}\cdots\frac{ \hat{\partial}}{\partial x^{j}}\cdots\wedge\frac{\partial}{\partial x^{n}})\] \[=\gamma_{pq} \tag{5.10}\]
where
\[\gamma(\frac{\partial}{\partial x^{1}}\wedge\cdots\frac{\hat{ \partial}}{\partial x^{i}}\cdots\frac{\hat{\partial}}{\partial x^{j}}\cdots \wedge\frac{\partial}{\partial x^{n}})=(-1)^{i+j+1}\alpha_{ij},\,\,1\leq i \leq j\leq p+1,\] \[\gamma(\frac{\partial}{\partial x^{1}}\wedge\cdots\frac{\hat{ \partial}}{\partial x^{i}}\cdots\frac{\hat{\partial}}{\partial x^{j}}\cdots \wedge\frac{\partial}{\partial x^{n}})=(-1)^{i+j+1}\alpha_{ij},\,\,p+1\leq i \leq j\leq n,\] \[\gamma(\frac{\partial}{\partial x^{1}}\wedge\cdots\frac{\hat{ \partial}}{\partial x^{i}}\cdots\frac{\hat{\partial}}{\partial x^{j}}\cdots \wedge\frac{\partial}{\partial x^{n}})=(-1)^{i+j+1}\beta_{ij},\,\,1\leq i \leq p,\,p+1\leq j\leq n. \tag{5.11}\]
\([\operatorname{Hom}(\mathfrak{I}_{n}^{\wedge k},\,\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}=\{0\}\), whenever \(k\notin\{1,2,n-1,n-2\}\).
## 6. Lie Algebra Cohomology of \(\mathfrak{h}_{p,\,q}\) with Coefficients
Let \(\mathfrak{g}\) be a Lie algebra over a ring \(k\) and \(V\) be any \(\mathfrak{g}\)-module. For each \(n\geq 0\), let \(\pi_{R}:\mathfrak{g}\otimes\mathfrak{g}^{\wedge(n+1)}\to\mathfrak{g}^{\wedge(n +2)}\) be the canonical projection \(\pi_{R}(g_{1}\otimes g_{2}\wedge\cdots\wedge g_{n+2})=g_{1}\wedge g_{2}\wedge...\wedge g_{n+2}\), and \(\pi_{R}^{*}:\operatorname{Hom}(\mathfrak{g}^{\wedge(n+2)},V)\to\operatorname{ Hom}(\mathfrak{g}\otimes\mathfrak{g}^{\wedge(n+1)},V)\) be the map induced by \(\pi_{R}\).
Let \(CR^{n}(\mathfrak{g})=\text{coker}[\text{Hom}(\mathfrak{g}^{\wedge(n+2)},V) \xrightarrow{\pi_{R}^{*}}\text{Hom}(\mathfrak{g}\otimes\mathfrak{g}^{\wedge(n+1 )},V)]\) and \(HR^{*}(\mathfrak{g})\) be the cohomology of the complex \(CR^{n}(\mathfrak{g})\).
There is a short exact sequence
\[0\xrightarrow{}\text{Hom}(\mathfrak{g}^{\wedge(n+2)},V)\xrightarrow{\pi_{R}^ {*}}\text{Hom}(\mathfrak{g}\otimes\mathfrak{g}^{\wedge(n+1)},V)\xrightarrow{} CR^{n}(\mathfrak{g})\xrightarrow{}0 \tag{6.1}\]
which induces a long exact sequence
\[\begin{split}\cdots\xrightarrow{}& H_{\text{Lie}}^{n+ 2}(\mathfrak{g};V)\xrightarrow{\pi_{R}^{*}}H_{\text{Lie}}^{n+1}(\mathfrak{g}; \mathfrak{g}^{{}^{\prime}})\xrightarrow{}HR^{n}(\mathfrak{g})\xrightarrow{c_{ R}}\\ & H_{\text{Lie}}^{n+3}(\mathfrak{g};V)\xrightarrow{\pi_{R}^{*}}H _{\text{Lie}}^{n+2}(\mathfrak{g};\mathfrak{g}^{{}^{\prime}})\xrightarrow{} \cdots\end{split} \tag{6.2}\]
where \(c_{R}\) is the connecting homomorphism and \(\mathfrak{g}^{{}^{\prime}}=\text{Hom}(\mathfrak{g},\,V)\).
In this section my calculation is done with \(k=V=\mathbf{R}\).
Let \(\alpha_{ij}^{*}\), \(\beta_{ij}^{*}\) be the dual of \(\alpha_{ij}\), \(\beta_{ij}\) respectively with respect to the basis \(\{\alpha_{ij}\}\)\(\cup\)\(\{\beta_{ij}\}\)\(\cup\)\(\{\frac{\partial}{\partial x^{i}}\}\) of \(\mathfrak{h}_{p,\,q}\), and let \(dx^{i}\) be the dual of \(\frac{\partial}{\partial x^{i}}\). Let
\[\begin{split}\gamma_{pq}^{*}&=\sum_{1\leq i<j\leq p }(-1)^{i+j}dx^{1}\wedge\cdots\wedge\widehat{dx^{i}}\wedge\cdots\wedge\widehat {dx^{j}}\wedge\cdots\wedge dx^{n}\otimes\alpha_{ij}\\ &-\sum_{p+1\leq i<j\leq n}(-1)^{i+j+1}dx^{1}\wedge\cdots\wedge \widehat{dx^{i}}\wedge\cdots\wedge\widehat{dx^{j}}\wedge\cdots\wedge dx^{n} \otimes\alpha_{ij}\\ &-\sum_{\begin{subarray}{c}1\leq i\leq p\\ p+1\leq j\leq n\end{subarray}}(-1)^{i+j+1}dx^{1}\wedge\cdots\wedge\widehat{ dx^{i}}\wedge\cdots\wedge\widehat{dx^{j}}\wedge\cdots\wedge dx^{n}\otimes\beta_{ij} \end{split} \tag{6.3}\]
**Lemma 6.1**.: _There is an isomorphism in Lie-algebra cohomology_
\[\begin{split} H_{\text{Lie}}^{*}(\mathfrak{h}_{p,\,q};\mathbf{R} )\simeq H_{\text{Lie}}^{*}(\mathfrak{so}(p,\,q));\mathbf{R})\otimes[H_{\text{ Lie}}^{*}(\mathfrak{I}_{n};\mathbf{R})]^{\mathfrak{so}(p,\,q)}\\ H_{\text{Lie}}^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q}^{{}^{ \prime}})\simeq H_{\text{Lie}}^{*}(\mathfrak{so}(p,\,q);\,\mathbf{R})\otimes[H _{\text{Lie}}^{*}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q}^{{}^{\prime}})]^{ \mathfrak{so}(p,\,q)}\end{split} \tag{6.4}\]
_where \(\mathfrak{h}^{{}^{\prime}}_{p,\,q}=\operatorname{Hom}(\mathfrak{h}_{p,\,q},\, \mathbf{R})\)._
Proof.: We apply the Hochschild-Serre spectral sequence [10] to the ideal \(\mathfrak{I}_{n}\) of \(\mathfrak{h}_{p,\,q}\) and use the isomorphism of Lie algebras \(\mathfrak{h}_{p,\,q}/\mathfrak{I}_{n}\simeq\mathfrak{so}(p,\,q)\).
From section section 5, \([\mathfrak{I}^{\wedge 0}_{n}]^{\mathfrak{so}(p,\,q)}=\mathbf{R}\), \([\mathfrak{I}^{\wedge n}_{n}]^{\mathfrak{so}(p,\,q)}=\left\langle v\right\rangle,\operatorname{where}v=\frac{\partial}{\partial x^{1}}\wedge\frac{\partial}{ \partial x^{2}}\wedge...\wedge\frac{\partial}{\partial x^{n}}\), and \([\mathfrak{I}^{\wedge k}_{n}]^{\mathfrak{so}(p,\,q)}=\{0\}\), \(k\notin\{0,n\}\).
**Theorem 6.2**.: _For \(p+q\geq 4\),_
\[H^{*}_{\mathrm{Lie}}(\mathfrak{h}_{p,\,q};\,\mathbf{R})\simeq H^{*}_{\mathrm{ Lie}}(\mathfrak{so}(p,\,q);\,\mathbf{R})\otimes(\mathbf{R}\oplus\left\langle v ^{*}\right\rangle), \tag{6.5}\]
_where \(v^{*}=dx^{1}\wedge dx^{2}\wedge\cdots\wedge dx^{n}\)._
Proof.: \([H^{*}_{\mathrm{Lie}}(\mathfrak{I}_{n};\,\mathbf{R})]^{\mathfrak{so}(p,\,q)}\) is the cohomology of the cochain complex \([\operatorname{Hom}(\wedge^{*}\mathfrak{I}_{n},\,\mathbf{R})]^{\mathfrak{so} (p,\,q)}\). Note that
\[[\operatorname{Hom}(\wedge^{*}\mathfrak{I}_{n},\,\mathbf{R})]^{\mathfrak{so}(p,\,q)}\simeq\operatorname{Hom}([\wedge^{*}\mathfrak{I}_{n}]^{\mathfrak{so}(p, \,q)},\,\mathbf{R}).\]
So we will compute \([H^{*}_{\mathrm{Lie}}(\mathfrak{I}_{n};\,\mathbf{R})]^{\mathfrak{so}(p,\,q)}\) from the complex:
\[\begin{split}&\operatorname{Hom}([\mathfrak{I}^{\wedge 0}_{n}]^{ \mathfrak{so}(p,\,q)},\,\mathbf{R})\xrightarrow{\delta}\operatorname{Hom}([ \mathfrak{I}_{n}]^{\mathfrak{so}(p,\,q)},\,\mathbf{R})\xrightarrow{\delta} \operatorname{Hom}([\mathfrak{I}^{\wedge 2}_{n}]^{\mathfrak{so}(p,\,q)},\,\mathbf{R}) \xrightarrow{\delta}\cdots\\ &\xrightarrow{\delta}\operatorname{Hom}([\mathfrak{I}^{\wedge n }_{n}]^{\mathfrak{so}(p,\,q)},\,\mathbf{R})\xrightarrow{\delta}0.\end{split} \tag{6.6}\]
\[[H^{0}_{\mathrm{Lie}}(\mathfrak{I}_{n};\,\mathbf{R})]^{\mathfrak{so}(p,\,q )}\simeq\mathbf{R},\,[H^{n}_{\mathrm{Lie}}(\mathfrak{I}_{n};\,\mathbf{R})]^{ \mathfrak{so}(p,\,q)}\simeq\operatorname{Hom}(\left\langle v\right\rangle\, \mathbf{R})\text{ and }\]
\[[H^{k}_{\mathrm{Lie}}(\mathfrak{I}_{n};\,\mathbf{R})]^{\mathfrak{so}(p,\,q)} \simeq\{0\}\text{ for }k\notin\{0,n\}\text{. As a consequence, }[H^{*}_{\mathrm{Lie}}(\mathfrak{I}_{n};\,\mathbf{R})]^{\mathfrak{so}(p,\,q)}\simeq \mathbf{R}\oplus\left\langle v^{*}\right\rangle\text{. By Lemma \ref{lem:
Proof.: \([H^{*}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}^{{}^{\prime}}_{p,\,q}]^{ \mathfrak{so}(p,\,q)}\) is the cohomology of the cochain complex \([\text{Hom}(\wedge^{*}\mathfrak{I}_{n},\,\mathfrak{h}^{{}^{\prime}}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}\). Note that
\[[\text{Hom}(\wedge^{*}\mathfrak{I}_{n},\,\mathfrak{h}^{{}^{\prime}}_{p,\,q})] ^{\mathfrak{so}(p,\,q)} =[\text{Hom}(\wedge^{*}\mathfrak{I}_{n},\,\text{Hom}(\mathfrak{h} _{p,\,q},\,\mathbf{R})]^{\mathfrak{so}(p,\,q)}\] \[\simeq[\text{Hom}(\wedge^{*}\mathfrak{I}_{n}\otimes\mathfrak{h} _{p,\,q},\,\mathbf{R})]^{\mathfrak{so}(p,\,q)}\] \[\simeq\text{Hom}([\wedge^{*}\mathfrak{I}_{n}\otimes\mathfrak{h} _{p,\,q}]^{\mathfrak{so}(p,\,q)},\,\mathbf{R})\]
We use some invariants provided in section 5 to compute the Lie-algebra homology groups \([H^{\text{Lie}}_{*}(\mathfrak{I}_{n};\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p, \,q)}\) and then apply the universal coefficient theorem to find \([H^{*}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}^{{}^{\prime}}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}\).
Lie-algebra homology groups \([H^{\text{Lie}}_{*}(\mathfrak{I}_{n};\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p, \,q)}\) can be computed from the complex:
\[\begin{split}[\mathfrak{I}_{n}^{\wedge 0}\otimes\mathfrak{h}_{p, \,q}]^{\mathfrak{so}(p,\,q)}\overset{\delta}{\leftarrow}[\mathfrak{I}_{n} \otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)}\overset{\delta}{\leftarrow }[\mathfrak{I}_{n}^{\wedge 2}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)}\overset{ \delta}{\leftarrow}\cdots\\ \cdots\leftarrow[\mathfrak{I}_{n}^{\wedge n-2}\otimes\mathfrak{h }_{p,\,q}]^{\mathfrak{so}(p,\,q)}\overset{\delta}{\leftarrow}[\mathfrak{I}_{ n}^{\wedge n-1}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)}\gets 0.\end{split} \tag{6.8}\]
Recall that \([\mathfrak{I}_{n}^{\wedge 0}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)} \simeq\{0\}\), \([\mathfrak{I}_{n}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)}\simeq \langle I_{pq}\rangle\), \([\mathfrak{I}_{n}^{\wedge 2}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p,\,q)}\simeq \langle\rho_{pq}\rangle\), \([\mathfrak{I}_{n}^{\wedge(n-1)}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p, \,q)}\simeq\langle\beta_{pq}\rangle\) and \([\mathfrak{I}_{n}^{\wedge(n-2)}\otimes\mathfrak{h}_{p,\,q}]^{\mathfrak{so}(p, \,q)}\simeq\langle\gamma_{pq}\rangle\).
Since \(\delta(I_{pq})=0\), \(\delta(\rho_{pq})=I_{pq}\), \(\delta(\beta_{pq})=0\), \(\delta(\gamma_{pq})=0\) it follows that
\[[H^{\text{Lie}}_{0}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}( p,\,q)}=\{0\},\]
\[[H^{\text{Lie}}_{1}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p, \,q)}=\langle I_{pq}\rangle/\langle I_{pq}\rangle\simeq\{0\},\]
\[[H^{\text{Lie}}_{2}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p, \,q)}=\{0\},\]
\[[H^{\text{Lie}}_{n-1}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so} (p,\,q)}=\langle\beta_{pq}\rangle,\]
and
\[[H^{\rm Lie}_{n-2}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}= \langle\gamma_{pq}\rangle.\]
Consequently, \([H^{\rm Lie}_{*}(\mathfrak{I}_{n};\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}\) has two classes; \(\langle\beta_{pq}\rangle\) and \(\langle\gamma_{pq}\rangle\), and
\[[H^{\rm Lie}_{*}(\mathfrak{I}_{n};\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q )}\simeq\langle\beta_{pq},\gamma_{pq}\rangle.\]
By the universal coefficient theorem,
\[[H^{*}_{\rm Lie}(\mathfrak{I}_{n};\,\mathfrak{h}^{{}^{\prime}}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}\simeq{\rm Hom}(\ [H^{\rm Lie}_{*}(\mathfrak{I}_{n};\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)};\,{\bf R}).\]
Therefore,
\[[H^{*}_{\rm Lie}(\mathfrak{I}_{n};\,\mathfrak{h}^{{}^{\prime}}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}\simeq\langle\beta^{*}_{pq},\gamma^{*}_{pq}\rangle,\]
where \(\beta^{*}_{pq}\) and \(\gamma^{*}_{pq}\) are the dual of \(\langle\beta_{pq}\rangle,\langle\gamma_{pq}\rangle\) respectively. By Lemma (6.1),
\[H^{*}_{\rm Lie}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}^{{}^{\prime}}_{p,\,q}) \simeq H^{*}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\otimes\langle\beta^{*} _{pq},\ \gamma^{*}_{pq}\rangle.\]
### The \(HR^{*}(\mathfrak{h}_{p,\,q})\) Cohomology Groups
**Theorem 6.4**.: _For \(m\geq 0\), there is a vector space isomorphism_
\[HR^{m}(\mathfrak{h}_{p,\,q})\simeq H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{ \bf R})\oplus(H^{m+3-n}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\otimes\langle \gamma^{*}_{pq}\rangle). \tag{6.9}\]
Proof.: We compute \(HR^{*}(\mathfrak{h}_{p,\,q})\) from the long exact sequence (6.2)
\[HR^{m}(\mathfrak{h}_{p,\,q})\stackrel{{ c_{R}}}{{ \longrightarrow}}H^{m+3}_{\rm Lie}(\mathfrak{h}_{p,\,q};{\bf R})\stackrel{{ \pi^{*}_{R}}}{{\longrightarrow}}H^{m+2}_{\rm Lie}(\mathfrak{h}_{p,\,q}; \mathfrak{h}^{{}^{\prime}}_{p,\,q})\xrightarrow{\ \
For \(\theta\in H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\), \(\pi^{*}_{R}(\theta)=0\) in \({\rm H}^{m+2}_{\rm Lie}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}^{{}^{\prime}}_{p,\,q})\). Consequently, we have \(\pi^{*}_{R}[H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q);{\bf R})]=0\), since \(\theta\) is arbitrary.
For \(\theta\,\otimes\,v^{*}\in H^{m+3-n}_{\rm Lie}(\mathfrak{so}(p,\,q);{\bf R}) \otimes\langle v^{*}\rangle\), \(\pi^{*}_{R}(\theta\,\otimes\,v^{*})=\theta\otimes\beta^{*}_{pq}\) in \({\rm H}^{m+2}_{\rm Lie}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}^{{}^{\prime}}_{p, \,q})\), so \(\theta\,\otimes\,\beta^{*}_{pq}\) will be trivial in \(HR^{m+1}(\mathfrak{h}_{p,\,q})\) and \(\theta\,\otimes\,\gamma^{*}_{pq}\in{\rm Im}[H^{m+2}_{\rm Lie}(\mathfrak{h}_{p,\,q};\mathfrak{h}^{{}^{\prime}}_{p,\,q})\to HR^{m+1}(\mathfrak{h}_{p,\,q})]\).
We conclude from the above that \({\rm Im}c_{R}={\rm ker}\pi^{*}_{R}\simeq H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q); \,R)\) and \({\rm coker}\pi^{*}_{R}\simeq H^{m+3-n}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{\mathbf{R} })\otimes\langle\gamma^{*}_{pq}\rangle\).
There are also two classes in \(HR^{m}(\mathfrak{h}_{p,\,q})\);
1. \(\theta^{\prime}\), where \(c_{R}(\theta^{\prime})=\theta\in H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q),{\mathbf{R} })\) (Those in \({\rm ker}\pi^{*}_{R}\))
2. \(\theta\,\otimes\,\gamma^{*}_{pq}\), where \(\theta\in H^{i}_{\rm Lie}(\mathfrak{so}(p,\,q),{\mathbf{R}})\) and \(i=0,1,2,\cdots\) (Those in \({\rm coker}\pi^{*}_{R}\)).
Finally,
\[HR^{m}(\mathfrak{h}_{p,\,q})\simeq H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q);\,R )\oplus(H^{m+3-n}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\otimes\langle \gamma^{*}_{pq}\rangle),{\rm where}\,m\geq 0.\]
\(\square\)
## 7. Leibniz Cohomology of \(\mathfrak{h}_{p,\,q}\) with Coefficients
Let \(\mathfrak{g}\) be a Lie algebra over a ring \(k\) and \(V\) be any \(\mathfrak{g}\)-module. For each \(n\geq 0\), let \(\pi_{\rm rel}:\mathfrak{g}^{\otimes(n+2)}\to\mathfrak{g}^{\wedge(n+2)}\) be the canonical projection map \(\pi_{\rm rel}(g_{1}\otimes g_{2}\otimes\ldots\otimes g_{n+2})=g_{1}\wedge g_{2 }\wedge\ldots\wedge g_{n+2}\), and \(\pi^{*}_{\rm rel}:{\rm Hom}(\mathfrak{g}^{\wedge(n+2)},V)\longrightarrow{\rm Hom }(\mathfrak{g}^{\otimes(n+2)},V)\) be the map induced by \(\pi_{\rm rel}\).
We define \(C^{n}_{\rm rel}(\mathfrak{g}):={\rm Coker}[{\rm Hom}(\mathfrak{g}^{\wedge(n+2 )},\,V)\xrightarrow{\pi^{*}_{\rm rel}}{\rm Hom}(\mathfrak{g}^{\otimes(n+2)}, \,V)]\) and \(H^{*}_{\rm rel}(\mathfrak{g})\) be the cohomology of the complex \(C^{*}_{\rm rel}(\mathfrak{g})\).
There is a short exact sequence
\[0\longrightarrow\operatorname{Hom}(\mathfrak{g}^{\wedge(n+2)},\,V)\stackrel{{ \pi_{\operatorname{rel}}^{*}}}{{\longrightarrow}}\operatorname{Hom}( \mathfrak{g}^{\otimes(n+2)},\,V)\longrightarrow C_{\operatorname{rel}}^{n}( \mathfrak{g})\longrightarrow 0\]
which induces a long exact sequence of cohomology
\[\begin{split}\cdots\longrightarrow& H_{\operatorname{ Lie}}^{n+2}(\mathfrak{g};\,V)\stackrel{{\pi_{ \operatorname{rel}}^{*}}}{{\longrightarrow}}HL^{n+2}(\mathfrak{g};\,V) \xrightarrow{}H_{rel}^{n}(\mathfrak{g})\stackrel{{ c_{ \operatorname{rel}}}}{{\longrightarrow}}\\ & H_{Lie}^{n+3}(\mathfrak{g};\,V)\stackrel{{\pi_{ \operatorname{rel}}^{*}}}{{\longrightarrow}}HL^{n+3}(\mathfrak{g};\,V) \longrightarrow H_{\operatorname{rel}}^{n+1}(\mathfrak{g})\stackrel{{ c_{ \operatorname{rel}}}}{{\longrightarrow}}\cdots\end{split} \tag{7.1}\]
where \(c_{\operatorname{rel}}\) is the connecting homomorphism.
### PIRASHVILI SPECTRAL SEQUENCE
**Theorem 7.1**.: _: Let \(\mathfrak{g}\) be a Lie algebra over a field \(\mathbf{F}\) and let \(V\) be a left \(\mathfrak{g}\)-module. Then there is a first-quadrant spectral sequence converging to \(H_{\operatorname{rel}}^{*}(\mathfrak{g};\,V)\) with_
\[E_{2}^{m,\,k}\simeq HR^{m}(\mathfrak{g})\otimes HL^{k}(\mathfrak{g};\,V),\quad m \geq 0,\quad k\geq 0,\]
_provided that \(HR^{m}(\mathfrak{g})\) and \(HL^{k}(\mathfrak{g};\,V)\) are finite dimensional vector spaces in each dimension. If this finite condition is not satisfied, then the completed tensor product \(\widehat{\otimes}\) can be used._
We outline the key features of the construction and introduce notation that will be used in the sequel. Let \(A^{m,\,k}\) denote those elements \(f\in\operatorname{Hom}(\mathfrak{g}^{\otimes(k+m+2)},\,V)\) that are skew-symmetric in the last \(m+1\) tensor factors of \(\mathfrak{g}^{\otimes(k+m+2)}\). Filter the complex \(C_{\operatorname{rel}}^{*}\) by
\[F^{m,\,k}=A^{m,\,k}/\operatorname{Hom}(\Lambda^{k+m+2}(\mathfrak{g}),\,V).\]
Then \(F^{m,\,*}\) is a subcomplex of \(C_{\operatorname{rel}}^{*}\) with \(F^{0,\,*}=C_{\operatorname{rel}}^{*}\) and \(F^{m+1,\,*}\subseteq F^{m,\,*}\). To identify the \(E_{0}^{*,\,*}\) term, use the isomorphism
\[\operatorname{Hom}(\mathfrak{g}^{\otimes(k+m+2)},\,V)\simeq\operatorname{Hom}( \mathfrak{g}^{\otimes(m+2)},\,\operatorname{Hom}(\mathfrak{g}^{\otimes k},\,V)) \tag{7.2}\]
Then
\[\begin{split} E_{0}^{m,\,k}&=F^{m,\,k}/F^{m+1,\,k-1} \\ &\simeq\operatorname{Hom}(\mathfrak{g}\otimes\mathfrak{g}^{ \wedge(m+1)}/\mathfrak{g}^{\wedge(m+2)},\,\operatorname{Hom}(\mathfrak{g}^{ \otimes k},\,V)),\end{split} \tag{7.3}\]
and \(d_{0}^{m,\,k}:E_{0}^{m,\,k}\to E_{0}^{m,\,k+1}\), \(m\geq 0\), \(k\geq 0\). It follows that
\[E_{1}^{m,\,k}\simeq\operatorname{Hom}(\mathfrak{g}\otimes\mathfrak{g}^{ \wedge(m+1)}/\mathfrak{g}^{\wedge(m+2)},\,HL^{k}(\mathfrak{g};\,V)).\]
Now, \(d_{1}^{m,\,k}:E_{1}^{m,\,k}\to E_{1}^{m+1,\,k}\). Since the action of \(\mathfrak{g}\) on \(HL^{*}(\mathfrak{g};\,V)\) is trival, we have \(E_{2}^{m,\,k}\simeq HR^{m}(\mathfrak{g})\widehat{\otimes}HL^{k}(\mathfrak{g};\,V)\). Using the isomorphism (7.2), we consider an element of \(E_{2}^{m,\,k}\) operationally in the form \(HL^{k}(\mathfrak{g};\,V)\widehat{\otimes}HR^{m}(\mathfrak{g})\).
In this section, all computations are done with \(V=\mathfrak{g}=\mathfrak{h}_{p,\,q}\) and \(k=\mathbf{R}\).
**Lemma 7.2**.: _There is an isomorphism in Lie-algebra cohomology_
\[H^{*}_{\operatorname{Lie}}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq H ^{*}_{\operatorname{Lie}}(\mathfrak{so}(p,\,q);\,\mathbf{R})\otimes[H^{*}_{ \operatorname{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so} (p,\,q)} \tag{7.4}\]
Proof.: We apply the Hochschild-Serre spectral sequence [10] to the ideal \(\mathfrak{I}_{n}\) of \(\mathfrak{h}_{p,\,q}\) and use the isomorphism of Lie algebras \(\mathfrak{h}_{p,\,q}/\mathfrak{I}_{n}\simeq\mathfrak{so}(p,\,q)\).
**Theorem 7.3**.: _There is an isomorphism in Lie-algebra cohomology_
\[H^{*}_{\operatorname{Lie}}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq H ^{*}_{\operatorname{Lie}}(\mathfrak{so}(p,\,q);\,\mathbf{R})\otimes\langle I,\rho\rangle. \tag{7.5}\]
Proof.: \([H^{*}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}\) is the cohomology of the complex \([\text{Hom}(\wedge^{*}\mathfrak{I}_{n},\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}\). We compute the Lie-algebra cohomology groups \([H^{*}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{so(p,\,q)}\) from the complex:
\[[\text{Hom}(\boldsymbol{R},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)} \xrightarrow{\delta}[\text{Hom}(\mathfrak{I}_{n},\,\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}\xrightarrow{\delta}[\text{Hom}(\mathfrak{I}_{n}^{ \wedge 2},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}\xrightarrow{\delta}\]
\(\delta I=0\), \(\delta\rho=0\), \(\delta\beta=(-1)^{(n-1)}(n-1)\gamma\) and \(\delta\gamma=0.\) It follows that,
\[[H^{0}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{ \mathfrak{so}(p,\,q)}\simeq[\text{Hom}(\boldsymbol{R},\,\mathfrak{h}_{p,\,q} )]^{\mathfrak{so}(p,\,q)}\simeq\{0\}.\]
\[[H^{1}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}\simeq[\text{Hom}(\mathfrak{I}_{n},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so }(p,\,q)}\simeq\langle I\rangle.\]
\[[H^{2}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]\simeq[\text{ Hom}(\mathfrak{I}_{n}^{\wedge 2},\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so}(p,\,q)}\simeq \langle\rho\rangle.\]
\[[H^{n-2}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so }(p,\,q)}\simeq\{0\}.\]
\[[H^{n-1}_{\text{Lie}}(\mathfrak{I}_{n};\,\mathfrak{h}_{p,\,q})]^{\mathfrak{so }(p,\,q)}\simeq\langle\gamma\rangle/\langle\gamma\rangle\simeq\{0\}.\]
We conclude by Lemma 7.2,
\[H^{*}_{\text{Lie}}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq H^{*}_{ \text{Lie}}(\mathfrak{so}(p,\,q);\,\mathbf{R})\otimes\langle I,\rho\rangle,\, \,\text{where I and $\rho$ is as defined above.}\]
We are now ready to compute \(HL^{*}(\mathfrak{h}_{p,\,q};\mathfrak{h}_{p,\,q})\), we begin in low dimensions.
**Lemma 7.4**.: \(HL^{0}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq 0\) _and \(HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=\langle I\rangle,\) where \(I\):_
\[\begin{split}& I(\alpha_{ij})=0,\ \ 1\leq i<j\leq p,\ \ p+1\leq i<j\leq n,\\ & I(\beta_{ij})=0,\ \ 1\leq i\leq p,\ \ p+1\leq j\leq n,\\ & I(\frac{\partial}{\partial x^{i}})=\frac{\partial}{\partial x^ {i}},\ \ i=1,2,\cdots,n.\end{split} \tag{7.6}\]
Proof.: We compute \(HL^{0}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \(HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) from long exact sequence (7.1). From (7.1), we have
\[0\longrightarrow H^{0}_{\text{Lie}}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p, \,q})\xrightarrow{\pi^{*}_{\text{rel}}}HL^{0}(\mathfrak{h}_{p,\,q};\, \mathfrak{h}_{p,\,q})\xrightarrow{C_{\text{rel}}}0,\]
which implies \(\pi^{*}_{\text{rel}}\) is an isomorphism and
\[HL^{0}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq H^{0}_{\text{Lie}}( \mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq 0.\]
From (7.1), we also have
\[0\longrightarrow H^{1}_{\text{Lie}}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p, \,q})\xrightarrow{\pi^{*}_{\text{rel}}}HL^{1}(\mathfrak{h}_{p,\,q};\, \mathfrak{h}_{p,\,q})\xrightarrow{C_{\text{rel}}}0,\]
\(\pi^{*}_{\text{rel}}\) is an isomorphism and
\[HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq H^{1}_{\text{Lie}}( \mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=\langle I\rangle,\]
where \(I\) is just as defined above.
For higher dimensions, the calculations for \(HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) proceed in a recursive manner, using results from lower dimensions to compute higher dimensions. Our strategy is to first find \(H^{*}_{\text{rel}}(\mathfrak{h}_{p,\,q})\) and then insert \(H^{*}_{\text{rel}}(\mathfrak{h}_{p,\,q})\) into the long exact sequence (7.1) to compute \(HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\). The Pirashvili spectral sequence converges
to \(H^{*}_{\rm rel}(\mathfrak{h}_{p,\,q})\) with \(E_{2}\) term given by
\[E_{2}^{m,\,k}\simeq HL^{k}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\otimes HR ^{m}(\mathfrak{h}_{p,\,q}),\ \ where\ \ \ m\geq 0,\ \ k\geq 0.\]
We demonstrate the strategy in an easy example: From the Pirashvili spectral sequence,
\[E_{2}^{m,\,0}\simeq HL^{0}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\otimes HR ^{m}(\mathfrak{h}_{p,\,q})\simeq 0\]
for all \(m\) since \(HL^{0}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=0.\) Since \(E_{2}^{0,\,0}\simeq 0\), we have \(H^{0}_{\rm rel}(\mathfrak{h}_{p,\,q})\simeq E_{2}^{0,\,0}\simeq 0\). Now, we insert \(H^{0}_{\rm rel}(\mathfrak{h}_{p,\,q})\) into (7.1) and it yields the following:
\[0\longrightarrow H^{2}_{\rm Lie}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q}) \xrightarrow{\pi^{*}_{\rm rel}}HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\xrightarrow{C_{\rm rel}}H^{0}_{rel}(\mathfrak{h}_{p,\,q})\simeq 0.\]
Consequently, \(HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq H^{2}_{\rm Lie}( \mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})=\langle\rho\rangle\), where \(\rho\) is as defined above with \(\wedge\) replaced with \(\otimes\).
**Lemma 7.5**.: _For \(n=p+q\geq 4\), \(HL^{n}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq[I\otimes\gamma^{*}_{ pq}]\) and \(HL^{n+1}(\mathfrak{h}_{p,\,q})\simeq[\rho\otimes\gamma^{*}_{pq}]\)._
Proof.: We begin the iteration of elements in the \(E_{2}^{*,*}\) term of the Pirashvili spectral sequence with \(HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\). Consider the following elements :
\[I\otimes\theta^{\prime}\,\in\,HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p, \,q})\otimes HR^{m}(\mathfrak{h}_{p,\,q})\subseteq E_{2}^{m,\,1}\]
\[I\otimes\gamma^{*}_{pq}\,\in HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p, \,q})\otimes HR^{n-3}(\mathfrak{h}_{p,\,q})\subseteq E_{2}^{m,\,n-3}\]
where \(c_{R}(\theta^{\prime})=\theta\in H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q),{\bf R})\).
In the Pirashvili spectral sequence, \(d_{r}^{m,\,1}(I\otimes\theta^{\prime})=0\) for all \(r\geq 2\), so \(I\otimes\theta^{\prime}\) will not be a coboundary and therefore \([I\otimes\theta^{\prime}]\in H^{m+1}_{\rm rel}(\mathfrak{h}_{p,\,q})\). Using the long exact sequence
(7.1), we show that \([I\otimes\theta]\) is mapped to \(0\) in \(HL^{m+4}\). Now,
\[\delta(I\otimes\theta^{\prime})(g_{1}\otimes\cdots\otimes g_{m+4}) =\delta I\otimes\theta^{\prime}\ (g_{1}\otimes\cdots\otimes g_{m+4})-I\otimes\delta\theta^{\prime}\ (g_{1}\otimes\cdots\otimes g_{m+4})\] \[+\sum_{i=3}^{m+4}(-1)^{i}g_{i}I(g_{1})\theta^{\prime}(g_{2} \otimes\cdots\widehat{g_{i}}\cdots\otimes g_{m+4}),\]
\(g_{i}I=0\) for all \(g_{i}\in\mathfrak{h}_{p,\,q}\) since \(I\) is \(\mathfrak{h}_{p,\,q}\)-invariant and \(\delta I=0\), then
\[\delta(I\otimes\theta^{\prime})=-I\otimes\delta\theta^{\prime}=-I\otimes c_{R }(\theta^{\prime})=-I\otimes\theta.\]
Consequently, \(c_{\text{rel}}([I\otimes\theta^{\prime}])=[\delta(I\otimes\theta^{\prime})]=[ I\otimes\theta]=[I\wedge\theta]\) in \(H^{m+4}_{Lie}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \(\pi^{*}_{\text{rel}}([I\otimes\theta])=0\) in \(HL^{m+4}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\).
For \(I\otimes\gamma^{*}_{pq}\), it is quite easy to see that \(d^{m,\,1}_{r}(I\otimes\gamma^{*}_{pq})=0\), for all \(r\geq 2\) and \(I\otimes\gamma^{*}_{pq}\) is not a coboundary in the Pirashvili spectral sequence, therefore \([I\otimes\gamma^{*}_{pq}]\in H^{n-2}_{\text{rel}}(\mathfrak{h}_{p,\,q})\). Now, we shift our attention to (7.1) where
\[\delta(I\otimes\gamma^{*}_{pq})(g_{1}\otimes\cdots\otimes g_{n+1}) =\delta I\otimes\gamma^{*}_{pq}(g_{1}\otimes\cdots\otimes g_{n+1} )-I\otimes\delta\gamma^{*}_{pq}(g_{1}\otimes\cdots\otimes g_{n+1})\] \[+\sum_{i=3}^{n+1}(-1)^{i}g_{i}I(g_{1})\gamma^{*}_{pq}(g_{2} \otimes\cdots\widehat{g_{i}}\cdots\otimes g_{n+1}).\]
We know that \(\delta I=0\), \(\delta\gamma^{*}_{pq}=0\) and \(g_{i}I=0\) for all \(g_{i}\in\mathfrak{h}_{p,\,q}\), then \(\delta(I\otimes\gamma^{*}_{pq})=0\). Consequently, \(c_{\text{rel}}([I\otimes\gamma^{*}_{pq}])=[\delta(I\otimes\gamma^{*}_{pq})]=0\) in \(H^{n+1}_{\text{Lie}}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) which implies
\[[I\otimes\gamma^{*}_{pq}]\in HL^{n}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q }).\]
We continue the iteration of elements in the \(E^{*,*}_{2}\) term of the Pirashvili spectral sequence with \(HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\). Consider these elements:
\[\rho\otimes\theta^{\prime}\,\in HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q })\otimes HR^{m}(\mathfrak{h}_{p,\,q})\subseteq E_{2}^{m,\,2}\]
\[\rho\otimes\gamma_{pq}^{*}\,\in HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p, \,q})\otimes HR^{n-3}(\mathfrak{h}_{p,\,q})\subseteq E_{2}^{m,\,n-3},\]
where \(c_{R}(\theta^{\prime})=\theta\in H_{\text{Lie}}^{m+3}(\mathfrak{so}(p,\,q), \mathbf{R}).\)
In the Pirashvili spectral sequence, \(d_{r}^{m,\,2}(\rho\otimes\theta^{\prime})=0\) for all \(r\geq 2\), so \(\rho\otimes\theta^{\prime}\) is not a coboundary and therefore \([\rho\otimes\theta^{\prime}]\in H_{\text{rel}}^{m+2}(\mathfrak{h}_{p,\,q}).\) Using the long exact sequence (7.1), we show that \([\rho\otimes\theta]\) is mapped to \(0\) in \(HL^{m+5}\). Now,
\[\delta(\rho\otimes\theta^{\prime})(g_{1}\otimes\cdots\otimes g_{ m+5}) =\delta\rho\otimes\theta^{\prime}(g_{1}\otimes\cdots\otimes g_{m+5 })+\rho\otimes\delta\theta^{\prime}(g_{1}\otimes\cdots\otimes g_{m+5})\] \[+\sum_{i=4}^{m+5}(-1)^{i}g_{i}\rho(g_{1}\otimes g_{2})\theta^{ \prime}(g_{3}\otimes\cdots\widehat{g_{i}}\cdots\otimes g_{m+5}),\]
\(\delta\rho=0\) and the element \(\theta^{\prime}\in HR^{m}(\mathfrak{h}_{p,\,q})\) can be chosen so that \(\theta^{\prime}(y_{1}\otimes y_{2}\otimes...\otimes y_{m+2})=0\) if any \(y_{i}\in\mathfrak{I}_{n}\). Since \(\rho\otimes\theta^{\prime}\in E_{2}^{m,\,2}\), \(d_{m,\,2}^{2}(\rho\otimes\theta^{\prime})\) is skew symmetric in the variables \(g_{3},g_{4},\cdots,g_{m+5}\), and suppose that \(g_{i}\in\mathfrak{I}_{n}\) for one and only one \(\text{i}\in\{4,5,\cdots,m+5\}\), then
\[g_{i}\rho(g_{1}\otimes g_{2})\theta^{\prime}(g_{3}\otimes\cdots\widehat{g_{i} }...\otimes g_{m+5})=\pm g_{3}\rho(g_{1}\otimes g_{2})\theta^{\prime}(g_{i} \otimes...\widehat{g_{3}}\cdots\otimes g_{m+5})=0,\]
since \(g_{3}\in\mathfrak{so}(p,\,q)\)-invariant and \(\rho\) is an \(\mathfrak{so}(p,\,q)\)-invariant. It follow that
\[\delta(\rho\otimes\theta^{\prime})=\rho\otimes\delta\theta^{\prime}=\rho \otimes c_{R}(\theta^{\prime})=\rho\otimes\theta.\]
Consequently, \(c_{\text{rel}}([\rho\otimes\theta^{\prime}])=[\delta(\rho\otimes\theta^{ \prime})]=[\rho\otimes\theta]=[\rho\wedge\theta]\) in \(H_{Lie}^{m+5}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \(\pi_{\text{rel}}^{*}([\rho\otimes\theta])=0\) in \(HL^{m+5}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\).
For \(\gamma_{pq}^{*}\in HR^{n-3}(\mathfrak{h}_{p,\,q})\), we have \(\rho\otimes\gamma_{pq}^{*}\in E_{2}^{n-3,\,2}\) and \(E_{2}^{n-5,\,3}\xrightarrow{d_{2}^{n-5,\,3}}E_{2}^{n-3,\,2}\xrightarrow{d_{2}^ {n-3,\,2}}E_{2}^{n-1,\,1}.\)\(d_{2}^{n-3,\,2}(\rho\otimes\gamma_{pq}^{*})=0\), \(d_{2}^{n-5,\,3}\) is also a zero map, so \(E_{3}^{n-3,\,2}\simeq E_{2}^{n-3,\,2}.\) Now, on
the third page of the Pirashvili spectral sequence we have
\[E_{3}^{n-6,\,4}\xrightarrow{d_{3}^{n-3,\,2}}E_{3}^{n-3,\,2}\xrightarrow{d_{3}^{n-3, \,2}}E_{3}^{n,\,0}\simeq\{0\}.\]
Thus, \(d_{r}^{n-3,\,2}(\rho\otimes\gamma_{pq}^{*})=0\) for all \(r\geq 3\) and \(\rho\otimes\gamma_{pq}^{*}\) is not a coboundary the Pirashvili spectral sequence, which implies \([\rho\otimes\gamma_{pq}^{*}]\in H_{\mathrm{rel}}^{n-1}(\mathfrak{h}_{p,\,q})\). Again, we shift our attention to (7.1),
\[c_{\mathrm{rel}}([\rho\otimes\gamma_{pq}^{*}])=[\delta(\rho\otimes\gamma_{pq}^ {*})]=0\]
in \(H_{\mathrm{Lie}}^{n+2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) which then implies
\[[\rho\otimes\gamma_{pq}^{*}]\in HL^{n+1}(\mathfrak{h}_{p,\,q}).\]
We have established that \([I\otimes\gamma_{pq}^{*}]\in HL^{n}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \([\rho\otimes\gamma_{pq}^{*}]\in HL^{n+1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p, \,q})\).
**Theorem 7.6**.: _For \(p+q\geq 4\), \(HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq\langle I,\rho\rangle \otimes T(\gamma_{pq}^{*})\), where \(T(\gamma_{pq}^{*}):=\sum\limits_{k\geq 0}\langle\gamma_{pq}^{*}\rangle^{ \otimes k}\) is the tensor algebra on the class of_
\[\begin{split}\gamma_{pq}^{*}&=\sum\limits_{1\leq i <j\leq p}(-1)^{i+j}dx^{1}\wedge\cdots\wedge\widehat{dx^{i}}\wedge\cdots\wedge \widehat{dx^{j}}\wedge\cdots\wedge dx^{n}\otimes\alpha_{ij}^{*}\\ &-\sum\limits_{p+1\leq i<j\leq n}(-1)^{i+j+1}dx^{1}\wedge\cdots \wedge\widehat{dx^{i}}\wedge\cdots\wedge\widehat{dx^{j}}\wedge\cdots\wedge dx ^{n}\otimes\alpha_{ij}^{*}\\ &-\sum\limits_{\begin{subarray}{c}1\leq i\leq p\\ p+1\leq j\leq n\end{subarray}}(-1)^{i+j+1}dx^{1}\wedge\cdots\wedge\widehat{ dx^{i}}\wedge\cdots\wedge\widehat{dx^{j}}\wedge\cdots\wedge dx^{n}\otimes\beta_{ij}^{*} \end{split} \tag{7.7}\]
Proof.: The result for \(k=0\) and \(k=1\) follow from Lemma 7.4 and Lemma 7.5 respectively. We continue with iteration of elements in the \(E_{2}^{*,*}\) term of the Pirashvili
spectral sequence.
Consider \([I\otimes\gamma^{*}_{pq}]\in HL^{n}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \(\theta^{\prime}\in HR^{m}(\mathfrak{h}_{p,\,q}),\)
\[[I\otimes\gamma^{*}_{pq}]\otimes\theta^{\prime}\in HL^{n}(\mathfrak{h}_{p,\,q}; \,\mathfrak{h}_{p,\,q})\otimes HR^{m}(\mathfrak{h}_{p,\,q})\subseteq E_{2}^{m, \,n}.\]
In the Pirashvili spectral sequence, \(d^{m,n}_{r}((I\otimes\gamma^{*}_{pq})\otimes\theta^{\prime})=0\) for \(2\leq r\leq n-1,\) but on page \(n\) we have
\[d^{m,\,n}_{n}((I\otimes\gamma^{*}_{pq})\otimes\theta^{\prime})=I\otimes(\theta \otimes\gamma^{*}_{pq})\in HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q} )\otimes HR^{m+n}(\mathfrak{h}_{p,\,q})\subseteq E_{n}^{m+n,\,1}.\]
We conclude \([I\otimes(\theta\ \otimes\gamma^{*}_{pq})]\notin H^{*}_{\rm rel}(\mathfrak{h}_{p,\,q}),\) when \(\theta\in H^{m+3}_{Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\,{\rm and}\,m=0,1,2,\cdots.\)
Note: The case of \(\theta\otimes\gamma^{*}_{pq}\in HR^{m+n}(\mathfrak{h}_{p,\,q}),\) where \(\theta\in H^{m+3}_{Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\,{\rm and}\,m=0,1,2,\cdots,\) with \(HL^{1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\)\(\big{(}\ I\otimes(\theta\otimes\gamma^{*}_{pq})\in HL^{1}(\mathfrak{h}_{p,\,q};\, \mathfrak{h}_{p,\,q})\otimes HR^{m+n}(\mathfrak{h}_{p,\,q})\subseteq E_{2}^{m +n,\,1}\ \big{)}\) is now covered in the iteration.
With \([I\otimes\gamma^{*}_{pq}]\in HL^{n}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \(\gamma^{*}_{pq}\in HR^{n-3}(\mathfrak{h}_{p,\,q}),\) we have \([I\otimes\gamma^{*}_{pq}]\otimes\gamma^{*}_{pq}\in HL^{n}(\mathfrak{h}_{p,\,q} ;\,\mathfrak{h}_{p,\,q})\otimes HR^{n-3}(\mathfrak{h}_{p,\,q})\subseteq E_{2} ^{n-3,\,n}.\) In the Pirashvili spectral sequence, \(d^{n-3,\,n}_{r}([I\otimes\gamma^{*}_{pq}]\otimes\gamma^{*}_{pq})=0,\) for all \(r\geq 2,\) which implies \([I\otimes\gamma^{*}_{pq}]\otimes\gamma^{*}_{pq}\in H^{2n-3}_{\rm rel}( \mathfrak{h}_{p,\,q}).\)
Now,
\[\delta((I\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq})(g_{1} \otimes\cdots\otimes g_{n}\otimes\cdots\otimes g_{2n})\] \[=\delta(I\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq}(g_{1} \otimes\cdots\otimes g_{n}\otimes\cdots\otimes g_{2n})\] \[+(I\otimes\gamma^{*}_{pq})\otimes\delta\gamma^{*}_{pq}(g_{1} \otimes\cdots\otimes g_{n}\otimes\cdots\otimes g_{2n})\] \[+\sum_{i=n+2}^{2n}(-1)^{i}(g_{i}(I\otimes\gamma^{*}_{pq}))(g_{1} \otimes\cdots\otimes g_{n})\gamma^{*}_{pq}(g_{n+1}\otimes\cdots\otimes\hat{g_{ i}}\otimes\cdots\otimes g_{2n}),\]
since \(\delta(I\otimes\gamma^{*}_{pq})=0\), \(\delta\gamma^{*}_{pq}=0\), \(I\) and \(\gamma^{*}_{pq}\) are \(\mathfrak{h}_{p,\,q}\)-invariant, then \(\delta((I\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq})=0\). Consequently, \(c_{\rm rel}(\left[[I\otimes\gamma^{*}_{pq}]\otimes\gamma^{*}_{pq}\right])=[ \delta(I\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq})]=0\) in \(H^{2n}_{\rm Lie}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\), so \([I\otimes\gamma^{*}_{pq}]\otimes\gamma^{*}_{pq}\in HL^{2n-1}(\mathfrak{h}_{p, \,q};\,\mathfrak{h}_{p,\,q})\).
Using the same argument above for \([\rho\otimes\gamma^{*}_{pq}]\otimes\theta^{\prime}\in HL^{n+1}(\mathfrak{h}_{p, \,q};\,\mathfrak{h}_{p,\,q})\otimes HR^{m}(\mathfrak{h}_{p,\,q})\subseteq E^{m,\,n+1}_{2}\) and \(\rho\otimes(\theta\ \otimes\gamma^{*}_{pq})\in HL^{2}(\mathfrak{h}_{p,\,q};\, \mathfrak{h}_{p,\,q})\otimes HR^{m+n}(\mathfrak{h}_{p,\,q})\subseteq E^{m+n, \,2}_{2}\) : On page \(n\) of the Pirashvilli spectral sequence,
\[d^{m,\,n+1}_{n}((\rho\otimes\gamma^{*}_{pq})\otimes\theta^{\prime})=\rho \otimes(\theta\otimes\gamma^{*}_{pq}).\]
Therefore, \([\rho\otimes(\theta\ \otimes\gamma^{*}_{pq})]\) is not in \(H^{*}_{\rm rel}(\mathfrak{h}_{p,\,q})\), when \(\theta\in H^{m+3}_{\rm Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\) and \(m=0,1,2,\cdots\).
Similarly, the case of \(\theta\otimes\gamma^{*}_{pq}\in HR^{m+n}(\mathfrak{h}_{p,\,q})\), where \(\theta\in H^{m+3}_{Lie}(\mathfrak{so}(p,\,q);\,{\bf R})\,\mbox{and}\,m=0,1,2,\cdots\), with \(HL^{2}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\)\(\left(\ \rho\otimes(\theta\otimes\gamma^{*}_{pq})\in HL^{2}(\mathfrak{h}_{p,\,q};\, \mathfrak{h}_{p,\,q})\otimes HR^{m+n}(\mathfrak{h}_{p,\,q})\subseteq E^{m+n, \,2}_{2}\ \right)\) is now covered in the iteration.
Consider \([\rho\otimes\gamma^{*}_{pq}]\in HL^{n+1}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) and \(\gamma^{*}_{pq}\in HR^{n-3}(\mathfrak{h}_{p,\,q})\), we have \((\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq}\in E_{2}^{n-3,\,n+1}\). Now,
\[\delta((\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq})(g_{1}\otimes\cdots \otimes g_{2n+1})\]
\[=\delta(\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq}(g_{1}\otimes\cdots \otimes g_{2n+1})\]
\[+(\rho\otimes\gamma^{*}_{pq})\otimes\delta\gamma^{*}_{pq}(g_{1}\otimes\cdots \otimes g_{2n+1})\]
\[+\sum_{i=n+3}^{2n+1}(-1)^{i}g_{i}\rho(g_{1}\otimes g_{2})\gamma^{*}_{pq}(g_{3} \otimes\cdots\otimes g_{n+1})\gamma^{*}_{pq}(g_{n+2}\otimes\cdots\hat{g}_{i} \cdots\otimes g_{2n+1}).\]
\[\delta((\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq})(g_{1}\otimes\cdots \otimes g_{2n+1})=\sum_{i=n+3}^{2n+1}(-1)^{i}g_{i}\rho(g_{1}\otimes g_{2}) \gamma^{*}_{pq}(g_{3}\otimes\cdots\otimes g_{n+1})\gamma^{*}_{pq}(g_{n+2} \otimes\cdots\hat{g}_{i}\cdots\otimes g_{2n+1}),\]
since \(\delta(\rho\otimes\gamma^{*}_{pq})=0\) and \(\delta\gamma^{*}_{pq}=0.\) Using a skew symmetric argument, \(\delta((\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq})=0\), which implies \((\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq}\in H^{2n-2}_{\rm rel}( \mathfrak{h}_{p,\,q})\). Consequently, \(\pi^{*}_{\rm rel}\big{(}\big{[}[\rho\otimes\gamma^{*}_{pq}]\otimes\gamma^{*}_ {pq}\big{]}\big{)}=0\). Therefore \((\rho\otimes\gamma^{*}_{pq})\otimes\gamma^{*}_{pq}\in HL^{2n}(\mathfrak{h}_{p, \,q};\,\mathfrak{h}_{p,\,q})\).
Also, in the Pirashvili spectral sequence, we have
\[d^{n}((I\otimes(\gamma^{*}_{pq})^{\otimes 2})\otimes\theta^{\prime})=(I \otimes\gamma^{*}_{pq})\otimes(\gamma^{*}_{pq}\otimes\theta)\]
\[d^{n}((\rho\otimes(\gamma^{*}_{pq})^{\otimes 2})\otimes\theta^{\prime})=( \rho\otimes\gamma^{*}_{pq})\otimes(\gamma^{*}_{pq}\otimes\theta)\]
By induction on k, \(HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\) is the direct sum of vector spaces \(\langle I,\rho\rangle\otimes(\gamma^{*}_{pq})^{\otimes k}\). We conclude that
\[HL^{*}(\mathfrak{h}_{p,\,q};\,\mathfrak{h}_{p,\,q})\simeq\langle I,\rho \rangle\otimes T(\gamma^{*}_{pq})\] |
2307.07854 | AdvFusion: Multilingual Adapter-based Knowledge Transfer for Code
Summarization | Parameter Efficient Fine-Tuning (PEFT) is an alternate choice to full
fine-tuning a language model. Though PEFT methods are used in natural language
domain widely, there are limited studies on using PEFT for language models that
are pre-trained on code and comment datasets (i.e., code-LMs). Previous
research has also shown that code summarization, a task that intends to
generate natural description of the given code snippet automatically and is
known to benefit the program comprehension, benefits from multilingual
fine-tuning approach. In multilingual fine-tuning, the code-LM is fine-tuned on
a dataset consisting of different programming languages.
AdapterFusion is a specific PEFT approach that aims to extract and compose
the latent knowledge from multiple (language) adapters for a downstream task.
However, our experiments reveal that the AdapterFusion still learns from the
same language, not taking advantage of other programming languages. Therefore,
we change the architecture and propose AdvFusion, a PEFT approach that enforces
the model to first learn from other programming languages, and then pay
attention to the language of the target task. Therefore, the AdvFusion
emphasizes the knowledge transfer among different programming languages, as
stated in the multilingual fine-tuning.
Our results on the CodeSearchNet dataset using two code-LMs, show that
Adapters, AdapterFusion, and our proposed AdvFusion can achieve results on-par
with or higher than the full fine-tuning models for code summarization and
method name prediction. Notably, the number of trainable parameters are 123x
less and the training time is reduced by ~30%. AdvFusion exhibits a notable
enhancement compared to AdapterFusion, showcasing a 0.9 to 1.7-point increase
in BLEU-4 scores specifically for Ruby, JavaScript, and Go. | Iman Saberi, Fatemeh Fard, Fuxiang Chen | 2023-07-15T17:17:16Z | http://arxiv.org/abs/2307.07854v2 | # Multilingual Adapter-based Knowledge Aggregation on Code Summarization for Low-Resource Languages
###### Abstract
Multilingual fine-tuning (of a multilingual Pre-trained Language Model) has shown to improve performance of downstream tasks. However, it was observed that different programming languages may have different structural properties, and thus the learning or fine-tuning of a model may be sub-optimal or even degrade the intended performance by using a multilingual dataset. In this study, we proposed a new modular component architecture, _AdvFusion_, that leverages the different aspects of programming languages for a target popular low-resource programming language, Ruby. Our result shows that AdvFusion can extract useful features from different programming languages efficiently, and it outperforms the existing state-of-the-art multilingual fine-tuning by 12% on the Code Summarization task.
adapters, low-resource languages
## 1 Introduction
Providing high-quality large datasets is a challenging and time-consuming activity [1, 2, 3, 4]. Inspired by the success of using Pre-trained Language Model (PLM) in transfer learning for low-resource languages (i.e., a language that is lacking training data) in Natural Language Processing (NLP) [5, 6, 7, 8], several Natural Language to Programming Language (NL-PL) models have been proposed to take advantage of high-resource (i.e., a language that has abundance of training data) datasets for tasks involving low-resource programming languages [2, 3, 9].
Ahmed and Devanbu reported that fine-tuning a multilingual PLM on CodeSearchNet [10], a multilingual dataset consisting of six programming languages (i.e., Ruby, JavaScript, Go, Java, Python and PHP), yields better results on downstream tasks such as code summarization, code retrieval, and method name prediction, as compared to fine-tuning a multilingual PLM on a monolingual dataset [2]. However, there are semantic and syntactic differences among different programming languages [3] and training a PLM on several different programming languages may lead to knowledge interference - the same semantics can have dissimilar syntaxes or similar syntaxes can have different semantics in different programming languages. For example, Python employs an indentation-based code structure while Java uses parenthesis for separating blocks; and the default keyword has a different meaning in Java than it has in Go [11]. If we combine them together, it may confuse the model's learning process.
In a separate study, Chen et al. analyzed the transferability of programming language PLMs for Ruby, a low-resource language, for two downstream tasks: code summarization and code search. It was reported that for the target programming language, Ruby, the PLMs that were pre-trained on programming languages similar to Ruby yield better performance on the downstream tasks. They also proposed a strategy to select similar programming languages for pre-training a PLM based on the semantic and textual similarities of the target programming language [3].
The former study [2] utilized the entire dataset and do not consider the structural and semantic differences of the different programming languages, which may cause knowledge interference, while the latter work [3] mitigates this knowledge interference by selecting only a subset of dataset that are similar to the target programming language. However, those unselected datasets may still be useful due to their high-resource nature and undiscovered latent properties. Thus, we would like to better utilize the entire dataset (i.e., multilingual dataset) by selecting only the similar aspects of each programming language in the dataset for a _target_ programming language. Here, the _target_ programming language refers to a low-resource programming language.
To do that, we leverage modular components to utilize the entire multilingual dataset while minimizing the interferences between different programming languages. Modular components have an extensive history in NLP [12, 13, 14]. A recently introduced modular component is _adapter_, which is a light-weight module inserted between the transformer layers [15]. It is proposed as a _parameter-efficient and quick fine-tuning_ (i.e., using less number of parameters and less time) approach for fine-tuning transformer-based models [15]. Adapters have been used previously to show the transferability of natural language PLMs to programming languages with lower training costs - the authors trained language adapters (i.e., adapters trained on masked lan
guage modeling on unlabelled data) and task adapters (i.e., adapters trained for a target task on labelled data) for code clone detection [16].
Although adapters are used previously for transferability of natural languages to programming languages [16], little is known about how adapters can be used for knowledge aggregation for low-resource languages, nor how adapters can be effectively inserted into the layers of programming language based PLMs such as CodeBERT [17] for fine-tuning downstream tasks. In this work, we focused on the studying of a popular downstream task - code summarization, and we treat Ruby as low-resource language, because it has the lowest number of data in CodeSearchNet and it is studied as a low-resource language in previous work [3]. First, we investigate the possibility and impact of improving the performance of a programming language based PLM using task adapters on monolingual data. We study to what extent we can leverage adapters for fine-tuning on downstream tasks and how much performance improvement we can achieve without including additional data (RQ1). Then, we evaluate specific adapters known as _adapter fusions_[13] for knowledge aggregation from multilingual data. The process of knowledge aggregation involves leveraging information from diverse programming languages to enhance the overall understanding and performance for a specific programming language. With the increasing availability of multilingual data, it becomes crucial to develop techniques that can effectively utilize this wealth of information. Finally, we propose **Adversarial Fusion adapter (AdvFusion)** to address the shortcomings of adapter fusions, as detailed below (RQ2). In additon, we compare the parameter and time efficiency of our fine-tuning approach with the traditional way of fine-tuning a model.
We first evaluate to what extent we can improve the performance of monolingual fine-tuning of programming language based PLMs using task adapters on code summarization in terms of time and accuracy. In using task adapters for code summarization on Ruby, we achieve an improvement of 16% in BLEU score, without including data from other languages. We then leverage language adapters to train language-specific modular components for each programming language on a _multilingual_ setting, to extract knowledge from the different languages for a particular low-resource language. Each language adapter is trained on a single programming language.
In the next step, we train adapter fusion, an attention-based adapter proposed in [13], on top of a stack of language adapters on a multilingual dataset. It's aim is to select the embeddings for a specific task from the language adapters (See Fig. 1). More specifically, when an input is passed through the network, we have the embeddings of that input from each language adapter, and the adapter fusion selects the embeddings that will maximize the objective function of the downstream task. In NLP, adapter fusions are mainly introduced as a non-destructive task composition for transfer learning [13] in which we have a stack of task adapters trained on different downstream tasks, and the adapter fusion will extract embeddings on a downstream task for a low-resource dataset.
However, adapter fusion tends to more strongly activate the language adapter that is in the same language as the input at every layer [13]; e.g., if we have a Ruby sample and our model consist of Ruby and Java language adapters as well as an adapter fusion on top of them, the adapter fusion tends to pay more attention to the Ruby language adapter rather than the Java language adapter. Therefore, **we propose AdvFusion, a variant of the adapter fusion for knowledge extraction from different programming languages and we show that our approach improves the performance of downstream tasks involving low-resource languages and decreases the fine-tuning computational cost for _all_ languages**. AdvFusion enforces the fusion layer to learn more from other languages for a given language, instead of focusing the learning from the language adapter that has the same language as the input. We finetune AdvFusion on the code summarization task in two phases. In the first phase, we disable the language adapter that corresponds to the language we expect the model to be fine-tuned on and enforce AdvFusion to learn from the other languages on the downstream task. For example, for code summarization on Ruby, we disable the Ruby language adapter and train AdvFusion to learn only from the other five languages that exist on CodeSearchNet. In the second phase, we enable the muted language adapter (i.e., Ruby language adapter in this example) and fine-tune AdvFusion on _all_ the language adapters. When AdvFusion is applied on multilingual fine-tuning, we achieved an improvement of 12% in BLEU score on code summarization over the current state-of-the-art multilingual fine-tuning on Ruby.
Overall, this paper makes the following contributions:
* Evaluation in performance of adapter fine-tuning on monolingual programming datasets.
* Proposed AdvFusion to address the knowledge interference (caused by the different programming languages in the multilingual dataset setting) by utilizing separate language adapters in the model.
The rest of the paper is organized as follows. In Section 2, we provide an overview of important background information and then introduce AdvFusion in Section 3. In Section 4 we provide the research questions and the details of our study, followed by the experimental setup in Section 5. Results are explained in Section 6 and they are further discussed in Section 7. Sections 8 and 9 are dedicated to the related works and threats to validity. Finally, we conclude the paper in Section 10.
## 2 Background
### _Transformers_
In recent years, transformer [18] architecture has become popular when designing neural language models [19, 20, 21, 22]. Transformers leverage attention mechanisms [18] to map a Query, denoted by \(Q\) and a set of Key-Value pairs, denoted by \(K\) and \(V\), respectively, to an output. Given \(Q\), the output is calculated as a weighted sum \(V\), where the weights are computed by the scaled-dot product of the Query with the corresponding \(K\)[18] as shown in Equation 1:
\[Attention(Q,K,V)=Softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{1}\]
where \(d_{k}\) refers to the dimension of keys.
Transformers are composed of two stacks of encoders and decoders. Each encoder has two sub-layers, a multi-head self-attention layer and a fully-connected feed-forward layer. Decoders consist of three sublayers: multi-head self-attention, multi-head encoder-decoder attention, and a fully-connected layer. There are two residual connections in the transformer's internal structure, one for the Multi-head attention and the other for the feedforward (FF) layer. The encoder provides the embedding for each token based on the other input tokens, and the decoders map these encodings/embeddings to the downstream task. Attention sublayers implement an attention mechanism that maps a query and a set of key-value pairs to an output. The output is a weighted sum of the values and the weight assigned to each value is computed by performing a dot product of a query and the corresponding key.
### _Pre-trained Language Models (PLMs)_
Language models are models pre-trained on a large corpus of unlabelled data using objective functions such as Mask Language Modeling (MLM) to learn the general representation of data. Once the PLMs are trained, they can be fine-tuned on a smaller dataset for a downstream task. In software engineering, CodeBERT [17] is a PLM pre-trained on programming languages and it is fine-tuned on several different downstream tasks in various software engineering studies[17, 2, 23].
### _Adapters_
In this study, we adopted several variants of adapters in our proposed approach. They are: task adapters, language adapters and adapter fusions [13].
Adapters are lightweight modules that are plugged into the internal structure of a pre-trained language model [15]. They are used as an alternative approach to fine-tune a pre-trained model for new downstream tasks [15, 24], and they are also used to avoid catastrophic forgetting [25, 26]. As they are lightweight in nature, they require lesser computational time and resources than the traditional fine-tuning process.
Consider \(\Theta\) as a representation of all the weights of a pre-trained model. By plugging adapter \(i\) into a pre-trained model, there is a new set of weights, \(\theta_{i}\), that corresponds to \(i\). For training \(i\), the weights of the pre-trained model (i.e., \(\Theta\)) are freezed and the fine-tuning of the model is done only through \(\theta_{i}\).
#### 2.3.1 Task Adapters
Task adapters' aim is to learn task-specific transformations by training their weights on a target task dataset [14]. Task adapters consist of a simple down- and up- projection combined with residual connections. Task adapter \(TA_{l}\) at layer \(l\) consists of a down-projection \(D\in R^{h\times d}\) where \(h\) is the hidden size of the transformer and \(d\) is the dimension of the adapter. Down-sampled representations are fed to a ReLU activation followed by an up-projection transformation \(U\in R^{d\times h}\) at each layer:
\[TaskAdapter_{l}(h_{l},r_{l})=U_{l}(ReLU(D_{l}(h_{l})))+r_{l} \tag{2}\]
where \(h_{l}\) and \(r_{l}\) are the hidden state and residuals at layer \(l\), respectively. Task adapters are trained on a set of labelled data for a downstream task, which in our case is code summarization. We adopted task adapters in RQ1.
#### 2.3.2 Language Adapters
Language adapters learn language-specific transformation by training their weights on an abstract objective function such as MLM [14]. Language Adapter \(LA_{l}\) at layer \(l\) have the same architecture as a task adapter. The internal structure of a language adapter consists of a down-projection \(D\in R^{h\times d}\) with a ReLU activation, followed by an up-projection \(U\in R^{d\times h}\):
\[LanguageAdapter_{l}(h_{l},r_{l})=U_{l}(ReLU(D_{l}(h_{l})))+r_{l} \tag{3}\]
In contrast to task adapters, language adapters are trained on unlabelled data of a language using MLM which encourages the model to learn embeddings for a specific language. Later, these embeddings either can be used as an input for task adapters or they can be placed in a parallel stack, together with an adapter fusion (see next sub-section) to extract knowledge for a downstream task. Language adapters are used in RQ2.
Fig. 1: Internal structure of the adapter fusion and language adapters. Adapter fusion is placed on top of the stack of trained language adapters. Output of language adapters are fed to the key and value in the adapter fusion and for each query, the adapter fusion chooses the weighted sum of language adapters.
#### 2.3.3 Adapter Fusion
Language adapters are introduced to extract language specific embeddings from the internal structure of a pre-trained language model based on an abstract objective function such as MLM to learn the general representations of a language. Adapter fusion aims to extract and compose the knowledge from the different language adapters for a downstream task such as code summarization. For example, given a set of \(N\) language adapters, the adapter fusion output is a weighted sum of language adapters outputs, while the weights of the pre-trained model ( \(\Theta\)) and that of the language adapters \((\theta_{1},...,\theta_{N})\) are fixed:
\[\Phi=\text{argmin}\ L(D;\Theta,\theta_{1},...,\theta_{N}) \tag{4}\]
where \(\Phi\) consists of \(Key_{l}\), \(Value_{l}\) and \(Query_{l}\) metrics at each layer \(l\), as shown in Fig. 2. At each transformer block, the output of the feed-forward sub-layer is taken to be as the \(Query\), and the output of each language adapter is used for both \(Key\) and \(Value\) vectors.
## 3 Adversarial Fusion Adapter
In this section, we explain the architecture of our approach, AdvFusion, before describing the proposed learning algorithm for it.
### _Architecture_
The adapter fusion can leverage the language adapter corresponding to the current input better [13], which means more attention is paid on the current input language. This is mainly due to the internal attention mechanism of the adapter fusion. Therefore, we propose a new architecture, AdvFusion, which requires the adapter fusion to not only learn from the current input language but also from other languages. Our approach consists of two training phases:
1. Adversarial training phase (see Fig. 3): (i) the weights of the language adapter of the corresponding input are set to zero, (ii) the weights of the pre-trained model and that of the other language adapters are frozen and (iii) the AdvFusion is trained on the entire dataset. This phase allows AdvFusion to only learn from the other languages.
2. Fine-tuning phase (see Fig. 4): (i) the trained weights of the language adapter corresponding to the current input language are restored, (ii) the weights of the pre-trained language model and that of the language adapters are fixed and (iii) the AdvFusion weights are fine-tuned.
The overall architecture of the AdvFusion is similar to adapter fusion except that the language adapters in this setting behave like switches, meaning that they ignore the irrelevant parts of the input.
### _Learning Algorithm_
In this section, we formalize the learning procedure for AdvFusions. Let \(\Theta\) denote the parameters of the pre-trained language model, and \(\theta_{i}\) to denote the parameters of the \(language_{i}\) language adapter. We introduce the \(\Psi\) parameters that learn to combine \(N\) language adapters embeddings to solve a downstream task. For the adversarial training phase, we formalize the problem as follows:
\[\Psi\leftarrow\underset{\Psi}{\text{argmin}}\ \sum_{m=1}^{N}L(D_{m};\Theta, \theta_{1},..,\theta_{m-1},\theta_{m+1},..,\theta_{N},\Psi) \tag{5}\]
where \(L\) is the loss function of the downstream task and \(D_{m}\) denotes the \(language_{m}\) dataset. In this step, AdvFusion learns to compose \(N-1\) language adapter embeddings at each training step.
In the second phase, we employ all the language adapters to train the \(\Psi\) parameters:
\[\Psi\leftarrow\underset{\Psi}{\text{argmin}}\ \sum_{m=1}^{N}L(D_{m};\Theta, \theta_{1},..,\theta_{N},\Psi) \tag{6}\]
As illustrated in Fig. 3, \(\Psi\) consists of _Key_, _Value_ and _Query_ parameters, denoted by \(K_{l}\), \(V_{l}\) and \(Q_{l}\) at transformer layer \(l\), respectively.
Let \(h_{l}\) denote the output of the feed-forward sub-component at layer \(l\) which is taken as the input, and \(z_{l,i}\) indicates the output of language adapter \(i\) at layer \(l\), which is considered as the input for both the _Key_ and
Fig. 2: Overview of the transformer architecture in our proposed framework. The pre-trained programming language adapters are placed after the second Add & Normalization layer inside each transformer. The output of the language adapters are fed into the adapter fusion as keys and values; and the output of the feed forward layer is sent to the adapter fusion as its queries.
_Value_ transformations at layer \(l\). We compute the output of AdvFusion, denoted by \(O_{l}\), as follows:
\[\begin{split} S_{l}&=\text{softmax}(h_{l}^{T}Q_{l} \otimes z_{l,n}^{T}K_{l})\\ z_{l,n}^{\prime}&=z_{l,n}^{T}V_{l}\\ Z_{l}^{\prime}&=[z_{l,0}^{\prime},...,z_{l,N}^{ \prime}]\\ O_{l}&=S_{l}^{T}Z_{l}^{\prime}\\ \end{split} \tag{7}\]
Where \(\otimes\) represents the dot product and \(n\in\{1,...,m-1,m+1,...,N\}\) for the adversarial training phase (first phase) and \(n\in\{1,...,N\}\) for the fine-tuning phase (second phase).
Given the embeddings of each language adapter (\(z_{n}\)), AdvFusion learns a weighted mixer of the available trained language adapters.
## 4 Research Questions & Study Design
In this section, we discuss the research questions and how we design experiments to conduct our research.
### _Research Questions_
In this study, we conduct experiments to answer the following research questions:
RQ1: _What is the performance of employing adapters in programming language based PLMs in terms of training time and model accuracy?_ In this research question, we investigate whether we can efficiently fine-tune a model using adapters inserted into the transformer blocks of a PLM while keeping the weights of the PLM fixed. It is shown that during the fine-tuning phase, if we encourage the weights to stay closer to the initial weights of the pre-trained model, we will avoid catastrophic forgetting and stabilize the fine-tuning process [27]. Therefore, we hypothesize that our model accuracy will outperform the standard fine-tuning process when adapters are used. Also, as we employ only a small set of weights to fine-tune the model, we hypothesize that the fine-tuning duration will decrease. The answer to this RQ will build the foundation for using adapters on low-resource languages, as we are able to learn efficiently separate language-specific weights for a downstream task.
RQ2: _Does using AdvFusion lead to performance improvement for low-resource programming languages?_
Constructing good-quality datasets for low-resource programming languages is a time consuming and challenging task. Previous studies have attempted to take advantage of other high-resource programming languages for low-resource languages [2, 3] on code summarization. In this RQ, we evaluate to what extent we can improve multilingual finetuning by reducing the knowledge interference between different programming languages and assess if our
Fig. 4: The fine-tuning phase of AdvFusion. If the current input belongs to a language \(k\), we reload the fine-tuned weights of \(lang_{k}\) adapter and allow the network to learn from all languages. The weights of the double border components are frozen in this phase.
Fig. 3: The adversarial training phase of AdvFusion. If the current input belongs to a language \(k\), then the weights of the \(lang_{k}\) adapter will be zero, and the adapter fusion will learn from the other language adapters. Dashed border indicates the components with zero weights, and thick borders demonstrate the fixed-weight components during this phase.
proposed approach, AdvFusion, can improve the performance of low-resource languages by selectively extracting, exploiting and aggregating the knowledge learned from other high-resource programming languages.
RQ3: _How much attention is put on a low-resource language from other languages?_
In this RQ, we perform an attention analysis to evaluate how much we can learn from other programming languages for a target (i.e., low-resource) language. More specifically, we calculate the participation of each language at every transformer block in CodeBERT by measuring the percentage of attention we get from each language adapter for each sample over a target language dataset.
### _Experiments Design_
For our study, the dataset and the downstream task are CodeSearchNet, which consists of six programming languages (Section 5.1), and code summarization, respectively. Code summarization refers to generating natural language description for a given code snippet [11]. Code summarization is chosen as it is a widely-used task in multiple studies and it enables us to evaluate the model's generation ability while focusing on both the natural language and the programming language modalities of the model [28, 17]. To conduct our experiments, we choose CodeBERT [17] as the backbone model to insert adapters. CodeBERT is selected as it is used in numerous software engineering studies previously [2, 3] Though CodeT5 has the state-of-the-arts results, it is not chosen in our experiment, because the pre-trained MLM version of CodeT5 is not available online.
**RQ1:** To answer _RQ1_, the aim is to fine-tune CodeBERT on code summarization without changing the model's pre-trained weights using task adapters. To achieve this, we fixed the weights of the pre-trained model, plugged the task adapter into the CodeBERT and fine-tuned its weights for code summarization. We refer to these models as CodeBERT\({}_{\text{TaskAdapter}}\). In this research question, we separately trained six task adapters, each for code summarization in one programming language. The results are compared to the traditional method of fine-tuning CodeBERT [17] on each of the programming languages for code summarization. As the task adapters and fine-tuning of the PLM are done on each programming language separately, we refer to this as _monolingual task adapter based fine-tuning_.
**RQ2:** In _RQ2_, we investigate if the performance of the model using adapters can be improved using a multilingual data. We refer to this setting as _multilingual adapter-based fine-tuning_. We first evaluate using adapter fusions where we learn from multiple langauge adapters for a target language. We refer to these models as CodeBERT\({}_{\text{Fusion}}\). To address the limitation in adapter fusions, in the second experiment for RQ2, we use our proposed approach, _AdvFusion_, to assess its effectiveness on low-resource languages and refer to our model as _AdvFusion_CodeBERT.
To train CodeBERT\({}_{\text{Fusion}}\), we first employ a separate set of weights for each programming language and train them separately while the weights of the pre-trained model are fixed. To achieve this, six language adapters are trained separately, each for one programming language in our dataset. These adapters are trained on MLM to capture the language-specific embeddings of each programming language. Although the pre-trained model has knowledge of all the seen programming languages (i.e., the languages used in its pre-training), when we fine-tune all the weights on an imbalanced multilingual dataset, the weights are affected mostly by the high-resource languages. To address this challenge, we adapted a pre-trained model and extend it with a parallel stack of trained language adapters. When a sample is fed into the model, the output of each adapter will be the network embeddings focusing on the corresponding language.
Then, we train an adapter fusion on top of them to select the set of attention of each language for each sample. This approach avoids fogetting the knowledge of the low-resource languages as for each language, we trained a separate set of weights. Furthermore, the high-resource language datasets do not affect the weights trained for the low-resource datasets. For this step, we use adapter fusion that aggregates the extracted embeddings from all the language adapters of each input. Adapter fusion has an attention architecture and learns how to focus its attention on the embeddings of its previous language adapters [13]. We train the adapter fusion on all the six languages by freezing the weights of the language adapters and the pre-trained model. Using this approach, we require the model to select the extracted embeddings from all the languages without deteriorating the language embeddings. The trained model here is referred to as _AdvFusion_CodeBERT and follows the training steps as explained previously in Section 3.2.
**RQ3:** For _RQ3_, we evaluated the contribution of each programming language for Ruby by analysing the attention. Ruby is chosen as it has the lowest number of samples in the dataset. This is calculated by feeding the Ruby test dataset to CodeBERT\({}_{\text{Fusion}}\). Then we aggregate all the attention scores from each language adapter in each layer and normalize (i.e., min-max normalization) the aggregated attention to obtain the percentage of each language contribution.
We repeat the above steps for _AdvFusion_CodeBERT to compare the CodeBERT\({}_{\text{Fusion}}\)'s and AdvFusion's ability to extract knowledge from other languages.
## 5 Experiment Setup
In this section, we describe the dataset, the model training process, the baseline models, and the evaluation metrics.
### _Dataset_
We use CodeSearchNet [10] for training the adapters. It consists of 6 programming languages and the size of each language is shown in table 1. We only use the bimodal dataset in our experiments.
### _Model Training_
The length of the source and target sequences in our setting are set to 256 and 128, respectively. We set the learning rate to \(10e-5\), and the batch size to 64 for the fine-tuning phase in all the experiments. As [13] has performed an extensive hyperparameter search analysis over adapters, we used the default settings for adapters' hyperparameters that has reported to have the optimal performance. All experiments are conducted on Nvidia Tesla V100 32GB GPU.
### _Baseline Models_
In RQ1 and RQ2, the experiments are conducted on monolingual and multilingual datasets, respectively. Different baseline models on evaluating monolingual and multilingual dataset settings are used due to the nature of the models. For example, the model, CodeBERT, was reported on monolingual dataset settings and thus, we only include it into our baseline models for benchmarking monolingual datasets [17]. For monolingual dataset settings (RQ1), seq2seq [29], Transformer [18], RoBERTa [30], CodeBERT [17], GraphCodeBERT [31], PLBART [32], ProphetNet-Code [33], and CodeSET [28] are considered. For multilingual dataset settings (RQ2), we compare the results with _Polyglot_GraphCodeBERT [2] and _Polyglot_CodeBERT [2], as these are the only models that are finetuned on multilingual dataset. _Polyglot_GraphCodeBERT fine-tunes GraphCodeBERT [31] on multilingual dataset from CodeSearchNet, and _Polyglot_CodeBERT fine-tunes CodeBERT [17] on multilingual dataset from CodeSearchNet.
### _Evaluation Metrics_
We evaluate the Code Summarization task using the BLEU [34] score which is a widely used metric in multiple natural language generation tasks. It provides a way to compare the generated summaries against the ground truth comments. BLEU is a precision-based metric and measures the n-gram geometric precision between the generated summary (i.e., n-gram hit) and the ground truth summary (i.e., total n-gram count) [34]. In this work, we calculate the smoothed BLEU-4 score as introduced in [35] where a count is added to the n-gram hit and total n-gram count for \(n>1\).
## 6 Results
_RQ1: Performance of adapter-based finetuning_
We compared the performance of fine-tuning using task adapters with the traditional approach of fine-tuning, on monolingual datasets.
The obtained BLEU scores and the training times are reported in Table II. Our proposed approach has fewer parameters and shorter training time as compared to the traditional way of fine-tuning for all the programming languages. The total number of trainable parameters in the encoder stack in the traditional fine-tuning process is \(\sim 62\)M, while for the adapter-based fine-tuning process in CodeBERT\({}_{\text{TaskAdapter}}\), the number of trainable parameters is \(\sim 0.89\)M. The training time is also reduced by \(\sim 30\%\) for all languages with CodeBERT\({}_{\text{TaskAdapter}}\). The main reason for the shorter training time is that in our proposed approach, we only need to train a decoder stack on top of CodeBERT, and not on the entire CodeBERT model.
Despite having fewer parameters and shorter training times, we observed a performance improvement. As shown in Table II, with CodeBERT\({}_{\text{TaskAdapter}}\), the BLEU score for Ruby is improved by \(17\%\) on average. Interestingly, for JavaScript and Go, we also observe an improvement, where Go has the highest score with 23.21 BELU score. Using CodeBERT\({}_{\text{TaskAdapter}}\), we achieve approximately the same results for high-resource languages, as obtained with CodeBERT.
Table IV demonstrates the results of other approaches for code summarization on CodeSearchNet. The dashed line separates the models with multilingual fine-tuning (i.e., _polyglot_GraphCodeBERT and _polyglot_CodeBERT for RQ2) from other models that have monolingual fine-tuning. As mentioned earlier, in RQ1, we consider CodeBERT\({}_{\text{TaskAdapter}}\) with models that are fine-tuned on a single language to have a fair comparison. The BLEU scores obtained using CodeBERT\({}_{\text{TaskAdapter}}\) are higher or on par with most models.
**Insight 1: For low-resource languages, adapter-based monolingual fine-tuning leads to a performance improvement and less computational resources as compared to fine-tuning the entire model.**
**Insight 2: Adapter-based monolingual finetuning on high-resource programming languages yields almost the same result with fewer computational resources than fine-tuning the entire model.**
_RQ2: Reducing knowledge interference in multilingual training for low-resource programming language_
In this RQ, we evaluate how much improvement we could gain by using other programming languages, while alleviating the knowledge interference in multilingual fine-tuning in programming PLMs. In the first experiment, we use adapter fusions and present the results of CodeBERT\({}_{\text{Fusion}}\), and in the second experiemnt, adapter fusion is replaced with AdvFusion. As shown in Table IV, CodeBERT\({}_{\text{Fusion}}\) outperforms the _polyglot_ models, which indicates that by fine-tuning the adapter fusion, we would be able to get a better result than fine-tuning the entire weights of the CodeBERT.
In Table III, we show the BLEU scores for multilingual CodeBERT (i.e., _polyglot_CodeBERT) [2] and _AdvFusion_CodeBERT. Note that both approaches use the same pre-trained model, but the fine-tuning strategy is different. The difference is that _polyglot_CodeBERT is fine-tuned on the entire model weights, while _AdvFusion_CodeBERT is only fine-tuned on the AdvFusion and language adapters' weights. With _AdvFusion_CodeBERT, we achieve \(12\%\) improvement for the results of Ruby. In this setting, we also observe that the results of JavaScript and Go are improved by 6% and 28%, respectively. The improvment in results of JavaScript and Go could be related to their amount of training data (see bimodal statistics in Table I). Though they have 3 to 6 times more training data than Ruby, they still have much fewer samples compared to the other three languages, Python, Java, and PHP. Therefore, they still can learn from these higher resource languages. The results for other languages, Python, Java, and PHP are on par with _polyglot_CodeBERT.
Note that _AdvFusion_CodeBERT is trained in two phases, the first phase is performed once for all the languages, and the second phase is done for each language that is shown in the last column of Table III. As an intuitive explanation for al
\begin{table}
\begin{tabular}{|c|c|c|} \hline Language & bimodal Data & unimodal Data \\ \hline Ruby & 52,905 & 164,048 \\ JavaScript & 143,252 & 1,857,835 \\ Go & 317,832 & 726,768 \\ Python & 458,219 & 1,156,085 \\ Java & 500,754 & 1,569,889 \\ PHP & 662,907 & 977,821 \\ \hline \end{tabular}
\end{table} TABLE I: Dataset used for training PLM [10]
leviating knowledge interference, _AdvFusion_CodeBERT will not be allowed to change the general representations we have learned for each language (in language adapters). Consequently, low-resource language representations will not be affected by high-resource datasets.
In terms of parameter efficiency, the _polylgol_CodeBERT encoder has 62M trainable parameters, while the total number of trainable parameters in _AdvFusion_CodeBERT is about 21M (a 66% reduction). Note that the total number of trainable parameters is 0.89M when we only use the task adapters, while it is 21M when we plug adapter fusion or AdvFusion into the model. This is due to the fact that language and task adapters have a simple feed-forward architecture, while adapter fusion and AdvFusion have additional attention mechanism.
In terms of training time, as shown in Table III, the average fine-tuning time for the entire weights of CodeBERT takes \(\sim 8\) hours, while the first phase in _AdvFusion_CodeBERT is performed once and takes around 3:25 hours. The second phase is done separately for each programming language, and the average fine-tuning time for all languages takes 4:27 in _AdvFusion_CodeBERT which is decreased by \(\sim 44\%\) compared to the whole weights training time.
**Insight 3: _AdvFusion_CodeBERT improves the performance on low-resource languages while having on par results on high-resource languages with shorter training time and fewer trainable parameters.
_RQ3: Languages' contribution for a low resource language_
Here, we evaluate the contribution of each language on Ruby when using AdvFusion and we also compared with adapter fusions. We extract the attention at AdvFusion and adapter fusion when we fine-tune the _AdvFusion_CodeBERT and CodeBERT Fusion, respectively (separate experiments). Fig. 5 demonstrates the contribution of each language at each layer in CodeBERT Fusion when the Ruby test dataset is fed into the fine-tuned model. The x-axis shows the attention score and the y-axis shows the contribution of each language at each layer in percentages. The color bars demonstrated the contributions from each of the six programming languages. In most layers, a high percentage of attention (more than 80%) is towards Ruby, the gray bar; which shows that the adapter fusion tends to pay more attention to the language adapter corresponding to the input language. Fig. 6 shows the contribution of each language in _AdvFusion_CodeBERT when Ruby test dataset is fed to the fine-tuned model. Y-axis is the layout number in CodeBERT, and x-axis shows the percentage of contribution of each language. Here, the AdvFusion pays more attention to other programming languages in comparison with adapter fusion in Fig. 5. For instance, Ruby learns more from Go in the second layer (i.e., \(52.9\%\) of attention are grabbed from Go adapter), and it learns more from Python in the forth layer (i.e., \(56.2\%\)). In layer seven, Ruby learns more from JavaScript. Even in the higher layers, learning from other languages is continued and the attention is distributed to other languages, and not only focusing on Ruby. More interestingly, Php is the most resourceful language in the dataset, but its contribution for Ruby is lesser than the other languages. Thus, no noticeable relationship is observed between the language dataset size and its contribution for Ruby.
**Insight 4: Low-resource languages could benefit from resourceful languages differently in different layers.**
**Insight 5: Higher resource languages do not necessarily contribute more on low-resource languages.**
## 7 Discussion
_When should we use adapters for monolingual fine-tuning?_ Based on our experiments in RQ1, we observed that adapter-based fine-tuning on high-resource languages are on par with the traditional way of fine-tuning but it is more computational efficient, and it improves the results of low-resource languages. Therefore, we recommend adapter-based fine-tuning for monolingual fine-tuning for code summarization.
_When should we consider knowledge interferece in multilingual fine-tuning?_ It is shown that multilingual fine-tuning
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \multirow{2}{*}{Language} & \multicolumn{2}{c|}{CodeBERT} & \multicolumn{2}{c|}{CodeBERT\({}_{\text{Task},\text{adapter}}\)} & \multirow{2}{*}{Improvement} & \multicolumn{2}{c|}{CodeBERT Training Time} & \multicolumn{2}{c}{CodeBERT\({}_{\text{Task},\text{Adapter}}\)} \\ & (\# trainable params: 62M) & (\# trainable params: 0.89M) & & (20000 steps) & Training Time (2000 steps) \\ \hline Ruby & 12.16 & 14.12 & 16\% & 8:12 & 5:28 \\ JavaScriptcript & 15.67 & 16.80 & -7:0\% & 8:13 & 5:44 \\ Go & 18.07 & 23.21 & 28\% & 8:31 & 5:36 \\ Python & 19.06 & 18.74 & -3:0\% & 8:13 & 5:25 \\ Java & 17.65 & 18.99 & 7\% & 8:14 & 5:41 \\ PHP & 25.16 & 25.25 & 0\% & 8:26 & 5:38 \\ \hline \end{tabular}
\end{table} TABLE II: Performance of task adapters for code summarization on monolingual fine-tuning setting. Note that the dashed line separates Ruby, a low-resource language, with the rest, and the second dashed line separates where improvement is achieved when adapters are used. CodeBERT and CodeBERT\({}_{\text{Task},\text{Adapter}}\) columns indicate the smooth BLEU-4 scores for these models. The last two columns specify the training hours in H:MM format.
Fig. 5: The attention contribution from each programming language at each layer when we feed the Ruby test dataset into the finetuned fusion model.
outperforms monolingual fine-tuning in both high and low-resource languages [2]. In RQ2, we studied when knowledge interference could affect multilingual fine-tuning. Based on the results shown in Table III, even though low-resource languages could benefit from diverse data from other programming languages, they could suffer from the differences in structures and semantics of other high-resource languages. Therefore, when we manage the learning between Ruby and the other programming languages in AdvFusion, the results for low-resource languages improves.
On the other hand, when we reduced the knowledge interference on fine-tuning a high-resource language, we observed lesser improvement as compared to the improvement we observed in low-resource languages (see Table III). Intuitively, this could be related to the fact that the structure and semantics could be preserved in a high-resource target language as the model is trained on a substantial amount of data for a high-resource language. When the AdvFusion strategy is used, it's effects on the learning from the high-amount of data is minimal and therefore does not improve the results. We recommend using AdvFusion for low-resource languages but for high-resource languages, it is only recommended when an alternative parameter efficient approach is desired and that a small performance drop is acceptable.
_Which languages could a low-resource language take advantage of in a multilingual setting?_ We have observed that using AdvFusion, Ruby could primarily benefit from Go, Python and javascript, as depicted in Fig. 6. This study does not focus on the syntactic or semantic similarities between the source and target programming languages but rather on which languages are most useful for a low-source language from the perspective of a fine-tuned multilingual model.
As an intuitive example to see how AdvFusion pays attention to different programming languages, we make a heatmap of attention by feeding a Ruby sample into the fine-tuned _AdvFusion_CodeBERT as demonstrated in Fig. 7. On the x-axis, we have a sequence of Ruby tokens fed into the fine-tuned model, and on the y-axis, the six programming languages of CodeSearchNet dataset are shown. The lighter color shows a higher attention. The heatmap depicts the attention we can gain from each programming language for each token of the followin sample.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline Models & Ruby & JavaScript & Go & Python & Java & PHP \\ \hline \(AdvFusion\)CodeBERT & **16.53** & **16.80** & **24.09** & 18.28 & 19.94 & 25.20 \\ CodeBERT\({}_{\text{Fusion}}\) & 15.77 & 16.22 & 24.01 & 18.40 & 19.85 & 25.17 \\ CodeBERT\({}_{\text{Task}Adapter}\) & 14.12 & 15.67 & 23.21 & 18.47 & 18.99 & 25.55 \\ \hline \(polylogd\)GraphCodeBERT [2] & 14.95 & 15.79 & 18.92 & 18.90 & 19.91 & 26.15 \\ \(polylogd\)CodeBERT [2] & 14.75 & 15.80 & 18.77 & 18.71 & 20.11 & **26.23** \\ CodeT5 [28] & 15.69 & 16.24 & 19.76 & **20.36** & **20.46** & 26.09 \\ ProphetNet-Code [33] & 14.37 & 16.60 & 18.43 & 17.87 & 19.39 & 24.57 \\ PLBAT [32] & 14.11 & 15.56 & 18.91 & 19.30 & 18.45 & 23.58 \\ GraphCodeBERT & 12.62 & 14.79 & 18.40 & 18.02 & 19.22 & 25.45 \\ CodeBERT & 12.16 & 14.90 & 18.07 & 19.06 & 17.65 & 25.16 \\ RoBERTa [30] & 11.17 & 11.90 & 17.72 & 18.14 & 16.47 & 24.02 \\ Transformer [18] & 11.18 & 11.59 & 16.38 & 15.81 & 16.26 & 22.12 \\ seq2seq [29] & 9.64 & 10.21 & 13.98 & 15.93 & 15.09 & 21.08 \\ \hline \end{tabular}
\end{table} TABLE IV: Smooth BLEU-4 scores on code summarization. CodeBERT\({}_{\text{Task}Adapter}\) is fine-tuned on the monolingual datasets (same as CodeBERT). If we encourage the model weights to be closer to the pre-trained model, we could improve the fine-tuning task without additional data.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline Language & \(polylogd\)CodeBERT & \(AdvFusion\)CodeBERT & Improvement & \(polylogd\)CodeBERT & \(AdvFusion\)CodeBERT \\ & (\# trainable params:62M) & (\# trainable params:21M) & Training Time & Training Time (20000 steps) & (phase1) phase2 \\ \hline Ruby & 14.75 & 16.53 & 12\% & 8.06 & (3.25) & 3.35 \\ JavaScript & 15.80 & 16.80 & 6\% & 8.05 & (3.25) & 3.48 \\ Go & 18.77 & 24.09 & 28\% & 8.07 & (3.25) & 4.16 \\ Python & 18.71 & 18.28 & 0\% & 8.03 & (3.25) & 4.35 \\ Java & 20.11 & 19.94 & 0\% & 8.04 & (3.25) & 3.53 \\ PHP & 26.23 & 25.20 & 0\% & 8.06 & (3.25) & 4.13 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Effectiveness of using AdvFusion for code summarization in the multilingual fine-tuning setting. The first dashed line separates Ruby from the other languages while the second dashed line separates the results where improvement is observed. _polylogd_CodeBERT and _AdvFusion_CodeBERT columns indicate the smooth BLEU-4 score for each language. The last two columns specify the training hours in H:MM format.
Fig. 6: The attention contribution from each programming language at each layer when we feed the Ruby test dataset to the finetuned AdvFusion model.
puts a + b end
The highest attention on the tokens are from the other language adapters other than Ruby; as observed, the attention from the Ruby adapter is low (note the Ruby-adapter row which is dark everywhere).
## 8 Related Work
In the past few years, there is a lot of effort on representing source code using deep learning models for different applications such as code generation [36, 37, 38], code summarization [39, 9, 40], program synthesis [41, 42, 43, 44], code search [45], and bug repair [46, 47]. A number of models are also released that are pre-trained on source code and/or code and comment with different objective functions, which are then fine-tuned on multiple downstream tasks [28, 17, 31] such as code summarization [17, 28, 23, 2]. Examples of these models are CodeBERT, a bimodal pre-trained model for programming languages and natural languages [17], GraphCodeBERT which considered the inherent semantic structure of code and extracted the data flow of code snippets [31], CodeT5, a unified pre-trained encoder-decoder transformer that leverages the code semantics from developer-assigned identifiers [28], PLBART, a sequence-to-sequence pre-trained model for both language understanding and generation tasks [32], and CodeGPT, a transformer-based architecture using only a decoder stack for generation tasks.
The programming language pre-trained models aim to develop general-purpose pre-trained models, which can be applied for a broad range of program understanding and generation tasks [36]. However, little research is focused on their transferability for low-resource languages in different software engineering tasks. The transferability of PLMs for code summarization and code search on Ruby is empirically studied in [3]. Others use few shot learning approaches to address the lack of data [48, 49, 50], design code/project related prompts [49], or study project-specific data for various models [51]. However, none of the current research aims to learn from other languages for a low-resource programming language.
In NLP, adapters were introduced to transfer the learned knowledge from a pre-trained model for a target task [15]. Adapters are a parameter-efficient alternative to fine-tuning on multiple tasks in a way that instead of training the whole weights for each task, few trainable parameters are inserted into the model's internal architecture and the pre-trained weights are fixed during fine-tuning [15]. Adapter fusions were introduced in [13] which have an attention-based architecture to extract knowledge from multiple downstream tasks for a low-resource downstream task. Though adapters have an extensive history in NLP [15, 13, 14, 52], they have not been studied for low-resource languages in the software engineering field. To the best of our knowledge, the only work is [16] where empirical studies are conducted to assess the transferability of natural language to code using adapters.
## 9 Threats to Validity
**External Validity** In this study, the downstream task (code summarization) and the programming languages (from CodeSearchNet) are restricted, and the results might not be generalizable to other downstream tasks and datasets.
**Internal Validity** Hyperparameters can affect the fine-tuning phase of a pre-trained model, and there is no hard rule to select the best values for these parameters. Therefore, the fine-tuned model might be sub-optimal. To alleviate this threat, we used the hyperparamters suggested in [13], as their work had performed an extensive hyperparameter search for adapters.
## 10 Conclusion and Future Works
In this work, we have evaluated the performance of adapters on the code summarization task on CodeBERT in terms of model accuracy and parameter efficiency in monolingual settings. We showed that by plugging our proposed adapters and enforcing the model to stay closer to the pre-trained model, the fine-tuned parameter is smaller and we could expect performance improvement without including any additional data. For the multilingual settings, we proposed AdvFusion, an architecture for extracting knowledge from different programming languages while reducing the effect of imbalanced datasets on low-resource languages and knowledge interference that we can experience in a multilingual training setting.
In future, we will apply the AdvFusion on other downstream tasks and pre-trained models. Another future line for this study would be to extract the knowledge from the different downstream tasks for a low-resource language.
## Acknowledgments
This research is support by a grant from Natural Sciences and Engineering Research Council of Canada RGPIN-2019-05175.
|
2303.04672 | Coherent errors and readout errors in the surface code | We consider the combined effect of readout errors and coherent errors, i.e.,
deterministic phase rotations, on the surface code. We use a recently developed
numerical approach, via a mapping of the physical qubits to Majorana fermions.
We show how to use this approach in the presence of readout errors, treated on
the phenomenological level: perfect projective measurements with potentially
incorrectly recorded outcomes, and multiple repeated measurement rounds. We
find a threshold for this combination of errors, with an error rate close to
the threshold of the corresponding incoherent error channel (random Pauli-Z and
readout errors). The value of the threshold error rate, using the worst case
fidelity as the measure of logical errors, is 2.6%. Below the threshold,
scaling up the code leads to the rapid loss of coherence in the logical-level
errors, but error rates that are greater than those of the corresponding
incoherent error channel. We also vary the coherent and readout error rates
independently, and find that the surface code is more sensitive to coherent
errors than to readout errors. Our work extends the recent results on coherent
errors with perfect readout to the experimentally more realistic situation
where readout errors also occur. | Áron Márton, János K. Asbóth | 2023-03-08T15:50:44Z | http://arxiv.org/abs/2303.04672v3 | # Coherent errors and readout errors in surface code
###### Abstract
We consider the combined effect of readout errors and coherent errors, i.e., deterministic phase rotations, on the surface code. We use a recently developed numerical approach, via a mapping of the physical qubits to Majorana fermions. We show how to use this approach in the presence of readout errors, treated on the phenomenological level: perfect projective measurements with potentially incorrectly recorded outcomes, and multiple repeated measurement rounds. We find a threshold for this combination of errors, with an error rate close to the threshold of the corresponding incoherent error channel (random Pauli-Z and readout errors). The value of the threshold error rate depends on how the logical-level errors are quantified: using the diamond norm, it is 3.1%, using for logical-level fidelity, it is 2.6%. Below the threshold, scaling up the code leads to the rapid loss of coherence in the logical-level errors, but error rates that are greater than those of the corresponding incoherent error channel. We also vary the coherent and readout error rates independently, and find that the surface code is more sensitive to coherent errors than to readout errors. Our work extends the recent results on coherent errors with perfect readout to the experimentally more realistic situation where readout errors also occur.
## 1 Introduction
The surface code[1, 2] is one of the most promising candidates for quantum error correction. For a code patch of distance \(d\), the collective quantum state of \(d^{2}\) physical qubits is used to store a single logical qubit. Incoherent errors (from entanglement of the physical qubits with a memoryless environment) can be modeled as random Pauli operators on the physical qubits. Repeated measurements of the parity check operators (a.k.a. stabilizer generators, to obtain the so-called syndrome) can be used to localize and correct such errors, with a success probability that increases as the code distance increases, \(d\rightarrow\infty\), as long as the error rates are below the so-called threshold.
For incoherent errors, the value of the threshold depends on the details of the error model, of the error correction (decoding) procedure, and on the level of detail in which the measurements are modeled (circuit-level or not). For simple cases, the threshold is known from mappings between the correction of incoherent errors on the surface code and phase transitions in classical Ising models[1, 3]: it is around 10% with perfect and around 3% with imperfect measurements. This mapping can be extended to some other error models and codes as well[4, 5]. For more complicated error models and circuit-level modeling the threshold can be obtained numerically, using efficient simulation in the Heisenberg picture[6, 7], and is around 0.75%. These threshold values not very far from current experimental reality: below 1% for quantum gates, and a few % for readout[8, 9].
The effect of coherent errors on the surface code is less well understood. These are errors modeled by nonrandom unitary operators acting on each physical qubit separately at each timestep. In the simplest case - the one we will also consider - these are phase rotations of the qubits with a fixed angle \(\theta\), i.e., \(e^{i\theta\vec{2}}\). Such errors model the effect of components becoming miscalibrated, inevitable in long calculations. Since coherent errors are not Clifford operations, their effects cannot be simulated efficiently in the Heisenberg picture[6]. Brute-force
simulations[10, 11], tensor network methods[12] or other efficient approximations[13], and the standard mappings to statistical physics models also break down (but note a recent extension of the mappings to a Majorana scattering network[14]).
One of the ways in which coherent errors are more complicated than incoherent errors is that they lead to coherent errors on the logical level as well. With coherent errors, the parity check measurements project the surface code into a random state, which even after error correction differs from the original state. For codes with an odd distance, this difference corresponds to a coherent rotation on the logical level. This logical-level coherence also makes the quantitative investigation of a quantum memory more complicated. Note, however, that scaling up the surface code leads to a "washing out" of coherence from the errors on the logical level. As shown analytically[15, 16], the random logical-level rotations correspond more and more to random Pauli noise as the code size is increasing. Note also, however, that the rate of this logical-level Pauli noise has not been computed analytically, and it can be considerably higher than what one would get by simply replacing coherent errors with their Pauli twirled counterparts[17, 11].
Coherent errors also seem to have a threshold, as shown numerically by Bravy et al[18]. They have simulated relatively large code sizes using a mapping to Majorana fermions. They have seen the "washing out the coherence", and found that the rate of the logical-level Z error is indeed significantly higher than those obtained by Pauli twirling the physical-level errors. Nevertheless, their numerics revealed an error threshold for coherent errors too: for \(\theta<\theta_{th}\approx 0.08\pi-0.1\pi\), the logical-level error rates decrease as the code is scaled up. This value of the threshold is very close to that of the Pauli twirled physical error channel: \(\sin(\theta_{th})^{2}\approx 0.09\).
In this paper we bring the results on coherent errors closer to experimentally relevant settling by considering them together with measurement errors. We consider the simplest kind of measurement errors, the so-called phenomenological error model: perfect projective von Neumann measurements of the parity check operators, with possibly incorrectly recorded measurement results. We use the numerical simulation approach based on the mapping to Majorana fermions[18, 19], but combine this with readout errors, and a corresponding 3D decoding.
This paper is structured as follows. In Sec. 2 we briefly introduce the most important concepts of the surface code, including error correction by minimum weight perfect matching on a 3D syndrome graph. In Sec. 3 we introduce the key points of the simulation method using fermionic linear optics, as pioneered by Bravyi et al[18]. In Sec. 4 we present our results on the combined effects of coherent and readout errors on the surface code. In Sec. 5 we conclude the paper with a discussion of our results.
## 2 Surface code
We briefly introduce the surface code as a quantum memory, storing a single logical qubit, in the rotated basis[10], with the patch encoding[20].
### Definition of the code space
A distance-\(d\) patch of the surface code (we always take \(d\) odd) consists of \(n=d^{2}\) physical (data) qubits arranged in a square grid. An example with \(d=5\) is shown in Fig. 1. The faces of the grid are colored in a checkerboard pattern, light (brown) and dark (blue). Extra boundary faces are also included, on all the edges, to ensure that top and bottom edges consist of only blue faces, while left and right edges of only light faces (smooth/rough boundaries[2]).
All faces of the grid correspond to parity check operators, which are the stabilizer generators of the code. These are products of Pauli operators on the qubits at the corners of the corresponding face. For each light (dark) face, \(\hat{Z}\) (\(\hat{X}\)) operators
Figure 1: A patch of a surface code with code distance \(d=5\).
are used, i.e.,
\[\hat{A}_{f} =\prod_{j\in\partial f}\hat{X}_{j}; \hat{B}_{f} =\prod_{j\in\partial f}\hat{Z}_{j}; \tag{1}\] \[\hat{S}_{f} =\hat{A}_{f}\text{ if }f\text{ dark}; \hat{S}_{f} =\hat{B}_{f}\text{ if }f\text{ light}. \tag{2}\]
They have eigenvalues \(\pm 1\), which correspond to even/odd parity of the corresponding group of qubits (in \(X\) or \(Z\) basis). Since any two faces share either no corners or two corners, all of these stabilizer operators commute.
The code space ("quiescent state"[2]) of the surface code is defined as the +1 eigenspace of all the stabilizers. This is a two-dimensional space spanned by the logical basis states,
\[\ket{0_{L}} =N_{d}\prod_{f\in\text{blue}}\frac{1}{2}(1+\hat{A}_{f})\ket{0}^{ \otimes n}; \tag{3}\] \[\ket{1_{L}} =N_{d}\prod_{f\in\text{blue}}\frac{1}{2}(1+\hat{A}_{f})\ket{1}^{ \otimes n}, \tag{4}\]
where \(\ket{0}^{\otimes n}\) and \(\ket{1}^{\otimes n}\) are the states where all physical qubits are \(\ket{0}\) and \(\ket{1}\) respectively, and \(N_{d}=2^{(d^{2}-1)/4}\) is a normalizing factor. Both logical basis states are highly entangled states of the physical qubits, and are locally indistinguishable from each other (all physical qubits have completely mixed density matrices). The encoding of a logical qubit is a mapping from a Hilbert space of dimension 2 to one of dimension \(2^{n}\),
\[\ket{\psi}=\alpha\ket{0}+\beta\ket{1}\rightarrow\ket{\psi_{L}}=\alpha\ket{0_ {L}}+\beta\ket{1_{L}}. \tag{5}\]
The logical (or "encoded") operators \(\hat{X}^{L}\) and \(\hat{Z}^{L}\) must fulfil \(\hat{X}^{L}\ket{0_{L}}=\ket{1_{L}}\), \(\hat{X}^{L}\ket{1_{L}}=\ket{0_{L}}\), \(\hat{Z}^{L}\ket{0_{L}}=\ket{0_{L}}\), \(\hat{Z}^{L}\ket{1_{L}}=-\ket{1_{L}}\). One possible choice of such logical operators is products of \(\hat{X}\), (\(\hat{Z}\)) on qubits along the left (top) edge,
\[\hat{X}^{L}=\prod_{j\in\text{LEFT}}\hat{X}_{j};\ \ \ \ \ \hat{Z}^{L}=\prod_{j\in\text{TOP}}\hat{Z}_{j}, \tag{6}\]
as shown in Fig. 1.
### Coherent errors and their detection
Of the various error processes by which the environment can affect the states of the physical qubits, we focus on coherent errors[18]. Here each physical qubit undergoes a fixed SU(2) unitary operation in every timestep, resulting from e.g. calibration errors in a quantum computer. For simplicity, we take this unitary to be the same for every qubit, specifically, a rotation about the Z axis through an angle \(\theta\) (noise parameter)[18]. Thus the unitary operator representing the effect of noise reads,
\[\hat{U}=\prod_{j=1}^{n}e^{i\theta\hat{Z}_{j}}. \tag{7}\]
The noise parameter \(\theta\) can be converted to a _physical error rate_\(p\), as
\[p=\sin^{2}(\theta). \tag{8}\]
This is the parameter of the dephasing channel obtained by Pauli twirling the coherent errors.
To detect and correct the coherent errors, we use the standard procedure of repeated measurements of the parity check operators. Each measurement results in a measurement outcome and a post-measurement quantum state. The string \(s\) of measurement outcomes is the _syndrome_,
\[\text{syndrome }s=(s_{1},\dots,s_{n-1}), \tag{9}\]
with all elements \(s_{f}\in\{+1,-1\}\). The (unnormalized) post-measurement state of the code can be obtained by a projection of the pre-measurement state,
\[\hat{\Pi}_{s}=\prod_{\forall f}\frac{1}{2}(1+s_{f}\hat{S}_{f});\ \ \ \ket{\Phi_{s}}=\hat{\Pi}_{s}\ket{\Phi}. \tag{10}\]
Note that since we consider only coherent errors that are \(Z\)-rotations, only the \(X\)-parity check measurements can return with a \(-1\) value.
### Error correction with perfect readout
Before we discuss readout errors, we need to briefly summarize how the errors would be corrected if readout was perfect.
If some of the parity check measurements have resulted in an outcome of -1, a _correction operation_\(\hat{C}_{s}\) is needed to bring the state back into the code space. Since the coherent errors only contain \(\hat{Z}\), this correction involves flipping some qubits in the \(X\) basis by \(\hat{Z}\), i.e.,
\[\hat{C}_{s}=\prod_{j\in l}\hat{Z}_{j}. \tag{11}\]
The set \(l\) of qubit indices is in a properly defined sense a 1-chain that connects the error locations
with each other or with the left or right edge [1], i.e., whose boundary (in a homological sense) is the set of faces where the measured parity check operators are \(-1\).
Deciding on the correction operator for a given syndrome is the solution of the decoding problem. There are many possible correction operators for any given syndrome, which fall into two homological equivalence classes: when multiplying two operators, if they are in the same class, we obtain a product of stabilizers, if they are in different classes, we obtain a logical \(\hat{Z}^{L}\) times a product of stabilizers. In principle, the likelihoods of the two classes (based on the error model) should be compared and any correction operator from the more likely class should be chosen. However, given the computational cost of the likelihood calculation, various approximate approaches, so-called decoders, have been developed [21, 22, 23]. We will discuss this in more detail after the introduction of readout errors.
The collective quantum state of \(n\) qubits after we measured syndrome \(s\), and applied the corresponding correction operator, reads
\[\ket{\Phi_{s}}=\frac{1}{\sqrt{P(s)}}\hat{C}_{s}\hat{\Pi}_{s}\hat{U}\ket{\psi_ {L}}. \tag{12}\]
This is in the code subspace. Moreover, because the coherent errors are only Z-rotations (and the code distance is odd[18]), this differs from the original state only by a logical Z-rotation:
\[\ket{\Phi_{s}}=e^{i\theta_{s}\hat{Z}^{L}}\ket{\psi_{L}}. \tag{13}\]
Interestingly the _logical rotation angle_\(\theta_{s}\) depends on the syndrome, but not on the initial state \(\ket{\psi_{L}}\)[18]; this is not the case for the surface code on some other types of lattices [19]. Moreover, the probability \(P(s)\) of measuring the syndrome \(s\) is also independent of the initial state \(\ket{\psi_{L}}\)[18].
### Quantifying logical errors
To characterize the effectiveness of error correction, we use two quantitative measures, the _diamond-norm distance_ of a channel to the identity, and the _maximum infidelity_ of the resulting state with the initial state[24]. These show how different the state of the encoded qubit is after the error correction process from the original state. In our case the average over syndromes of the diamond-norm distance can be expressed as [18]:
\[p_{L}^{d}=2\sum_{s}p(s)|\sin(\theta_{s})|, \tag{14}\]
and the average over the syndromes of the maximum infidelity as
\[p_{L}^{i}=\sum_{s}p(s)\sin^{2}(\theta_{s}). \tag{15}\]
### Readout errors, 3D syndrome
We take into account not only coherent errors on the physical qubits, but also readout errors distorting the result of the syndrome measurements. We consider the simplest, _phenomenological_ noise model for the readout [25]. Thus, we assume perfect syndrome measurements, whose outcome is unreliably recorded, with a readout error probability \(q\), i.e.,
\[P(1\to 0)=P(0\to 1)=q, \tag{16}\]
and correspondingly, \(P(1\to 1)=P(0\to 0)=1-q\). The obtained noisy syndrome is
\[\text{noisy syndrome:}\quad s\to s^{\prime}. \tag{17}\]
To solve the decoding problem in the presence of readout errors, we need to consider \(d\) consecutive rounds of syndrome measurements[2, 1]. Since errors occur between the rounds of syndrome measurements, the rounds of measurement outcomes differ from each other even if the measurements are perfect. The \(d\) rounds of syndromes constitute a _3D syndrome_, which without/with readout errors is
\[\underline{s}=\{s_{1},s_{2},...,s_{d}\}\rightarrow\underline{s}^{\prime}=\{s ^{\prime}_{1},s^{\prime}_{2},...,s^{\prime}_{d}\}. \tag{18}\]
For error correction we have to solve the decoding of this 3D syndrome with some decoding technique, obtaining a correction operator \(\hat{C}_{\underline{s}^{\prime}}\). We will detail the decoding in the next Section.
We can express the final state of the code for a measured 3D syndrome \(\underline{s}^{\prime}\), where the corresponding 3D syndrome without readout error was \(\underline{s}\), as
\[\ket{\Phi_{\underline{s},\underline{s}^{\prime}}}=\frac{1}{\sqrt{P(\underline {s})}}\hat{C}_{\underline{s}^{\prime}}\hat{\Pi}_{s_{d}}\hat{U}\ldots\hat{\Pi }_{s_{1}}\hat{U}\ket{\psi_{L}}. \tag{19}\]
Using \(\hat{U}=\prod_{j=1}^{n}\exp(i\theta\hat{Z}_{j})\), this can be rewritten (more details in the next section) as
\[\ket{\Phi_{\underline{s},\underline{s}^{\prime}}}=\hat{C}_{\underline{s}^{ \prime}}\hat{C}_{s_{d}}e^{i\theta^{*}(\underline{s})\hat{Z}^{L}}\ket{\psi_{L} }. \tag{20}\]
Here the rotation angle \(\theta^{*}(\underline{s})\) only depends on the noiseless 3D syndrome \(\underline{s}\), but not on the readout errors, nor on the initial state \(\ket{\psi_{L}}\).
### Error correction with readout errors
To correct errors based on the noisy 3D syndrome contaminated with readout errors we used the 3D version of the minimum weight perfect matching (MWPM) decoder[23], as implemented in PyMatching[26]. In this 3D case, like in the case with perfect measurements, errors are associated with marked vertices on a grid, we need to find the set of edges on the grid with the smallest weight that pair the vertices up or connect them to the right/left boundaries.
The grid here is 3-dimensional, with "space" coordinates (\(d\times d\) grid) giving the position of the measured stabilizer operator and "time" coordinates (running from 2 to \(d\)) corresponding to the measurement round. Those vertices are marked where the measured stabilizer value differs from that measured in the previous round. "Spacelike" and "timelike" edges on the grid correspond to readout errors and physical errors. These carry different weights (\(w_{s}\) and \(w_{t}\)), since the rate \(p\) of coherent errors can differ from the rate \(q\) of readout errors,
\[w_{s}=\log\left(\frac{1-p}{p}\right);\quad w_{t}=\log\left(\frac{1-q}{q}\right). \tag{21}\]
The MWPM decoder finds the set of edges with the smallest overall weight, which perfectly connect the marked vertices (with each other, or with the left or right boundaries). The set of edges with the minimum weight is used to define the correction operator, which consists of a string of \(\hat{Z}\) operators. Spacelike edges correspond to \(\hat{Z}\) operators, however timelike edges have no physical meaning, they are just virtual corrections of readout errors. A technical note: to ensure the correction operator brings the state back to the code space, the last measurement round is assumed to be free of readout errors.
The final state, after the error correction operator has been applied, cf. Eq. (20), reads
\[\ket{\Phi_{\underline{s},\underline{s}^{\prime}}}=e^{i\theta_{L}(\underline{ s},\underline{s}^{\prime})\hat{Z}^{L}}\ket{\psi_{L}}. \tag{22}\]
The logical rotation angle \(\theta_{L}(\underline{s},\underline{s}^{\prime})\) depends on perfect and noisy 3D syndrome too, as
\[\begin{array}{rcl}\theta_{L}=\theta^{*}&\leftarrow&\hat{C}_{\underline{s}^{ \prime}}\hat{C}_{s_{d}}\ket{\psi_{L}}=\ket{\psi_{L}};\\ \theta_{L}=\theta^{*}+\frac{\pi}{2}&\leftarrow&\hat{C}_{\underline{s}^{ \prime}}\hat{C}_{s_{d}}\ket{\psi_{L}}=\hat{Z}^{L}\ket{\psi_{L}}.\end{array} \tag{23}\]
Here the property that the operator \(\hat{C}_{\underline{s}^{\prime}}\hat{C}_{s_{d}}\) acts like a logical Z-operator or an identity is guaranteed by the constraint of perfect measurements in the last round of error correction.
The average diamond-norm distance and maximum infidelity can be expressed as:
\[P_{L}^{d}=2\sum_{\underline{s},\underline{s}^{\prime}}P( \underline{s})P(\underline{s}\rightarrow\underline{s}^{\prime})|\sin(\theta _{L}(\underline{s},\underline{s}^{\prime}))|; \tag{24}\] \[P_{L}^{i}=\sum_{\underline{s},\underline{s}^{\prime}}P( \underline{s})P(\underline{s}\rightarrow\underline{s}^{\prime})\sin^{2}( \theta_{L}(\underline{s},\underline{s}^{\prime})). \tag{25}\]
## 3 Fermionic Linear Optics Simulation
We simulate quantum error correction by sampling the random outcomes of the syndrome measurements, and the final state of the logical qubit after the error correction. This can be summarized in the following steps:
1. Generate a sample of the 3D syndrome \(\underline{s}\) - a sequence of \(d\) syndrome measurement rounds - from the probability distribution \(P(\underline{s})\).
2. Calculate the corresponding rotation angle of the logical qubit, \(\theta^{*}(\underline{s})\).
Figure 2: Specific example of MWPM decoding method on a code patch with \(d=3\). Stabilizer measurement outcomes are represented on a 3D grid, \(\pm 1\) outcomes as white/grey circles. Vertices, where the measured value differs from the previous round, are marked with red circles. The minimum weight set of edges, which perfectly connects the marked vertices denoted by green color. This set of edges force the correction including 2 \(\hat{Z}\) operators.
3. Generate readout errors, i.e., noisy syndrome \(\underline{g}^{\prime}\) from probability distribution \(P(\underline{g}\rightarrow\underline{s}^{\prime})\).
4. Calculate whether the rotation angle of the logical qubit is changed as a result of the readout errors, i.e., whether \(\theta_{L}=\theta^{*}+\pi/2\) or \(\theta_{L}=\theta^{*}\), according to Eq. (23)
Steps 3. and 4. here are straightforward, they involve changing measured values of stabilizers with readout error probability \(q\), and decoding using MWPM. Steps 1. and 2., however, can only be computed efficiently using Fermionic Linear Optics tools, and subtle tricks, as recently introduced by Bravyi et al [18]. We only introduce some of the main concepts of the method of Bravyi et al. here, and give a brief summary in an Appendix; see [18, 19] for more details. We describe how the method can be extended to the sampling of repeated syndrome measurements.
### Defining Majoranas for the qubits
To make use of the tools of fermionic linear optics, we introduce a four-dimensional Hilbert space for each qubit, and four Majorana operators (Majoranas) acting in this Hilbert space. The Majoranas for the \(m\)-th qubit are denoted by \(\hat{c}_{j}^{(m)}\), with \(j=1,2,3,4\). They are similar to fermionic operators, in that different Majoranas anticommute; however, all Majoranas square to the identity, \(\hat{c}_{j}^{(m)}\hat{c}_{l}^{(m^{\prime})}+\hat{c}_{l}^{(m^{\prime})}\hat{c}_ {j}^{(m)}=2\delta_{jl}\delta_{m^{\prime}m}\). Using the so-called C4 code [18, 27], the Pauli operators acting on the qubit are represented using Majoranas as
\[\hat{X}_{m} =i\hat{c}_{1}^{(m)}\hat{c}_{2}^{(m)},\quad\hat{Z}_{m}=i\hat{c}_{2 }^{(m)}\hat{c}_{3}^{(m)};\] \[\hat{Y}_{m} =i\hat{c}_{3}^{(m)}\hat{c}_{1}^{(m)}. \tag{26}\]
These operators fulfil the commutation relations expected of the Pauli operators.
The Majoranas require a Hilbert space that is larger than that of the qubit itself. In fact, since above the Pauli operators were represented by products of two Majoranas, all states of the qubit are represented in so-called fixed-parity subspaces. We work on the subspace defined as \(+1\) eigenspace of the C4 stabilizer,
\[\hat{S}^{(m)}=-\hat{c}_{1}^{(m)}\hat{c}_{2}^{(m)}\hat{c}_{3}^{(m)}\hat{c}_{4} ^{(m)}. \tag{27}\]
The advantage of introducing Majoranas is that initialization of the code, coherent errors, and sampling the measurement statistics of the stabilizers can all be mapped to free time evolution of
The main idea of the FLO approach is that we can work with the covariance matrix of the Majorana operators. Therefore instead of simulating the state vector of \(d^{2}\) qubits, with \(2^{d^{2}}\) elements, it is enough to keep track of the covariance matrix with \((2d)^{4}\) elements. With proper transformations of the covariance matrix, which forced by free fermionic time evolution and measurement of Majorana pairs, we are able to sample \(\theta^{*}(\underline{s})\) from the distribution \(P(\underline{s})\) in \(\mathcal{O}(d^{4})\) time.
We have extended the original simulation method for coherent errors [18], to the case of simultaneous coherent and readout errors. The key observation is that multiple rounds of coherent errors and stabilizer measurements can be decomposed into single rounds of inhomogeneous coherent errors, (when the physical rotation angle \(\theta\) can be different for each physical qubit).
Starting from Eq. (19), we are able to write it a slightly different way by inserting identities in the form \(\hat{C}_{s_{j}}\hat{C}_{s_{j}}\),
\[\begin{split}\ket{\Phi_{\underline{s},\underline{s}^{\prime}}} =\frac{1}{\sqrt{P(\underline{s})}}\hat{C}_{\underline{s}^{\prime}}\hat{C}_{s _{d}}\hat{C}_{s_{d}}\hat{\Pi}_{s_{d}}\hat{U}\hat{C}_{s_{d-1}}\\ \hat{C}_{s_{d-1}}\hat{\Pi}_{s_{d-1}}\hat{U}...\hat{C}_{s_{1}}\hat {C}_{s_{1}}\hat{\Pi}_{s_{1}}\hat{U}\ket{\psi_{L}}.\end{split} \tag{28}\]
Furthermore each round can be written in the form of Eq. (12), with \(\hat{U}\) replaced by inhomogeneous error operators \(\hat{U}_{j}=\hat{U}\hat{C}_{s_{j-1}}\), and the normalization factor by \(1/\sqrt{P(s_{j})}\). Based on Eq. (13), we can express each round as a rotation about the Z axis,
\[\frac{1}{\sqrt{P(s_{j})}}\hat{C}_{s_{j}}\hat{\Pi}_{s_{j}}\underbrace{\hat{U }\hat{C}_{s_{j-1}}}_{\hat{U}_{j}}\ket{\psi_{L}}=e^{i\theta_{j}^{L}\hat{Z}^{L} }\ket{\psi_{L}}, \tag{29}\]
where the logical rotation angle \(\theta_{j}^{L}\) depends on all the previously measured syndromes \((s_{1},s_{2},..,s_{j})\).
Finally one can write the final state for perfect syndrome \(\underline{s}\), and noisy syndrome \(\underline{s}^{\prime}\) as
\[\ket{\Phi_{\underline{s},\underline{s}^{\prime}}}=\hat{C}_{\underline{s}^{ \prime}}\hat{C}_{s_{d}}e^{i\theta^{*}(\underline{s})\hat{Z}^{L}}\ket{\psi_{L} }, \tag{30}\]
where the rotation angle \(\theta^{*}(\underline{s})\) can be calculated from sampling single rounds of error correction with perfect syndromes and inhomogeneous coherent errors,
\[\theta^{*}(\underline{s})=\sum_{j=1}^{d}\theta_{j}^{L}(s_{1},s_{2},...,s_{j}). \tag{31}\]
## 4 Numerical results
We used the fermionic linear optics method to simulate the surface code under coherent and readout errors, for code sizes up to \(d=19\). We sampled the logical rotation angle distribution, from which we computed - for the most susceptible initial states - both the expecation value of the infidelity and the diamond norm distance to the identity channel. As code sizes were scaled up, we found threshold behaviour with both types of error measures (although somewhat less convincingly with the diamond norm). In case the rates of coherent and readout errors were equal, \(p=q\), we found that the threshold is close to the corresponding threshold of random Pauli Z + readout errors. Our results here are similar to those with perfect measurements by Bravyi et al.[18].
We also investigated how, below the threshold, the logical error rates compare to those of the random Pauli Z + readout errors, and how the residual coherence in the logical error decreases as the code size is scaled up. We again find similar results to those with perfect measurements by Bravyi et al.[18]. Varying the rates of the two error processes independently, we mapped out the threshold on the \((p,q)\) plane, and found that coherent errors are more critical than readout errors: to achieve scalable error correction it is easier to compensate a high value of readout errors (at or above 10%) by reducing the rate of coherent errors than vice versa.
### Threshold with equal coherent and readout errors
We ran extensive simulations to estimate the error threshold when the readout error rate \(q\) is set equal to the coherent error rate \(p\). For every odd value of code distance \(d\), up to \(d=19\), to obtain the numerical distribution of logical rotation angles, we sampled the noiseless 3D syndrome measurements 5000 times (5000 \(d\) rounds), and then sampled 100 noisy syndromes from each of these. In Fig. 3, we show the resulting average diamond norm distance and the average logical error rate,
Figure 3: Error threshold as the code size is scaled up with coherent and readout error rates equal (\(p=q\) is the “physical error rate”). The numerically obtained diamond-norm distance (a), and the maximum infidelity (logical error rate) (b) show threshold behaviour as functions of the physical error rate. Every point results from \(5000\times d\times 100\) rounds of simulation.
calculated via Eqs. (24). Using both measures, we observe that for errors below a threshold, scaling up the code size decreases the logical errors, while above the threshold, scaling up only makes things worse by increasing the logical errors.
To obtain a precise value of the threshold, we fitted the numerical values using a finite size scaling ansatz[3], based on mapping of the surface code to statistical physics models[1]. Although the ansatz is strictly expected to work for random Pauli+readout errors, it also fits our numerics (coherent+readout errors), albeit less convincingly for the case of the diamond norm. The threshold values are
\[p_{th}^{d}=3.10\%\pm 0.05\%;\qquad p_{th}^{i}=2.62\%\pm 0.02\%. \tag{32}\]
The threshold via the diamond norm is significantly higher than that via the infidelity, but both are relatively close to the threshold of random Pauli + readout errors[3], which for the toric code is \(p_{th}=2.93\%\pm 0.02\%\).
### Sub-threshold comparison with random Pauli+readout errors
We next compare the performance of the surface code under coherent+readout errors with its performance under random Pauli+readout errors, when error rates are below threshold. We use the same parameter \(p\) for the two channels, with the random Pauli channel being the Pauli twirled version of the coherent error channel,
\[\varepsilon_{p}^{twirl}(\hat{\rho})=\cos^{2}(\theta)\hat{\rho}+\sin^{2}(\theta )\hat{Z}\hat{\rho}\hat{Z}. \tag{33}\]
The _Pauli twirl ratio_ is the ratio of logical error rate in case of coherent errors+readout errors, and the logical error rate when the coherent errors are replaced by their Pauli twirled version. We denote this latter quantity with \(P_{L}^{d}(\varepsilon_{p}^{twirl})\) and \(P_{L}^{i}(\varepsilon_{p}^{twirl})\) when the diamond norm and the infidelity are used, respectively.
The numerically obtained values of the twirl ratio, shown in Fig. 4, indicate that below the threshold, coherent errors + readout errors lead to higher logical error rates than random Pauli + readout errors (high Pauli twirl ratio). Moreover, this difference grows as we scale up the size of the code. Interestingly, there is a threshold-like behavior of the Pauli twirl ratio, which is independent of code distance for \(p\approx 3.3\%\), and decreases with code distance for \(p\) above. However, the value \(3.3\%\) is not the threshold of the code.
One of the key findings of Bravyi et al [18] is that quantum error correction "washes out" coherence in the logical level in the coherent errors (without readout errors). As the code distance increases, the distribution of logical rotation angles becomes more and more highly peaked around \(0\) and \(\pi/2\), thus, the logical noise is better and better approximated by random Pauli process. This property of quantum error correction has also been studied analytically[15, 16]. A practical quantity to study this effect is the _coherence ratio[18]_, defined as
\[\frac{P_{L}^{d}}{P_{L}^{d,twirl}}=\frac{\sum_{\underline{s},\underline{s}^{ \prime}}p(\underline{s})p(\underline{s}\rightarrow\underline{s}^{\prime})| \sin(\theta_{L})|}{\sum_{\underline{s},\underline{s}^{\prime}}p(\underline{s })p(\underline{s}\rightarrow\underline{s}^{\prime})\sin^{2}(\theta_{L})}. \tag{34}\]
The coherence ratio is always greater than or equal to one, equality holds if \(\theta_{L}\) only takes values \(\{0,\pi/2\}\), i.e., if the logical noise is fully incoherent (probabilistic logical Z errors).
One would expect that this "washing out" of coherence is, if anything, made even stronger by the readout errors. Our numerics confirms this intuition. In Fig. 5 we show the coherence ratio of different code sizes with physical error rate and readout error rate set equal. We find that the coherence ratio decreases as the code is scaled up, even above the threshold. Around the threshold, \(p=q\approx 3\%\), practically all coherence is lost on the logical level, for code sizes \(d=17\) and above.
Figure 4: Pauli twirl ratio (for maximum infidelity) as functions of physical error rate (equal to readout error rate), for different code distances. Results obtained from the same data as in Fig. 3 and Monte Carlo simulations of incoherent noise.
Interestingly though, the coherence ratio appears to increase as the physical error rate is decreased from the threshold. Without readout errors[18], all of these qualitative trends are there, but the coherence ratio is \(1.1\) at the threshold even for a code size \(d=37\).
### Independent coherent and readout errors
We have also investigated how varying the rate \(p\) of coherent errors and \(q\) of readout errors independently affects the threshold of the surface code. For many pairs of \(p\) and \(q\) we numerically ascertained whether scaling up the surface code decreases logical error rates (scalable QEC) or it increases them (unscalable QEC). The threshold should be in between these regions. We could not determine the threshold values more precisely, since the fitting ansatz we used for \(p=q\) turned out to be a poor fit in many of the cases with asymmetric noise.
Our results, shown as a 2D map in Fig. 6, show that the surface code is more sensitive to coherent errors than to readout errors. If the coherent error rate is on the percent level, the surface code is quite robust against readout errors, scalable even with relatively high \(q\approx 7\%\). However, if the readout error rate is on the percent level, the surface code still requires the coherent error rate to be below \(3\%\).
## 5 Discussion and outlook
We investigated numerically how well the surface code works as a quantum memory when there are coherent errors on the physical qubits as well as readout errors (phenomenological readout error model). We focused on a restricted class of coherent errors, namely, unitary phase rotations, \(e^{i\theta Z_{j}}\). This allowed us to use the theoretical tools of Fermionic Linear Optics, as applied to the surface code by Bravyi et al[18]. We extended that work by including readout errors as well, on a phenomenological level (perfect measurements, noisy recording of measurement results).
Our results show that the findings of Bravyi et al[18] on the effects of coherent errors on the surface code mostly carry over to when readout errors also occur. Namely, the surface code with coherent+readout errors has a threshold, which is close to that of the corresponding Pauli twirled error channel (random Pauli+readout errors). However, for error rates below the threshold, its logical error rates are significantly higher than that of the Pauli twirled error channel. Scaling up the code size, coherence is washed out from the logical error. Moreover, we found that
Figure 5: Coherence ratio as a function of error rate (\(p=q\)) with different code distances. Each point results from \(5000\times d\times 100\) rounds of simulation.
Figure 6: Threshold lines on the \(\left(p,q\right)\) plane for maximum infidelity (logical error rate). For a fix \(q\) value we run the simulation for \(16\) different \(\theta\) values, and determined a lower and an upper bound for thresholds in each cases. The lower bound is the last point where the maximum infidelity is decreasing as the code distance is increasing. The upper bound however is the first point where the logical error rate is increasing as the distance is increasing. We numerically investigated the incoherent case via Monte Carlo simulations, and determined the threshold values for asymmetric \(p,q\) values with the fitting ansatz[3].
having a low value of the coherent errors is more important than a low value of readout errors (high readout error rates can be compensated by low coherent error rates, but less so vice versa).
A point that is worth further investigation is the differences in our results when using the diamond norm or the fidelity as quantitative measures of the reliability of quantum memory. Although they gave qualitatively similar results, they were quantitatively different: e.g., the value of the threshold was significantly higher (by 0.5%) when using the diamond norm.
It would also be interesting to consider broader classes of coherent errors. A next step would be to consider coherent error parameters \(\theta\) that are not constant, but vary from qubit to qubit or even fluctuate (this latter case modeling the combination of coherent and incoherent Z errors). A numerically more challenging question is how our results would be changed if even the axis of coherent rotation varied from qubit to qubit (not \(Z\) for all qubits as in our work) - unfortunately here the tools Bravyi et al.[18] do not apply. Even more challenging is to bring the error model closer to experimental reality, by modeling coherent errors on the circuit level. For this case, recent theoretical work[28] using a mapping to three dimensional lattice gauge theory seems to suggest that when combined with incoherent errors, coherent errors ruin the threshold: even with arbitrarily small error rates, scaling the code size up beyond a certain size will increase noise the logical level.
## Acknowledgements
This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office (NKFIH) within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004), and by the NKIFH within the OTKA Grant FK 132146. This research has been supported by the Horizon Europe research and innovation programme of the European Union through the IGNITE project.
|
2306.06863 | Flux evolution of superluminal components in blazar 3C454.3 | The kinematic behavior of superluminal components observed at 43 GHz in
blazar 3C454.3 were model-fitted with their light curves interpreted in terms
of their Doppler-boosting effect. The relation between the flux evolution of
superluminal components and their accelerated/decelerated motion or the
increase/decrease in their Lorentz/Doppler factor was investigated. The
precessing jet-nozzle scenario previously proposed by Qian et al. (1991, 2018a,
2021) and Qian (2018b, 2022a, 2022b) was applied to consistently model-fit the
kinematic behavior and light curves for two superluminal components (B4 and B6)
measured by Jorstad et al. (2005). For both B4 and B6 which were ascribed
respectively to the jet-A and jet-B of the double-jet structure assumed for
3C454.3, their kinematic features were well model-fitted with their bulk
Lorentz factor and Doppler factor (as function of time) convincingly derived.
It is shown that the light curves of the radio bursts associated with knot B4
and knot B6 can be well explained in terms of their Doppler boosting effect.
Similarly, for the knot R3 observed at 15GHz (Qian et al. 2014, Britzen et al.
2013) the interpretation of its kinematic behavior and light curve is presented
in the Appendix. We emphisize that the interpretation of the flux evolution of
superluminal components combined with the model-fit of their kinematics is
important and fruitful. This kind of combined investigation not only can
greatly improve the model-simulation of their kinematics with properly
selecting the model parameters (especially the bulk Lorentz factor and Doppler
factor as function of time), but also their light curves can be well
interpreted in terms of their Doppler-boosting effect. Therefore, we can almost
completely (or perfectly) understand the physical nature of these components:
their kinematic/dynamic characteristics and emission properties. | S. J. Qian | 2023-06-12T04:41:05Z | http://arxiv.org/abs/2306.06863v1 | # Flux evolution of superluminal components in blazar 3C454.3
###### Abstract
Context:The kinematic behavior of superluminal components observed at 43GHz in blazar 3C454.3 were model-fitted with their light curves interpreted in terms of their Doppler-boosting effect.
Aims:The relation between the flux evolution of superluminal components and their accelerated/decelerated motion or the increase/decrease in their Lorentz/Doppler factor was investigated.
Methods:The precessing jet-nozzle scenario previously proposed by Qian et al. (1991, 2018a, 2021) and Qian (2018b, 2022a, 2022b) was applied to consistently model-fit the kinematic behavior and light curves for two superluminal components (B4 and B6) measured by Jorstad et al. (2005).
Results:For both B4 and B6 which were ascribed respectively to the jet-A and jet-B of the double-jet structure assumed for 3C454.3, their kinematic features were well model-fitted with their bulk Lorentz factor and Doppler factor (as function of time) convincingly derived. It is shown that the light curves of the radio bursts associated with knot B4 and knot B6 can be well explained in terms of their Doppler-boosting effect. Similarly, for the knot R3 observed at 15 GHz (Qian et al. 2014, Britzen et al. 2013) the interpretation of its kinematic behavior and light curve is presented in the appendix.
Conclusions:We emphasize that the interpretation of the flux evolution of superluminal components combined with the mode-fit of their kinematics is important and fruitful. This kind of combined investigation not only can greatly improve the model-simulation of their kinematics with properly selecting the model-parameters (especially the bulk-Lorentz factor and Doppler factor as functions of time), but also their light curves can be well interpreted in terms of their Doppler-boosting effect. Therefore, we can almost completely (or perfectly) understand the physical nature of these components: their kinematic/dynamic characteristics and emission properties.
## 1 Introduction
3C454.3 (redshift z=0.859) is one of the most prominent blazars, radiating across the entire electromagnetic spectrum from radio/mm through IR/optical/UV and X-ray to high-energy \(\gamma\)-rays with strong variability at all the wavebands on various timescales. For example, in May 2005 its optical flaring activity reached an unprecedented level with the R-band magnitude \(m_{R}\)\(\sim\)12 mag, an extremely bright state (Raiteri et al. 2007, Villata et al. 2006). During the time-interval 2007-2010 3C454.3 underwent an exceptionally strong \(\gamma\)-ray activity period (Vercellone et al. 2009, 2011): it was the brightest \(\gamma\)-ray source in the sky on 2009 December 2-3 and 2010 November 20, a factor of \(\sim\)2 and \(\sim\)6 brighter than the Vela pulsar (Vercellone et al. 2010), respectively. Particularly important, both \(\gamma\)-ray flares were associated with the ejection of superluminal components observed by VLBI-observations (Jorstad et al. 2013).
3C454.3 is an extremely variable flat-spectrum radio quasar with superluminal components steadily ejected from its radio core. Based on the 1981-1986 VLBI observations at centimeter wavelengths Pauliny-Toth et al. (1987) firstly detected some distinctive features of its superluminal components: superluminal brightening of stationary structure, apparent acceleration of superluminal components and extreme curvature in the apparent trajectory.
Multiwavelength monitoring campaigns on 3C454.3 have been performed during \(\gamma\)-ray outbursts to investigate the correlation between radio, optical, X-ray and \(\gamma\)-ray flares, especially the correlation between the \(\gamma\)-ray outbursts and the ejection of superluminal components from the radio core on parsec-scales (e.g., Jorstad et al. 2001, 2010, 2013; Vercellone et al. 2010). These studies have greatly improved the understanding of the distinctive high-energy phenomena occurred in 3C454.3 and in other \(\gamma\)-ray blazars. Moreover, VLBI observations have revealed more peculiar features in its morphological structure and kinematics. For example,
(1) at 15 GHz an arc-like structure around the core was detected (Britzen et al. 2013; cf. Fig.A.1) in an area delimited by core distance [1.5 mas, 3.5 mas] and position angle [-40\({}^{\circ}\), -110\({}^{\circ}\)], which was formed by the distributed superluminal components. It expanded with superluminal velocities, dominating the pc-structure over \(\sim\)14 years. A similar arc-like structure was also observed at 43 GHz (cf. Fig.1 below; Jorstad et al. 2005);
(2) VLBI observations at 43 GHz showed that the trajectories of moving components could be separated into two groups: e.g., beyond core distance \(\sim\)0.5 mas, knot B6 (and B3) moved northwest, while knot B4 moved southwest (Jorstad et al. 2005);
(3) However, in a striking contrast, near the core (within
core distance of \(\sim\)0.2 mas) knot B6 moved along an extremely curved path with its ejection position angle \(\sim\) -150\({}^{\circ}\) (at ejection epoch 1999.80), while knot B4 moved along a track with its ejection position angle \(\sim\) -80\({}^{\circ}\) (at ejection epoch 1998.36). This position angle swing (\(\sim\)50\({}^{\circ}\)/yr; Jorstad et al. 2005) seems too fast to be explained in terms of a "sudden jump" in the jet-nozzle direction and it could be presumed as a clue for a double-jet structure in 3C454.3 with B4 and B6 being ejected from the respective jets (similar rapid position angle swings were observed in blazars OJ287 and 3C279, cf. Qian 2018b, Qian et al. 2019);
(4) A detailed analysis of the VLBI-kinematics observed at 43 GHz revealed some recurrent trajectory patterns: e.g., the knot pair B4/K09 (in jet-A) and the knot-pairs B6/K10 and B2/K16 (in jet-B) with time intervals of \(\sim\)1-2 precession periods (Qian 2021). This discovery seems significant for understanding the nature of their kinematics. That is, the recurrence of these regular trajectory patterns may not only imply some periodic behavior induced by the jet-nozzle precession, but also the possible existence of some common precessing trajectory pattern(s) as suggested in Qian et al.
(5) The periodicity analysis of the secular optical (B-band) light curve for 3C454.3 by using Jurkevich method (Su 2001) resulted in a period of \(\sim\)12.4 yr. Similarly, an analysis of its light curves at multifrequencies (4.8, 8, 14.5, 22 and 37 GHz) (Kudryavtseva & Pyatunina, 2006) revealed the periodicity in its flux variations with a period of 12.4 years also. Based on the quasi-regular double-bump structure in its 4.8 and 8 GHz light curves, Qian et al. (2007) proposed a binary black hole model with a double-jet structure to explain these light curves;
(6) VLBI observations at 15 GHz detected a radio burst (during 2005-2011) for superluminal component R3 associated with an extreme curvature in its trajectory in the outer jet regions (at core distances \(r_{n}\)\(\sim\)2-3.5 mas).
In this paper we shall investigate the flux evolution observed at 43 GHz for two superluminal components in 3C454.3 (knot B4 and knot B6) and the connection with their kinematic properties. In order to search for the association of the flux variations with their Doppler-boosting effect, the model-fitting of their kinematic behaviors were performed more closely, taking into consideration of more details in the curves delineating their kinematics (e.g., the observed details in their core distance \(r_{n}(t)\) and coordinate \(Z_{n}(t)\) as function of time), which were ignored in the previous studies.
3C454.3 has a very complex structure at 43 GHz. A map observed at 2001.28 is shown in Figure 1 (cf. the sequence of maps presented in Fig.15 in Jorstad et al. 2005).
We shall utilize the results obtained for 3C454.3 by Qian et al. (2021). In that work its thirteen superluminal components observed at 43 GHz were separated into two groups: group-A including the six components (B4, B5, K2, K3, K09 and K14) and group-B including the seven components (B1, B2, B3, B6, K1, K10 and K16).1. Moreover, a double-jet structure (jet-A plus jet-B) was assumed to eject the superluminal knots of group-A and group-B, respectively. Interestingly, the kinematical behavior of the superluminal knots ascribed to group-A and group-B could be well model-fitted respectively in the framework of our precessing jet-nozzle scenario. It was found that both jets precess with the same precession period of 10.5 yr, but have different precessing common trajectory patterns.
As a supplement we shall discuss the flux evolution associated with the Doppler-boosting effect for the superluminal knot R3 observed at 15 GHz.
Footnote 1: As identified in Jorstad et al. (2005, 2013).
## 2 Recapitulation of the precessing nozzle model: Geometry and formalism
In order to investigate the kinematic behavior and distribution of trajectory of the superluminal components on parsec scales in terms of our precessing jet-nozzle scenario, we define a special geometry in general case, where four coordinate systems are introduced (Qian et al. 2021; Qian 2022a, 2022b), as shown in Figure 2: (X,Y,Z), \((X_{n},Y_{n},Z_{n})\), \((X_{p},Y_{p},Z_{p})\), and \((x^{\prime}\),\(y^{\prime}\),\(z^{\prime}\)). Z-axis directs toward the observer, \((X_{n},Y_{n})\) and \((X_{p},Y_{p})\) define the plane of the sky with \(X_{n}\)-axis pointing toward the negative right ascension and \(Z_{n}\)-axis toward the north pole. Parameter \(\epsilon\) defines the angle between Z-axis and \(Y_{n}\)-axis and \(\psi\) the angle between X-axis and \(X_{n}\)-axis. Thus parameters \(\epsilon\) and \(\psi\) define the plane where the jet-axis (\(x_{0}(z_{0})\)) locates (see below). The helical trajectory of a knot can be described by the parameters A(\(s_{0}\)) and \(\phi(s_{0})\), which are defined in the coordinate system \((x^{\prime}\),\(y^{\prime}\),\(z^{\prime})\), where \(s_{0}\) is the arc-length along the axis of the helix and \(z^{\prime}\)-axis is along the tangent of the axis of the helix. In addition, The helical trajectory precesses around the jet axis, producing the trajectories of different knots ejected at different times. That is superluminal components are ejected from the precessing jet nozzle at the helical phase \(\phi\) which is related to the precession phase \(\phi_{0}\). \(\phi_{0}\) varies over a range of [0, 2\(\pi\)] during a precession period and is related to its ejection epoch \(t_{0}\).
In the following we will adopt the formalism of the precessing nozzle scenario given in Qian et al. (2021). The projection of the spatial trajectory of a knot on the sky-plane can be calculated by using the transformation between the coordinate systems. Superluminal knots are assumed to move on a helical trajectory defined by the parameters A(\(s_{0}\)) (amplitude) and \(\phi(s_{0})\) (phase), \(s_{0}\) - arc-length along the jet axis which is defined by
\[x_{0}(z_{0})=p(z_{0})({z_{0}}^{\zeta}) \tag{1}\]
where
\[p(z_{0})=p_{1}+p_{2}[1+exp(\frac{z_{t}-z_{0}}{z_{m}})]^{-1} \tag{2}\]
\(\zeta\), \(p_{1}\), \(p_{2}\), \(z_{t}\) and \(z_{m}\) are constants. \(\zeta\)=2 represents a parabolic shape for the axis of the helical trajectory.
The arc-length along the helical trajectory is \(s_{0}\):
\[s_{0}=\int_{0}^{z_{0}}\sqrt{1+(\frac{dx_{0}}{dz_{0}})^{2}}dz_{0} \tag{3}\]
The helical trajectory of a knot can be described in (X,Y,Z) system as follows.
\[X(s_{0})=A(s_{0}){\rm cos}\,\phi(s_{0}){\rm cos}\,\eta(s_{0})+x_{0} \tag{4}\]
\[Y(s_{0})=A(s_{0}){\rm sin}\,\phi(s_{0}) \tag{5}\]
\[Z(s_{0})=A(s_{0}){\rm cos}\,\phi(s_{0}){\rm sin}\,\eta(s_{0})+z_{0} \tag{6}\]
where \(\tan\eta(s_{0})\)=\(\frac{dx_{0}}{dz_{0}}\). The projection of the helical trajectory on the plane of the sky (or the apparent trajectory) is represented by
\[X_{n}=X_{p}{\rm cos}\,\psi-Z_{p}{\rm sin}\,\psi \tag{7}\]
\[Y_{n}=X_{p}{\rm sin}\,\psi+Z_{p}{\rm cos}\,\psi \tag{8}\]
where
\[X_{p}=X(s_{0}) \tag{9}\]
\[Z_{p}=Z(s_{0}){\rm sin}\,\epsilon-Y(s_{0}){\rm cos}\,\epsilon \tag{10}\]
All coordinates and amplitude (A) are measured in units of milliarcsecond. Introducing the following functions
\[\Delta={\rm arctan}[(\frac{dX}{dZ})^{2}+(\frac{dY}{dZ})^{2}]^{-\frac{1}{2}} \tag{11}\]
\[\Delta_{p}={\rm arctan}(\frac{dY}{dZ}) \tag{12}\]
\[\Delta_{s}={\rm arccos}[(\frac{dX}{ds_{0}})^{2}+(\frac{dY}{ds_{0}})^{2}+( \frac{dZ}{ds_{0}})^{2}]^{-\frac{1}{2}} \tag{13}\]
We can then calculate the viewing angle \(\theta\), apparent transverse velocity \(\beta_{app}\), Doppler factor \(\delta\) and the elapsed time T, at which the knot reaches distance \(z_{0}\), as follows:
\[\theta={\rm arccos}[{\rm cos}\,\epsilon({\rm cos}\,\Delta+{\rm sin}\,\epsilon \tan\Delta_{p})] \tag{14}\]
\[\delta=[\Gamma(1-\beta{\rm cos}\,\theta)]^{-1} \tag{15}\]
where \(\Gamma\)=\([1-\beta^{2}]^{-\frac{1}{2}}\) - bulk Lorentz factor, \(\beta\)=\(v\)/c, \(v\) - velocity of the knot.
\[\beta_{app}=\beta{\rm sin}\,\theta/(1-\beta{\rm cos}\,\theta) \tag{16}\]
\[T=\int_{0}^{s_{0}}\frac{(1+z)}{\Gamma\delta{\rm vcos}\,\Delta_{s}}ds_{0} \tag{17}\]
In this paper we shall adopt the concordant cosmological model (\(\Lambda CDM\)) with \(\Omega_{m}\)=0.3, \(\Omega_{\Lambda}\)=0.7, and Hubble constant \(H_{0}\)=70km\(s^{-1}Mpc^{-1}\) (Spergel et al. 2003). Thus for 3C45.43, z=0.859, its luminosity distance is \(D_{l}\)=5.483 Gpc (Hogg 1999, Pen 1999) and angular diameter distance \(D_{a}\)= 1.586 Gpc. Angular scale 1 mas=7.69 pc and proper motion of 1 mas/yr is equivalent to an apparent velocity of 46.48 c.
## 3 Knot B4: Model-fitting of kinematic behavior and 43 GHz light curve
As shown in the previous work (Qian et al. 2021) knot B4 was ejected from the nozzle associated with the jet-A in our precessing jet-nozzle scenario. Other knots attributed to jet-A are B5, K2, K3, K09 and K10. (Jorstad et al. 2005, 2010, 2013). The kinematic behavior of these knots has been consistently interpreted in terms of our scenario (Qian et al. 2021), but untill now their flux evolution has not been investigated.
Knot B4 produced two radio bursts. In order to investigate
Figure 1: A 43 GHz image of 3C454.3 observed at 2001.28 (by courtesy of S.G.Jorstad, cf. Fig.44 in Jorstad et al. 2005; also cf. Fig.15 (for a sequence of total intensity images during 1998.17–2001.28) and Fig.18 (for the trajectories of B4 and B6). On this map the positions of three knots (B4, B6 and D) should be indicated: knot B4 at position \([r_{n},PA]\)=[1.16 mas, –92.5\({}^{\circ}\)], knot B6 at position \([r_{n},PA]\)=[0.70 mas, –80.1\({}^{\circ}\)] and knot D at position \([r_{n},PA]\)=[6 mas, –82\({}^{\circ}\)]. The ejection position angles of B4 and B6 were \(-80^{\circ}\) and \(-150^{\circ}\) respectively. Their ejection epochs were 1998.36 and 1999.80. Thus a position angle swing of \(\sim 70^{\circ}\) occurred in a short time-interval of \(\sim\)1.3 years. It seems that such a rapid swing could not be induced by a sudden ”jump” in the position angle of the jet-nozzle, but could be resulted from the two knots being ejected by two different nozzles. In the precessing jet-nozzle scenario for 3C454.3 (Qian et al. 2021), knot B4 and knot B6 are ascribed to jet-A and jet-B, respectively. In addition, there is an arc-like structure around the core which distributed in the area delimited by \(r_{n}\)\(\simeq\)[1.0–1.5 mas] and PA\(\simeq\)[–30\({}^{\circ}\), –110\({}^{\circ}\)]. This arc-like structure is quite similar to that detected at 15 GHz by Britzen et al. (2013); cf. Fig.A.1 in the appendix.
their flux evolution associated with its kinematic behavior we adopt the model parameters for jet-A as before:
(a) The plane, in which the jet axis locates, is defined by parameters \(\epsilon\) and \(\psi\): \(\epsilon\)=0.0126 rad=0.72\({}^{\circ}\), \(\psi\)=-0.1 rad=-5.73\({}^{\circ}\).
(b) The jet axis is defined by a set of parameters: \(\zeta\)=2, \(p_{1}\)=\(p_{2}\)=0, \(z_{t}\)=33 mas, \(z_{m}\)=3.0 mas (cf. equations 1 and 2).
(c) The precessing common trajectory pattern is assumed as: amplitude A=\(A_{0}\)sin(\(\pi z_{0}/Z_{1}\)) with (\(A_{0}\)=0.727, \(Z_{1}\)=240 mas) and the helical phase \(\phi\) identically equal to the precession phase \(\phi_{0}\).
For jet-A the precession phase \(\phi_{0}\) of the superluminal knots is related to the ejection epoch \(t_{0}\) as follows:
\[\phi_{0}(rad)=4.58+\frac{2\pi}{T_{0}}(t_{0}-1998.24) \tag{18}\]
In order to investigate the flux evolution associated with the Doppler boosting effect the observed flux density will be calculated as follows:
\[S_{obs}(\nu,t)=S_{int}(\nu,t){\times}[\delta(t)]^{3+\alpha(\nu,t)} \tag{19}\]
where \(S_{obs}\) - observed flux density, \(S_{int}\) - intrinsic flux density (\(S_{\nu}\)\(\propto\)\(\nu^{-\alpha}\)) and \(\alpha\) - spectral index.
The modeled distribution of the precessing trajectory for the superluminal components of jet-A is shown in Figure 3 (left panel) for precession phases \(\phi\)=0.0, 1.0, 2.0, 3.0, 4.0, 5.0 rad, respectively. The projected trajectories of B5, K09 and K14 within the jet-cone are shown in the right panel. As shown in the previous paper (Qian et al. 2021), five superluminal components (B4, B5, K2, K3, K09 and K14) of jet-A were found to undergo accelerated/decelerated motion, revealing increase/decrease in their bulk Lorentz factor and Doppler factor. However, their flux evolution due to the Doppler-boosting effect has not been investigated. Here we shall model-fit the 43 GHz light curve for knot B4 in terms of its Doppler-boosting effect in combination with the explanation of its kinematics.
### Knot B4: model-fitting results of kinematics
The model-fitting results of its kinematics are shown in Figures 4-7.
The traveled distance Z(t) along the jet-axis (left panel) and the modeled curves for parameters \(\epsilon(t)\) and \(\psi(t)\) are shown in Fig.4. Before 1998.52, or core distance \(r_{n}\)\(\leq\)0.066 mas (Z\(\leq\)23.0 pc), \(\epsilon\)=0.72\({}^{\circ}\) and \(\psi\)=-5.73\({}^{\circ}\), knot B4 moved along the precessing common trajectory, while after 1998.52 it started to move along its own individual trajectory, where parameter \(\psi\) showed quite large changes during the two radio bursts in (1998.4-1999.4) and (1999.9-2001.2).
In the model fitting its precession phase was assumed to be \(\phi_{0}\)=4.58 rad and the corresponding ejection time \(t_{0}\)=1998.24, well consistent with the ejection time 1998.36\(\pm\)0.07 derived by Jorstad et al. (2005).
It can be seen from Figs. 5 and 6 that its entire kinematic features (including trajectory \(Z_{n}(X_{n})\), core separation \(r_{n}(t)\), coordinates \(X_{n}(t)\) and \(Z_{n}(t)\)) are all well fitted extending to core separation \(r_{n}\)\(\sim\)1.2 mas. The model-derived apparent velocity \(\beta_{app}(t)\), viewing angle \(\theta(t)\), bulk Lorentz factor \(\Gamma(t)\) and Doppler factor \(\delta(t)\) are shown in Fig.7. \(\Gamma(t)\), \(\delta(t)\) and \(\beta_{app}(t)\) all have a double-peak structure, corresponding to the double-peak structure of the light curve (Fig.8). The viewing angle \(\theta(t)\) varied from \(\sim\)1.3\({}^{\circ}\) (1998.3) to \(\sim\)4.6\({}^{\circ}\) (2002.0).
### Knot B4: Flux evolution and Doppler-boosting effect
B4 produced two radio bursts during 1998.4-1999.4 and 1999.9-2001.2. In order to interpret the entire lightcurve, the model parameters (\(\epsilon(t)\), \(\psi(t)\) and \(\Gamma(t)\)) were carefully and consistently selected and we paid much more attention on the details in the observed kinematic behavior (especially on the details in \(r_{n}(t)\), \(dr_{n}/dt\) and \(Z_{n}(t)\)).
The model-fit results of the entire light curve (including
Figure 2: Geometry of the precessing nozzle scenario with a helical trajectory pattern.
Figure 4: Knot B4: the modeled traveled distance Z(t) along the jet axis (left panel) and the modeled curves for the parameter \(\epsilon(t)\) and \(\psi(t)\) defining the plane where the jet-axis locates. Before 1998.52 \(\epsilon\)=0.72\({}^{\circ}\) and \(\psi\)=-5.73\({}^{\circ}\), B4 moved along the precessing common trajectory. After 1998.52 B4 started to move along it own individual track. During the two radio bursts (\(\sim\)1998.4–1999.4 and \(\sim\)1999.9–2001.2) parameter \(\psi\) varied quite largely.
Figure 5: Knot B4: the model-fit of the observed trajectory \(Z_{n}(X_{n})\). Within \(\sim\)0.07 mas of the coordinate \(X_{n}\) B4 moved along the precessing common trajectory, while beyond that it moved along its own individual track, deviating from the common track. A prominent curvature occurred at \(r_{n}\simeq\)0.7–1.2 mas.
Figure 3: Jet-A. Left panel: the modeled distribution of the precessing common trajectories for the superluminal components at precessing phases \(\phi\)=0.0, 1.0,2.0, 3.0, 4.0 and 5.0, respectively. Right panel: the trajectories observed for knots B5, K09 and K14 within the jet-cone. The jet-axis is at position angle of \(-84.3^{\circ}\).
both the radio bursts) are shown in Figure 8. For the first burst, the epoch of the modeled peak \(t_{max}\)=1998.47 with the maximum Doppler factor \(\delta_{max}\)=39.49 and the maximum flux density \(S_{max}\)=1.88 Jy, while its intrinsic flux density \(S_{int}\)=4.86 \(\mu\)Jy.
For the second burst, the epoch of the modeled peak \(t_{max}\)=2000.01 with the maximum Doppler factor \(\delta_{max}\)=18.02 and maximum flux density \(S_{max}\)=2.17 Jy, while its intrinsic flux density \(S_{int}\)=87.5 \(\mu\)Jy. It should be noted that the second burst originated from its re-acceleration in the convergence/collimation region near the position [\(r_{n}\)=0.6-1.2 mas, PA=\(\sim\)\(-\)90\({}^{\circ}\)], where parameter \(\psi\) rapidly increased, resulting in the southwest curvature of its trajectory (Figs. 4-6).
## 4 Knot B6: Model fitting of kinematic behavior and 43 GHz light curve
As in the previous work superluminal knots B6 and B1, B2, B3, K1, K10, K16 were assumed to be ejected by the nozzle of jet-B. For jet-B the modeled distribution of the precessing common trajectories for its superluminal components is shown in Fig.9. We shall use the same model parameters as before (Qian et al. 2021).
(a) The jet-axis locates in a plane defined by the parameters \(\epsilon\)=0.0126 rad=0.72\({}^{\circ}\) and \(\psi\)=0.20 rad=11.46\({}^{\circ}\).
(b) The shape of the jet-axis is defined by a set of parameters: \(\zeta\)=2, \(p_{1}\)=0, \(p_{2}\)=1.34\(\times\)10\({}^{-4}\)/mas, \(z_{t}\)=66 mas and \(z_{m}\)=6.0 mas (cf. equations 1 and 2).
(c) The common precessing trajectory pattern is defined by the amplitude parameter A=\(A_{0}\)[\(\sin(\pi z_{0}/Z_{1})\)]\({}^{1/2}\) with (\(A_{0}\)=0.182 mas and \(Z_{1}\)=396 mas) and helical phase \(\phi\)=\(\phi_{0}\)
Figure 6: Knot B4: the model-fits of the core separation \(r_{n}(t)\) and coordinates \(X_{n}(t)\) and \(Z_{n}(t)\). These fits are almost perfect (especially the fit for \(Z_{n}(t)\)) and much better than those previously presented in Qian et al. (2021) due to carefully having taken the details in its kinematic behavior into full account (especially details in \(r_{n}(t)\), d\(r_{n}\)/dt and \(Z_{n}(t)\)).
Figure 7: Knot B4. Left panel: the model-derived apparent velocity \(\beta_{app}(t)\) and viewing angle \(\theta(t)\). Right panel: the model-derived bulk Lorentz factor \(\Gamma(t)\) and Doppler factor \(\delta(t)\). \(\Gamma(t)\), \(\delta(t)\) and \(\beta_{app}(t)\) all show a double-peak structure closely related to its two radio bursts. At the first peak (epoch 1998.5) \(\delta_{max}\)=39.5, \(\Gamma_{max}\)=26.5, \(\beta_{app,max}\)=23.1 and \(\theta\)=1.26\({}^{\circ}\), while at the second peak (epoch 2000.0) \(\delta\)=\(\delta_{max}\)=18.0, \(\Gamma\)=14.9, \(\beta_{app}\)=14.5 and \(\theta\)=3.1\({}^{\circ}\). The maximum Lorentz factor was not coincident with the maximum Doppler factor: at 2000.1 \(\Gamma\)=\(\Gamma_{max}\)=17.2, \(\delta\)=17.7, \(\beta\)=\(\beta_{app,max}\)=17.1 and \(\theta\)=3.24\({}^{\circ}\). Our model-fitting results for the apparent velocity are quite different from those given in Jorstad et al. (2005; cf. its Fig.30) which were determined by using polynomial approximations. Figs.5–7 demonstrate that the entire kinematics observed within core separation \(r_{n}\)\(\simeq\)1.2 mas (corresponding to the traveled distance Z=16.3 mas\(\simeq\)125 pc) can be well explained in terms of our precessing nozzle scenario.
(\(z_{0}/Z_{2}\))\({}^{1/2}\) with \(Z_{2}\)=3.58 mas and \(\phi_{0}\) being the precession phase which is related to its ejection time \(t_{0}\) as follows:
\[\phi_{0}(rad)=0.42+\frac{2\pi}{T_{0}}(t_{0}-1994.46) \tag{20}\]
As shown in the previous paper (Qian et al., 2021), four superluminal components (B2, B6, K1 and K16) of jet-B were found to participate in accelerated/decelerated motions, showing the trends of increasing/decreasing in their bulk Lorentz factor and Doppler factor. However, until now their flux evolution due to the Doppler-boosting effect has not been investigated. Here we shall model-fit the 43 GHz light curve for knot B6 in combination with the explanation of its kinematics.
## 5 Knot B6: model-fitting results of kinematics
The model fitting results of its kinematics are shown in Figures 10-13.
Its ejection time \(t_{0}\)=1999.61, well consistent with that (\(t_{0}\)=1999.80\(\pm\)0.37) given by Jorstad et al. (2005) and the corresponding precession phase \(\phi_{0}\)=3.50 rad.
The modeled traveled distance Z(t) along the jet-axis and the model-derived curves for the parameters \(\epsilon(t)\) and \(\psi(t)\) are shown in Fig.10. Before 2000.54 (or \(r_{n}\)\(\leq\)0.18 mas) \(\epsilon\)=0.72\({}^{\circ}\) and \(\psi\)=11.5\({}^{\circ}\), knot B6 moved along the precessing common trajectory. After 2000.54 \(\psi\) quickly decreased to \(-\)4.6\({}^{\circ}\) and B6 started to move along its own individual track, deviating from the precessing common trajectory.
The model-fitting results of the observed trajectory \(Z_{n}(X_{n})\), distance from the core \(r_{n}(t)\), coordinates \(X_{n}(t)\) and \(Z_{n}(t)\) are shown in Figs. 11 and 12. All these kinematic features were well fitted by the precessing nozzle scenario. The model-derived curves for Lorentz factor \(\Gamma(t)\), Doppler factor \(\delta(t)\), apparent velocity \(\beta_{app}(t)\) and viewing angle \(\theta(t)\) are shown in Fig.13. \(\Gamma(t)\), \(\delta(t)\) and \(\beta_{app}(t)\) all have a bump structure. At 2000.30 \(\Gamma_{max}\)=30.5, and \(\delta_{max}\)=56.0, but \(\beta_{app,max}\)=21.4 at 2000.6. The viewing angle \(\theta(t)\) varied in the range [0.72\({}^{\circ}\)(1999.6)-0.38\({}^{\circ}\)(2000.0)-1.63\({}^{\circ}\)(2001.5)].
Figure 8: Knot B4. Left panel: the model fit of the 43 GHz light curve. The modeled peak flux densities are 1.88 Jy (at 1998.5) and 2.17 Jy (at 2000.0), respectively. The intrinsic flux density of the first burst was 4.86\(\mu\)Jy. Right panel: the light curve normalized by the modeled peak flux density of the first burst is well fitted by the Doppler boosting profile [\(\delta(t)/\delta_{max}\)]\({}^{3+\alpha}\) (\(\alpha\) was asssumed to be 0.5). The second burst had its Doppler factor (\(\delta_{max}\)=18.0) much smaller than the first burst (\(\delta_{max}\)=39.5), while its observed flux density (2.17 Jy) was larger than that (1.88 Jy) of the first burst. Thus in the model fitting of the flux evolution the intrinsic flux density of the second burst was assumed to be eighteen times that of the first burst (i.e., 87.5\(\mu\)Jy for the second burst).
Figure 9: Jet-B. Left panel: the modeled distribution of the precessing common trajectories for the superluminal components at precession phases \(\phi\)=0.0, 1.0, 2.0, 3.0, 4.0 and 5.0 rad, repectively. Right panel: the projected trajectories of knots B3, K1 and K10 within the jet cone. The jet-axis is at position angle –101.5\({}^{\circ}\).
Figure 11: Knot B6: the model fit of the apparent trajectory. Within core separation \(r_{n}\)\(\sim\)0.18 mas (or before 2000.54, corresponding to the traveled distance Z\(\leq\)19.0 mas=146 pc) \(\epsilon\)=0.72\({}^{\circ}\) and \(\psi\)=11.5\({}^{\circ}\), knot B6 moved along the precessing common trajectory with its precession phase \(\phi_{0}\)=3.50 rad. Beyond \(r_{n}\)=0.18 mas it started to move along its own individual track, deviating from the precessing common trajectory.
Figure 12: Knot B6: the model-fits of the core separation \(r_{n}(t)\) (left panel) and the coordinates \(X_{n}(t)\) and \(Z_{n}(t)\) (right panel). All these kinematic features are well fitted by the precessing nozzle scenario. The radio burst occurred during \(\sim\)2000–2001 is associated with its accelerated/decelerated motion.
Figure 10: Knot B6: the modeled traveled distance Z(t) along the jet axis (left panel) and the model-derived curves for the parameters \(\epsilon(t)\) and \(\psi(t)\) which define the plane where the axis of jet-B locates. Before 2000.54 (or \(r_{n}\)\(\leq\)0.18 mas, corresponding to Z\(\leq\)19.0 mas=146 pc) \(\epsilon\)=0.72\({}^{\circ}\) and \(\psi\)=11.5\({}^{\circ}\), knot B6 moved along the precessing common track. After 2000.54 \(\psi\) rapidly decreased to –4.6\({}^{\circ}\) (at 2001.4, knot B6 started to move along its own individual trajectory, deviating from the precessing common track.
### Knot B6: Flux evolution and Doppler-boosting effect
As for the case of knot B4 we calculated the observed light curve by using equation (19) in order to take account of its Doppler boosting effect. The model fit of its light curve is shown in Figure 14. It can be seen that the 43 GHz light curve is well fitted with the modeled peak flux density 4.31 Jy at 2000.30 and an intrinsic flux density of 3.3\(\mu\) Jy (left panel). The light curve normalized by the modeled peak flux density is well fitted by its Doppler-boosting profile \([\delta(t)/\delta_{max}]^{3+\alpha}\) with a presumed value of \(\alpha\)=0.5 (right panel).
## 6 Knot R3: model fitting results of kinematics and light curve observed at 15 GHz
Two characteristics should be emphasized: (1) An arc-like structure was detected at 15 GHz; (2) Its light curve comprises two radio bursts. These features are very similar to those of knot B4 observed at 43 GHz. See Appendix.
## 7 Discussion and Conclusion
We have applied the precessing jet-nozzle scenario proposed previously (e.g. Qian et al. 1991, 2014, 2021; Qian 2022a, 2022b) to successfully model-fit the kinematic behavior of the superluminal components B4 and B6 in blazar 3C454.3 and nicely explain the 43 GHz light curves in terms of their Doppler-boosting effect. Thus their kinematic properties (including the entire trajectory \(Z_{n}(X_{n})\), core separation \(r_{n}(t)\) and coordinates \(X_{n}(t)\) and \(Z_{n}(t)\)) and their flux evolution have been completely interpreted as a whole. Their kinemtic parameters (including bulk Lorentz factor \(\Gamma(t)\), Doppler factor \(\delta(t)\), viewing angle \(\theta(t)\) and apparent velocity \(\beta_{app}(t)\) as function of time) have been correctly determined.
In order to achieve this goal, we have carefully taken some significant details in their kinematic behavior (e.g., details in the observed core-separation curve \(r_{n}(t)\) and in the coordinate \(Z_{n}(t)\)) into full account, which were neglected in the previous work (Qian et al. 2021).
Obviously, similar studies can also be done for more components (e.g., B5, K3, K09, K14 (jet-A) and B2, K1, K16 (jet-B)) to associate their flux evolution with their
Figure 14: Knot B6: the model-fit of its light curve. The 43 GHz light curve is well fitted by the Doppler-boosting effect with the modeled peak flux density of 4.31 Jy at 2000.3 and its intrinsic flux density 3.3\(\mu\)Jy (left panel). The light curve normalized by the modeled peak flux is well fitted by its Doppler-boosting profile \([\delta/\delta_{max}]^{3+\alpha}\) with an assumed value of \(\alpha\)=0.5.
Figure 13: Knot B6: the model-derived apparent velocity \(\beta_{app}\) and viewing angle \(\theta(t)\) (left panel), and the model-derived bulk Lorentz factor \(\Gamma(t)\) and Doppler factor \(\delta(t)\) (right panel). \(\beta_{app}\), \(\Gamma\) and \(\delta\) all have a bump structure. At 2000.30 \(\delta\)=\(\delta_{max}\)=55.9 and \(\Gamma\)=\(\Gamma_{max}\)=30.5, while \(\beta\)=\(\beta_{max}\)=21.4 at 2000.59; \(\theta(t)\) varied in the range [0.72\({}^{\circ}\) (1999.6), 0.38\({}^{\circ}\) (2000.0), 1.63\({}^{\circ}\) (2001.5)]
Doppler boosting effect, because the modeled curves of Lorentz/Doppler factor for these components were already derived approximately in Qian et al. (2021). It seems that the optical light curves of superluminal knots observed in 3C454.3 (Jorstad et al. 2010, 2013) could also be studied within the framework of our precessing nozzle scenario, if the precessing common trajectory patterns (e.g., helical patterns on much smaller scales) are correctly selected (cf. Qian 2018b).
The full explanation of the kinematics and flux evolution observed at 43 GHz for both B4 and B6 in terms of the precessing nozzle scenario has further clarified the distinct features in 3C454.3: (1) its superluminal components could be separated into two groups which have different kinematic and flaring properties. For example, knot B4 moved southwest along a curved trajectory extending to core separation of \(\sim\)1.2 mas, passing through a convergence/recollimation region and producing a major burst. Moreover, the track of knot B4 could be connected to that of knot D at core distance \(\sim\)6 mas (cf. Fig.1). In contrast, knot B6 moved northwest along a track only extending to core distance of \(\sim\)0.7 mas without any flaring activity in the outer jet region. The VLBI observations at 15 GHz also revealed that knot R3 had its kinematic behavior similar to that of knot B4 observed at 43 GHz, while the kinematic behavior of knot R1 and R2 was quite different from that of knot R3; (2) At both 43 GHz and 15 GHz a prominent arc-like structure was detected, distributing over a very broad range of position angle from \(\sim\)-120\({}^{\circ}\) to \(\sim\)-30\({}^{\circ}\). A rapid position angle swing of \(\sim\)70\({}^{\circ}\) between knot B4 and knot B6 in \(\sim\)1.3 years might be regarded as a clue for the existence of a double-jet structure in 3C454.3, indicating that knot B4 and knot B6 were ejected from their respective jet; (3) The recurrent trajectory patterns found in the knot-pairs B4/K09, B6/K10 and B2/K14 may be regarded as favorable evidence to suggest some periodic behavior in knots' kinematics induced by the jet-nozzle precession with a precession period of \(\sim\)10.5 yr and the possible existence of a precessing common trajectory. Such kind of periodic recurrence of curved trajectory patterns seems to be important signatures for recognizing nozzle precession and determining precession periods for blazars; (4) Knot B4 produced two radio bursts at 43 GHz: one occurred near the core and the other in the outer jet region. Similarly, knot R3 also produced two radio bursts at 15 GHz, but at different core distances. Both the bursts can be interpreted in terms of Doppler-boosting effect. This similarity observed at diffrent frequencies might indicate the stability of the track patterns for knot B4 and knot R3.
Our precessing nozzle scenario is based on the two assumptions: (1) Superluminal components are ejected from a jet-nozzle at corresponding precession phases when the jet-nozzle precesses. Here for 3C454.3, the precession period is found to be 10.5 years; (2) Superluminal components move along their common precessing tracks in the innermost jet regions with a transition at certain core-distances in the outer-jet regions from precessing common tracks to their own individual trajectories.
However, these assumptions may imply to introduce severe constraints on the precessing nozzle scenario when it is applied to investigate the VLBI-kinematics of superluminal components in blazar 3C454.3 (Qian et al. 2021).2 That is, these assumptions may be "unavoidable" to require double precessing-nozzle structures existing in these blazars. Otherwise, if "single jet-nozzle" structures are assumed for these blazars, there would be no unified nozzle-precession to be explored for consistently delineating the kinematics of their superluminal components as a whole. One would have to deal with the kinematics of many superluminal components which are independent of each other. Thus it would be difficult to clarify their respective characteristics and connections to the central engine (the black hole/accretion disk system in its nucleus) for the components as a whole. In contrast, for quasars (e.g., B1308+328, PG1302-102 and NRAO 150) the precessing jet-nozzle scenario with a single jet structure may be applicable to determine their precession periods (Qian et al. 2018a, 2017; Qian 2016 and Qian 2023).
Footnote 2: Also in 3C345 (Qian 2022a, 2022b); OJ287 (Qian 2018b) and 3C279 (Qian et al. 2019).
Therefore, at present, the assumption of "double jet-nozzle structure" for blazar 3C454.3 (also for blazars 3C279, 3C345 and OJ287) may be regarded as a "working hypothesis", although it has initiated some physically meaningful results and there are some observational clues in favor of this suggestion. 3
Footnote 3: MHD theories of relativistic jets also provide some arguments for the possible existence of double-jet structure in binary black hole systems (Artymowicz & Lubow 1996, Artymowicz 1998, Shi et al. 2012, Shi & Krolik 2015).
In summary, we have applied our precessing nozzle scenario to study the kinematics and flux evolution of superluminal components in a few QSOs and blazars, determining their precession periods and precessing common trajectory patterns. For quasars B1308+326, PG1302-102 and NRAO150 only single jet structure has been assumed, while for blazars 3C279, 3C345, 3C454.3 and OJ287 double-jet structures are assumed. In all these cases the flux evolution of superluminal components can be interpreted in terms of their Doppler-boosting effect, although their intrinsic variations on shorter time-scales induced by the evolution of superluminal knots (superluminal plasmoids or relativistic shocks) need to be taken into account. The combined effects of Doppler-boosting and intrinsic variation in the flux evolution of superluminal components have also been found in blazar 3C345 (especially for its knots C9 and C10; Qian 2022a, 2022b).
Generally, the assumption of precessing common trajectory may be applicable, but the model-derived patterns and their extensions are quite different for different superluminal knots (Qian et al. 2021 and references therein). Higher resolution VLBI observations are needed to show if this assumption is still valid within core separations \(<\)0.1 mas.
Theoretically, this assumption may be based on the theoretical works of relativistic magnetohydrodynamics for jet formation: magnetic effects in acceleration/collimation zone of relativistic jets are very strong, forming some very solid magnetic structures and controlling the trajectory patterns of moving superluminal knots ejected from the jet nozzle (e.g., Vlahakis & Konigl 2004, Blandford & Payne 1982, Blandford & Znajek 1977, Camenzind 1990 and references therein). Thus all the superluminal components in a blazar can move along the precessing common trajectory
if the jet nozzle is precessing.
###### Acknowledgements.
We would like to thank J.W. Qian for her help with the preparation of the Figures.
|
2309.01989 | Exploring van der Waals materials with high anisotropy: geometrical and
optical approaches | The emergence of van der Waals (vdW) materials resulted in the discovery of
their giant optical, mechanical, and electronic anisotropic properties,
immediately enabling countless novel phenomena and applications. Such success
inspired an intensive search for the highest possible anisotropic properties
among vdW materials. Furthermore, the identification of the most promising
among the huge family of vdW materials is a challenging quest requiring
innovative approaches. Here, we suggest an easy-to-use method for such a survey
based on the crystallographic geometrical perspective of vdW materials followed
by their optical characterization. Using our approach, we found As2S3 as a
highly anisotropic vdW material. It demonstrates rare giant in-plane optical
anisotropy, high refractive index and transparency in the visible range,
overcoming the century-long record set by rutile. Given these benefits, As2S3
opens a pathway towards next-generation nanophotonics as demonstrated by an
ultrathin true zero-order quarter-waveplate that combines classical and the
Fabry-Perot optical phase accumulations. Hence, our approach provides an
effective and easy-to-use method to find vdW materials with the utmost
anisotropic properties. | Aleksandr S. Slavich, Georgy A. Ermolaev, Mikhail K. Tatmyshevskiy, Adilet N. Toksumakov, Olga G. Matveeva, Dmitriy V. Grudinin, Arslan Mazitov, Konstantin V. Kravtsov, Alexander V. Syuy, Dmitry M. Tsymbarenko, Mikhail S. Mironov, Sergey M. Novikov, Ivan Kruglov, Davit A. Ghazaryan, Andrey A. Vyshnevyy, Aleksey V. Arsenin, Valentyn S. Volkov, Kostya S. Novoselov | 2023-09-05T06:58:01Z | http://arxiv.org/abs/2309.01989v1 | ## Exploring van der Waals materials with high anisotropy: geometrical and optical approaches
## Abstract
The emergence of van der Waals (vdW) materials resulted in the discovery of their giant optical, mechanical, and electronic anisotropic properties, immediately enabling countless novel phenomena and applications. Such success inspired an intensive search for the highest possible anisotropic properties among vdW materials. Furthermore, the identification of the most promising among the huge family of vdW materials is a challenging quest requiring innovative approaches. Here, we suggest an easy-to-use method for such a survey based on the crystallographic geometrical perspective of vdW materials followed by their optical characterization. Using our approach, we found As\({}_{2}\)S\({}_{3}\) as a highly anisotropic vdW material. It demonstrates rare giant in-plane optical anisotropy, high refractive index and transparency in the visible range, overcoming the century-long record set by rutile. Given these benefits, As\({}_{2}\)S\({}_{3}\) opens a pathway towards next-generation nanophotonics as demonstrated by an ultrathin true zero-order quarter-waveplate that combines classical and the Fabry-Perot optical phase accumulations. Hence, our approach provides an effective and easy-to-use method to find vdW materials with the utmost anisotropic properties.
### Main
Modern nanophotonics exploits a plethora of novel phenomena for advanced light manipulation. Among them, are bound states in the continuum[1], chirality-preserving reflection[2], virtual-reality imaging[3, 4], and others. The key parameter in these effects is the refractive index \(n\) since it governs the resonance wavelength \(\lambda_{\text{res}}\) (\(\lambda_{\text{res}}\sim 1/n\))[5] and the resonance quality factor \(Q\) (_e.g.,_\(Q\sim n^{2}\) for the Mie resonances)[6] and the optical power, which is proportional to \(n-1\). Hence, even a slight increase in the refractive index gives a tremendous advantage in optical applications[7]. However, the refractive index is fundamentally limited[7], with the best results provided by high-refractive index materials, such as Si[8], GaP[9], TiO\({}_{2}\)[10], InGaS\({}_{3}\)[11], and SnS\({}_{2}\)[12]. As a result, a new strategy of using anisotropic materials[13, 14, 15, 16, 17, 18, 19] has appeared in recent years due to the emergence of an additional degree of freedom for light manipulation, namely, optical anisotropy. Optical anisotropy revolutionizes integrated nanophotonics[20] by enabling subdiffractional light guiding[21, 22], polariton canalization[23, 24], Dyakonov surface waves[25], and the high integration density of waveguides[26, 27].
The most promising anisotropic materials are van der Waals (vdW) crystals[15, 28, 29, 30]. They are bulk counterparts of two-dimensional (2D) materials. Their 2D layered origin naturally leads to record values of optical anisotropy[15] because of the fundamental difference between in-plane covalent and out-of-plane vdW atomic bonds. However, it mostly results in out-of-plane birefringence, while some interesting effects require in-plane anisotropy[16, 31]. At present, rutile continues to hold the record for strongest in-plane optical anisotropy in the visible range despite the significant advances in materials science[18, 32], which is surprising considering that several decades have passed since its discovery[33]. It has inspired intensive research of low-symmetry vdW crystals[34, 35, 36, 37].
In this work, we provide a solution to this long-term challenge. Our consideration of lattice geometry reveals that arsenic trisulfide (As\({}_{2}\)S\({}_{3}\)) stands out among other low-symmetry crystals. Further study of this material by micro-transmittance and spectroscopic ellipsometry measurements in combination with quantum mechanical computations confirmed that it possesses a giant in-plane optical anisotropy in the visible range. They also show that As\({}_{2}\)S\({}_{3}\) belongs to transparent high-refractive index materials. Thus, As\({}_{2}\)S\({}_{3}\) offers a universal material platform for nanooptics that brings benefits of both giant optical anisotropy and high refractive index.
### Origins of giant optical anisotropy of van der Waals materials
Recent investigations[29, 38] of vdW materials' optical properties reveal that those constitute the next-generation high refractive index materials platform with about 80% larger polarizability compared to traditional photonic materials, such as Si, GaP, and TiO\({}_{2}\). Hence, the search for highly refractive materials in the visible range among vdW crystals is a natural next step. Nevertheless, it is a tedious task because there are more than 5000 vdW crystals[39], and a straightforward enumeration of options is unacceptably time-consuming. To reduce the search area and pick the most promising materials for nanophotonics, we particularly aim for high in-plane optical anisotropy. Still, the family of anisotropic vdW crystals is huge, which motivates us to identify features relevant to large optical anisotropy. This problem is challenging since optical anisotropy could result from numerous unrelated physical effects. Among them are preferential directions of excitons[15, 18, 40], atomic-scale modulations[17], quasi-one-dimensional structures[16, 41], different natures of atomic bonding[15], aligned interaction of dipole excitations around specific atoms[42], phonon resonances[14, 43], and many others. Obviously, one of the reasons for the strong optical anisotropy is the directional material resonances, such as excitons[15, 18, 40] and phonons[14, 43]. However, it is difficult to identify directional resonances or types of atomic bonding since it requires costly
quantum-mechanical simulations. Therefore we choose an alternative approach, and assume that excitations with very strong anisotropy can manifest themselves in or be caused by the geometrical anisotropy of an elementary crystal cell.
Next, we compare the crystal structure of the most representative in-plane anisotropic crystals in Figure 1a. Interestingly, two materials stand out, namely, Sr\({}_{9/8}\)TiS\({}_{3}\) and As\({}_{2}\)S\({}_{3}\). According to Figure 1a, they exhibit the largest "geometric anisotropy", that is, the ratio of in-plane lattice parameters. Indeed, a recent study[17] shows that Sr\({}_{9/8}\)TiS\({}_{3}\) has the largest optical anisotropy (\(\Delta n\sim 2.1\)) with zero losses, which coincides with our predictions. However, the optical bandgap of Sr\({}_{9/8}\)TiS\({}_{3}\) corresponds to the infrared spectral range. In contrast, the optical bandgap of As\({}_{2}\)S\({}_{3}\) is within the visible range with \(E_{\text{g}}\sim 2.7\) eV (\(\lambda_{\text{g}}\sim 460\) nm)[44, 45], and As\({}_{2}\)S\({}_{3}\) crystal (Figure 1b) demonstrates a similar to Sr\({}_{9/8}\)TiS\({}_{3}\) "geometric anisotropy". Consequently, we anticipate that As\({}_{2}\)S\({}_{3}\) will offer a giant optical anisotropy and, like other vdW materials with strong optical anisotropy[18], a high refractive index in the visible range.
### Crystal structure of van der Waals As\({}_{2}\)S\({}_{3}\)
As\({}_{2}\)S\({}_{3}\) is a yellow semiconducting crystal usually found in nature as the mineral orpiment[44]. Amorphous As\({}_{2}\)S\({}_{3}\) has already proved useful in such photonic applications as holography[45] and fibers[47]. At the same time, As\({}_{2}\)S\({}_{3}\) in vdW configuration (see Figure 1b and Supplementary Note 1 for As\({}_{2}\)S\({}_{3}\) characterization) appeared only recently in the research focus owing to the extraordinarily large in-plane mechanical anisotropy[48, 49]. It also shows that our approach based on lattice geometry consideration in Figure 1a could be used to search for other giant anisotropic properties beyond optical ones.
In light of the importance of lattice parameters, we commenced our study of As\({}_{2}\)S\({}_{3}\) with their refinement _via_ X-ray diffraction measurements (see Methods). The XRD imaging patterns in Figures 2a-c confirm the monoclinic structure of As\({}_{2}\)S\({}_{3}\) (see Figure 1b) with the following lattice parameters: \(a=4.2546(4)\) A, \(b=9.5775(10)\) A, \(c=11.4148(10)\) A, \(\alpha=90^{*}\), \(\beta=90.442(4)^{*}\), and \(\gamma=90^{*}\). Using these values of the parameters, we computed the bandstructure of As\({}_{2}\)S\({}_{3}\) (see Supplementary Note 2) from the first principles (see Methods). Of great interest are the bandstructure cuts along crystallographic axes,
Figure 1: **Anisotropic crystalline structure as an origin for giant optical anisotropy.****a**, The comparison of the lattice parameters of the representative anisotropic crystals and their bandgaps. \(a_{1}\) and \(a_{2}\) stand for the in-plane lattice parameters. **b**, Monoclinic crystal structure of As\({}_{2}\)S\({}_{3}\) along the crystallographic \(b\)-axis (top) and \(a\)-axis (bottom). The black dashed frame shows the unit cell.
presented in Figures 2d-f. First of all, we can notice a considerable difference in the dispersion curves, which clearly indicates significant anisotropic properties. Moreover, the bandstructure cuts for in-plane directions have a fundamental difference: along the \(a\)-axis, the bandstructure has a high dispersion, while it is flat along the \(c\)-axis. Therefore, we expect a stronger dielectric response for the \(c\)-axis than for the \(a\)-axis as flat bands lead to a large density of states and, as a result, a large refractive index[7, 50].
### In-plane optical anisotropy of van der Waals As\({}_{2}\)S\({}_{3}\)
In general, for monoclinic systems, anisotropic permittivity tensor has a diagonal form diag(\(n_{a}\), \(n_{b}\), \(n_{c}\)) in the crystallographic (\(a\), \(b\), \(c\)) basis, where \(n_{a}\), \(n_{b}\), and \(n_{c}\) are refractive indices along the corresponding crystallographic axes[51]. The problem with this description is a non-orthogonal (\(a\), \(b\), \(c\)) basis, which significantly complicates the determination and the use of monoclinic optical constants due to the impossibility of decoupling the contribution of \(n_{a}\), \(n_{b}\), and \(n_{c}\) into the optical response of the monoclinic crystal. Luckily, the monoclinic angle \(\theta\) of As\({}_{2}\)S\({}_{3}\) differs from 90\({}^{\circ}\) very slightly by just 0.442(4)\({}^{\circ}\), which allows us to treat As\({}_{2}\)S\({}_{3}\) as an orthorhombic crystal. In this approximation, we can separately probe As\({}_{2}\)S\({}_{3}\) optical components (\(n_{a}\), \(n_{b}\), and \(n_{c}\)) by orthogonal polarizations. For this purpose, we measured the polarized micro-transmittance of As\({}_{2}\)S\({}_{3}\) flakes exfoliated on Schott glass substrates (see Methods) and determined their crystallographic axes by polarized Raman spectroscopy (see Supplementary Note 3). The exemplified transmittance spectra maps for parallel- and cross- polarizations for the case of 345-nm-thick flake are plotted in Figures 3a-b. Note that we choose the transparency range (\(500-850\) nm) of As\({}_{2}\)S\({}_{3}\), which allows us to leverage Cauchy models (see Methods) for As\({}_{2}\)S\({}_{3}\) refractive indices[15]. Using this description, we fitted the experimental data (see Figures 3a-b), and calculated the corresponding spectra in Figures 3c-d. Calculations agree perfectly with the experiment (Figures 3a-b) and give us in-plane optical constants of As\({}_{2}\)S\({}_{3}\) presented in Figure 3e. However, micro-transmittance cannot probe the out-of-plane component
Figure 2: **As\({}_{2}\)S\({}_{3}\) anisotropic crystal structure, a comprehensive characterization. XRD patterns in reciprocal space along \(a\), \(c^{+}\)-\(b^{+}\), b, \(c^{-}\)-\(a^{+}\), and \(c\), \(b^{-}\)-\(a^{+}\) reciprocal planes. The electronic bandstructure cuts along \(d\), \(a\)-axis, \(e\), \(b\)-axis, and \(f\), \(c\)-axis. Orange and blue curves present conduction and valence bands, respectively.**
of the As\({}_{2}\)S\({}_{3}\) dielectric tensor. Therefore, we performed single-wavelength Mueller-Matrix spectroscopic ellipsometry and near-field studies (see Supplementary Notes 4-6) presented in Figure 3e to get the complete picture of the As\({}_{2}\)S\({}_{3}\) dielectric response. Additionally, we performed the first-principle calculations (see Methods) of the As\({}_{2}\)S\({}_{3}\) dielectric function (see Figure 3e), which coincides well with the measured values, especially for in-plane components, \(n_{a}\) and \(n_{b}\). Notably, even first-principle computations yield zero extinction coefficient \(k\) for the considered visible range (see Figure 3e), which confirms that As\({}_{2}\)S\({}_{3}\) is a lossless material, promising for visible nanophotonics
### As\({}_{2}\)S\({}_{3}\) in the family of high-refractive index materials
Of immediate interest are absolute values of As\({}_{2}\)S\({}_{3}\) optical constants (see Figure 3e and Figures 4a-b). As anticipated from the bandstructure calculations in Figures 2d-f, As\({}_{2}\)S\({}_{3}\) has the largest refractive index \(n_{c}\) along the crystallographic \(c\)-axis (see Figure 3e). Besides, the benchmarking of \(n_{c}\) with other crystals (Figure 4a) reveals that As\({}_{2}\)S\({}_{3}\) also belongs to the family of high refractive index materials and holds record values below 620 nm. If we extrapolate the Cauchy model for \(n_{c}\) to As\({}_{2}\)S\({}_{3}\) optical bandgap (\(E_{g}\sim 2.7\) eV), then we can clearly see that As\({}_{2}\)S\({}_{3}\) fits the correlation for vdW materials between the optical bandgap and its refractive index, as shown in Figure 4c.
Apart from a high refractive index, As\({}_{2}\)S\({}_{3}\) possesses giant in-plane optical anisotropy \(\Delta n\sim 0.4\) (see the inset in Figure 3e). This is 20 % greater than the birefringence of rutile and even outperforms the excitonic maximum anisotropy of CsPbBr\({}_{3}\) perovskite[18], as seen in Figure 4b. Therefore, Figure 4 demonstrates that As\({}_{2}\)S\({}_{3}\) combines both high refractive index and giant optical anisotropy, which are the most crucial factors in the state-of-the-art nanophotonics.
Figure 3: **Optical properties of van der Waals As\({}_{2}\)S\({}_{3}\).** Experimental polarized micro-transmittance of an As\({}_{2}\)S\({}_{3}\) flake for **a,** parallel- and **b,** cross-polarized configurations. Calculated polarized micro-transmittance of As\({}_{2}\)S\({}_{3}\) for **c,** parallel- and **d,** cross-polarized configurations. The dashed lines show the position of crystallographic axes \(\sigma\) (red line) and \(c\) (green line). **e,** Anisotropic optical constants of As\({}_{2}\)S\({}_{3}\). The inset shows the in-plane birefringence of As\({}_{2}\)S\({}_{3}\). Tabulated optical constants of As\({}_{2}\)S\({}_{3}\) are collected in Supplementary Note 7.
### Unconventional true zero-order quarter-waveplates based on van der Waals As\({}_{2}\)S\({}_{3}\)
The exceptional optical properties of As\({}_{2}\)S\({}_{3}\) (see Figure 4) not only make this crystal promising for next-generation nanophotonics but also change the operation principle of classical optical elements. In order to demonstrate this, we investigated the waveplate characteristics of As\({}_{2}\)S\({}_{3}\) flake. Traditionally, the retardance \(\delta\) between the fast- and slow-axes of an anisotropic waveplate is determined by the simple expression \(2\pi\Delta nt/\lambda\), where \(\lambda\) is the wavelength of light in vacuum, \(\Delta n\) defines the material's birefringence, and \(t\) denotes the waveplate's thickness. However, this formula only holds for small values of \(\Delta n\) since it disregards the light scattering at the faces of an anisotropic material. Due to the significant difference in the refractive index components along the principal directions of the waveplate, the phase accumulation due to the repeated Fabry-Perot reflections makes the full retardance deviate from the simplified formula (see Figure 5a and Supplementary Notes 8-9). As a result, the giant optical anisotropy enables quarter-wave retardance at multiple wavelengths and at a thickness that is lower than predicted by the simplified expression. For instance, our As\({}_{2}\)S\({}_{3}\) operates as a true zero-order quarter-waveplate at two wavelengths (512 and 559 nm), and at 559 nm its thickness is lower than expected from the simplified expression. In contrast, the simplified equation predicts only single-wavelength operation at 522 nm (see Figure 5b). Here, we utilized a micro-transmittance scheme (see Figure 1c) at 512 and 559 nm to check this concept. The resulting transmittance maps in Figures 5d-g confirm the predicted quarter-waveplate behavior for the selected wavelengths. Furthermore, Figures 5d-g demonstrate that the polarization angles responsible for a quarter-waveplate mode differ from 45\({}^{\circ}\) with respect to the principal optical axes, unlike classical quarter-waveplates with 45\({}^{\circ}\) orientation[31]. This effect also originates from the giant optical anisotropy, which results in unequal absolute values of transmission amplitudes, \(A\) and \(B\), for the light polarized along principal directions (see Figure 5a). It explains our observations from Figure 5d-g. Finally, we would like to note that our quarter-waveplate has an extremely small thickness of 345 nm compared to the previous record-holder of ferrocene-based true zero-order quarter-waveplate with 1071
Figure 4: **As\({}_{2}\)S\({}_{3}\) in a family of high refractive index and birefringent materials.****a**, Refractive index and **b**, the birefringence of van der Waals As\({}_{2}\)S\({}_{3}\) and conventional photonic materials in their transparency windows. **c**, Comparison of the maximum in-plane refractive index of van der Waals As\({}_{2}\)S\({}_{3}\) in the transparency window with the established highly refractive materials.
nm thickness operating at 636 nm wavelength[31]. Hence, our device has about threefold improvement in size, which brings us a step closer to miniaturized next-generation optical elements.
## Conclusion
In summary, we provide a new route for exploring anisotropic vdW materials by comparing their crystal structures and bandgaps. In combination with optical characterization, it becomes a convenient tool for a quick assessment of promising vdW materials for anisotropy-based applications[13, 14, 15, 16, 17, 18, 19]. Our approach reveals that As\({}_{2}\)S\({}_{3}\) is a perfect vdW material for visible range nanophotonics with the largest in-plane optical anisotropy, high refractive index, and zero optical losses. These properties enrich photonic applications with a variety of novel possibilities at the nanoscale. For example, we designed an ultrathin two-wavelength As\({}_{2}\)S\({}_{3}\)-based quarter-wave plate, which is three times more compact than the thinnest single-wavelength quarter-waveplate[31]. Furthermore, our anisotropy analysis can be used beyond vdW materials. For instance, our assay predicts large anisotropy for non-vdW Sr\({}_{98}\)TiS\({}_{3}\), which recently was discovered to have colossal optical anisotropy in the near-infrared range[17]. Besides, anisotropy of mechanical, electronic, optical, and other properties are closely connected, which allows using the proposed method for other topics. Indeed, As\({}_{2}\)S\({}_{3}\) also demonstrates high mechanical anisotropy[48, 49] in addition to the found giant optical anisotropy in our work. Therefore, our findings can lead to the rapid development of low-symmetry materials[34, 35, 36, 37] by establishing a milestone for their anisotropy evaluation.
## Author Contributions
A.S.S. and G.A.E. contributed equally to this work. G.A.E., A.V.A., V.S.V., and K.S.N. suggested and directed the project. A.S.S., G.A.E., M.K.T., D.V.G., A.V.S., D.M.T., M.S.M., S.M.N., and D.A.G. performed the measurements and analyzed the data. A.N.T. and D.A.G. prepared the samples. O.G.M., A.M., K.V.K., I.K., and A.A.V. provided theoretical support. A.S.S. and G.A.E. wrote the original manuscript. A.S.S., G.A.E., D.A.G., A.A.V., A.V.A., V.S.V., and K.S.N. reviewed and edited the paper. All authors contributed to the discussions and commented on the paper.
Figure 5: **True zero-order quarter-wave plates based on ultrathin van der Waals As\({}_{2}\)S\({}_{3}\).****a**, The concept of As\({}_{2}\)S\({}_{3}\) waveplate: a combination of “classical” phase accumulation and Fresnel contribution arising from giant optical anisotropy. **b**, The comparison of phase retardance between classical and As\({}_{2}\)S\({}_{3}\)-based true-zero order waveplates. **c**, Schematic representation of the experimental setup. Measured polarized transmittance countourplot at **d**, 512 nm and **e**, 559 nm. The dashed lines show the quarter-waveplate operation regime. The data are normalized to the maximum values for each wavelength of transmitted light throughout the figure. Transmittance calculations are based on the anisotropic dielectric function presented in Figure 3e at **f**, 512 nm and **g**, 559 nm.
## Competing Interests
The authors declare no competing interests.
### Methods
**Sample preparation.** Bulk synthetic As\({}_{2}\)S\({}_{3}\) crystals were purchased from 2d semiconductors (Scottsdale) and exfoliated on top of Si, Si/SiO\({}_{2}\), quartz and Schott glass substrates at a room temperature by commercial scotch tapes from Nitto Denko Corporation (Osaka, Japan). Prior to exfoliation, the corresponding substrates were subsequently cleaned in acetone, isopropanol alcohol, and deionized water, and then, subjected to oxygen plasma (O\({}_{2}\)) to remove the ambient adsorbates.
**Atomic-force microscopy characterization.** The thickness of As\({}_{2}\)S\({}_{3}\) flakes was accurately characterized by an atomic force microscope (NT-MDT Ntegra II) operated in contact mode at ambient conditions. AFM measurements were acquired using silicon tips (ETALON, HA_NC ScanSens) with a head curvature radius of < 10 nm, a spring constant of 3.5 N/m and a resonant frequency of 140 kHz. Gwyddion software was used for image processing and quantitative analysis.
**X-ray diffraction analysis.** X-ray diffraction analysis of As\({}_{2}\)S\({}_{3}\) single crystal was performed on a Bruker D8 QUEST diffractometer with a Photon III CMOS area detector using Mo K\(\alpha\) radiation (\(\lambda\) = 0.71073 A) focused by a multilayer Montel mirror. Full data set was collected at 293 K within \(\varphi\)- and \(\omega\)-scans applying sample-to-detector distance of 80 mm and 100 mm to improve the precision of refined unit cell parameters. Raw data were indexed with cell_now and integrated using SAINT from the SHELXTL PLUS package[52, 53]. Absorption correction was performed using a numerical method based on crystal shape as implemented in SADABS. Crystal structure was solved by direct methods and refined anisotropically with the full-matrix F2 least-squares technique using SHELXTL PLUS. Further details of the data collection and refinement parameters are summarized in Supplementary Table 1. Selected interatomic distances and bond angles are listed in Supplementary Table 2. It is worth noting that the crystal structure of monoclinic As\({}_{2}\)S\({}_{3}\) was previously reported[54, 55] in non-conventional unit cell setting, which can be transformed to conventional setting by \(\begin{pmatrix}0&0&1\\ 0&-1&0\\ 1&0&0\end{pmatrix}\) matrix. Unit cell parameters and atomic positions reported in present work were determined with higher precision (Supplementary Table 3). CSD reference number 2258216 contains supplementary crystallographic data for this paper. These data can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data request/cif.
**First-principle calculations.** Electronic bandstructure calculations were performed using the screened hybrid functional HSE06 with 25% of mixing as implemented in Vienna _ab initio_ simulation package (VASP) code[56, 57]. The core electrons are described with projector augmented wave (PAW) pseudopotentials treating the As 4s and 4p and the 5 3s and 3p electrons as valence. A kinetic energy cutoff for the plane-wave basis was set to 350 eV. To calculate bandstructure we generated a path in reciprocal space using Spglib and Seek-path and used a standardized primitive cell by conventions of Spglib. Optical properties of As\({}_{2}\)S\({}_{3}\) were calculated using HSE06 hybrid functional. For this we used f-centered k-points mesh sampling the Brillouin zone with a resolution of 2n\(\cdot\)0.05 A\({}^{-1}\). Optical properties were calculated within GW approximation on wavefunctions calculated using HSE06 hybrid functional using the VASP code. For this, we obtained ground-state one-electron wavefunctions from HSE06 and used them to start the GW routines. Finally, we calculated the imaginary and real parts of the frequency-dependent dielectric function within GW approximation.
**Angle-resolved micro-transmittance.** The spectroscopic transmittance was measured in the 500-900 nm spectral range on an optical upright microscope (Zeiss Axio Lab.A1) equipped with a halogen light source,
analyzer, polarizer, and grating spectrometer (Ocean Optics QE65000) coupled by optical fiber (Thorlabs M92102) with core diameter 200 \(\upmu\)m. The transmitted light was collected from a spot of <15 \(\upmu\)m using an objective with \(\times\)50 magnification and numerical aperture N.A. = 0.8 (Objective "N-Achroplan" 50x/0.8 Pol M27). The detailed description of micro-transmittance setup can be found in publication[58].
**Imaging Mueller matrix ellipsometry.** A commercial Accention nanofilm_ep4 ellipsometer (Accurion GmbH) was used to measure 11 elements of the Mueller matrix (m\({}_{12}\), m\({}_{13}\), m\({}_{14}\), m\({}_{21}\), m\({}_{22}\), m\({}_{23}\), m\({}_{24}\), m\({}_{31}\), m\({}_{32}\), m\({}_{33}\), m\({}_{34}\)). The measurements were carried with a 5deg sample rotation angle step, 550 nm incident light wavelength and 50deg incident angle in rotation compensator mode.
**Scanning near-field optical microscopy.** Near-field imaging was performed using a commercially available scattering type Scanning Near Field Optical Microscope (neaSNOM), which allows to simultaneously scan the topography of the sample along with amplitude and phase of the near-field signal. For the illumination of the sample we used tunable Ti:Sapphire laser (Avesta) with a wavelength in the spectral range of 700-1000 nm. The measurements were conducted using reflection mode. As a scattering probe we used a platinum/iridium5 (Pttf\({}_{5}\)) coated AFM tip (ARROW-NCPt-50, Nanoworld) with a resonant frequency of about 275 kHz and a tapping amplitude of 100 nm.
## Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.
|
2303.02203 | X$^3$KD: Knowledge Distillation Across Modalities, Tasks and Stages for
Multi-Camera 3D Object Detection | Recent advances in 3D object detection (3DOD) have obtained remarkably strong
results for LiDAR-based models. In contrast, surround-view 3DOD models based on
multiple camera images underperform due to the necessary view transformation of
features from perspective view (PV) to a 3D world representation which is
ambiguous due to missing depth information. This paper introduces X$^3$KD, a
comprehensive knowledge distillation framework across different modalities,
tasks, and stages for multi-camera 3DOD. Specifically, we propose cross-task
distillation from an instance segmentation teacher (X-IS) in the PV feature
extraction stage providing supervision without ambiguous error backpropagation
through the view transformation. After the transformation, we apply cross-modal
feature distillation (X-FD) and adversarial training (X-AT) to improve the 3D
world representation of multi-camera features through the information contained
in a LiDAR-based 3DOD teacher. Finally, we also employ this teacher for
cross-modal output distillation (X-OD), providing dense supervision at the
prediction stage. We perform extensive ablations of knowledge distillation at
different stages of multi-camera 3DOD. Our final X$^3$KD model outperforms
previous state-of-the-art approaches on the nuScenes and Waymo datasets and
generalizes to RADAR-based 3DOD. Qualitative results video at
https://youtu.be/1do9DPFmr38. | Marvin Klingner, Shubhankar Borse, Varun Ravi Kumar, Behnaz Rezaei, Venkatraman Narayanan, Senthil Yogamani, Fatih Porikli | 2023-03-03T20:29:49Z | http://arxiv.org/abs/2303.02203v1 | # X\({}^{3}\)KD: Knowledge Distillation Across Modalities, Tasks and Stages
###### Abstract
Recent advances in 3D object detection (3DOD) have obtained remarkably strong results for LiDAR-based models. In contrast, surround-view 3DOD models based on multiple camera images underperform due to the necessary view transformation of features from perspective view (PV) to a 3D world representation which is ambiguous due to missing depth information. This paper introduces X\({}^{3}\)KD, a comprehensive knowledge distillation framework across different modalities, tasks, and stages for multi-camera 3DOD. Specifically, we propose cross-task distillation from an instance segmentation teacher (X-IS) in the PV feature extraction stage providing supervision without ambiguous error backpropagation through the view transformation. After the transformation, we apply cross-modal feature distillation (X-FD) and adversarial training (X-AT) to improve the 3D world representation of multi-camera features through the information contained in a LiDAR-based 3DOD teacher. Finally, we also employ this teacher for cross-modal output distillation (X-OD), providing dense supervision at the prediction stage. We perform extensive ablations of knowledge distillation at different stages of multi-camera 3DOD. Our final X\({}^{3}\)KD model outperforms previous state-of-the-art approaches on the nuScenes and Waymo datasets and generalizes to RADAR-based 3DOD. Qualitative results video at [https://youtu.be/1do9DPFmr38](https://youtu.be/1do9DPFmr38).
## 1 Introduction
3D object detection (3DOD) is an essential task in various real-world computer vision applications, especially autonomous driving. Current 3DOD approaches can be categorized by their utilized input modalities,, camera images [24, 36, 42] or LiDAR point clouds [21, 51, 55], which dictates the necessary sensor suite during inference. Recently, there has been significant interest in surround-view multi-camera 3DOD, aiming to leverage multiple low-cost monocular cameras, which are conveniently embedded in current vehicle designs in contrast to expensive LiDAR scanners. Existing solutions to 3DOD are mainly based on extracting a unified representation from multiple cameras [24, 26, 33, 37] such as the bird's-eye view (BEV) grid. However, predicting 3D bounding boxes from 2D perspective-view (PV) images involves an ambiguous 2D to 3D transformation without depth information, which leads to lower performance compared to LiDAR-based 3DOD [24, 26, 51, 1].
While LiDAR scanners may not be available in commercially deployed vehicle fleets, they are typically available in training data collection vehicles to facilitate 3D annotation. Therefore, LiDAR data is privile
Figure 1: While previous approaches considered multi-camera 3DOD in a standalone fashion or with depth supervision, **we propose X\({}^{3}\)KD, a knowledge distillation framework using cross-modal and cross-task information** by distilling information from LiDAR-based 3DOD and instance segmentation teachers into different stages (marked by red arrows) of the multi-camera 3DOD.
during training but not during inference. The recently introduced BEVDepth [24] approach pioneers using accurate 3D information from LiDAR data at training time to improve multi-camera 3DOD, see Fig. 1 (top part). Specifically, it proposed an improved Lift-Splat-Shoot PV-to-BEV transform (LSS++) and depth supervision (DS) by projected LiDAR points, which we analyze in Table 1. We observe that the LSS++ architecture yields significant improvements, though depth supervision seems to have less effect. This motivates us to find additional types of supervision to transfer accurate 3D information from LiDAR point clouds to multi-camera 3DOD. To this end, we propose cross-modal knowledge distillation (KD) to not only use LiDAR _data_ but a high-performing LiDAR-based 3DOD _model_, as in Fig. 1 (middle part). To provide an overview of the effectiveness of cross-modal KD at various multi-camera 3DOD network stages, we present three distillation techniques: feature distillation (X-FD) and adversarial training (X-AT) to improve the feature representation by the intermediate information contained in the LiDAR 3DOD model as well as output distillation (X-OD) to enhance output-stage supervision.
For optimal camera-based 3DOD, extracting useful PV features before the view transformation to BEV is equally essential. However, gradient-based optimization through an ambiguous view transformation can induce non-optimal supervision signals. Recent work proposes pre-training the PV feature extractor on instance segmentation to improve the extracted features [45]. Nevertheless, neural networks are subject to catastrophic forgetting [19] such that knowledge from pre-training will continuously degrade if not retained by supervision. Therefore, we propose cross-task instance segmentation distillation (X-IS) from a pre-trained instance segmentation teacher into a multi-camera 3DOD model, see Fig. 1 (bottom part). As shown in Table 1, our X\({}^{3}\)KD framework significantly improves upon BEVDepth without additional complexity during inference.
To summarize, our main contributions are as follows:
* We propose X\({}^{3}\)KD, a KD framework across modalities, tasks, and stages for multi-camera 3DOD.
* Specifically, we introduce cross-modal KD from a strong LiDAR-based 3DOD teacher to the multi-camera 3DOD student, which is applied at multiple network stages in bird's eye view, _i.e_., feature-stage (X-FD and X-AT) and output-stage (X-OD).
* Further, we present cross-task instance segmentation distillation (X-IS) at the PV feature extraction stage.
* X\({}^{3}\)KD outperforms previous approaches for multi-camera 3DOD on the nuScenes and Waymo datasets.
* We transfer X\({}^{3}\)KD to RADAR-based 3DOD and train X\({}^{3}\)KD only through KD without using ground truth.
* Our extensive ablation studies on nuScenes and Waymo provide a comprehensive evaluation of KD at different network stages for multi-camera 3DOD.
## 2 Related Work
**Multi-View Camera-Based 3D Object Detection**: Current multi-view 3D object detectors can be divided into two main streams: First, DETR3D and succeeding works [26, 28, 42, 29, 54] project a sparse set of learnable 3D queries/priors onto 2D image features with subsequent sampling and an end-to-end 3D bounding box regression. Second, LSS and following works [14, 24, 36] employ a view transformation consisting of a depth prediction, a point cloud reconstruction, and a voxel pooling to project points to BEV. 3D bounding boxes are predicted from these BEV features. While such works focus on improving the network architecture and view transformation, we focus on better model optimization. In this direction, M\({}^{2}\)BEV [45] proposed instance segmentation pre-training of the PV feature extraction. We propose cross-task instance segmentation distillation to retain this knowledge during 3DOD training.
Most current state-of-the-art works focus on incorporating temporal information either through different kinds of feature-level aggregation [13, 24, 29, 26] or by improving depth estimation by temporal stereo approaches [23, 43]. While the usual setting considers data from 2 time steps, recently proposed SOLOFusion [34] separately models long-range and short-range temporal dependencies in input data from 16 time steps. Our work focuses on a different direction, _i.e_., we try to optimally exploit the information contained in LiDAR point clouds. In this direction, BEVDepth [24] and succeeding works [23, 34] supervise the depth estimation with projected LiDAR points. We explore this path further by using cross-modal knowledge distillation (KD) from a LiDAR-based 3DOD teacher.
**Multi-Modal 3D Object Detection**: Recently, there has been a trend to fuse different sensor modalities, especially camera and LiDAR, with the idea of combining modality-specific useful information, hence improving the final 3DOD performance [18, 49, 25, 31, 1, 1]. Existing 3DOD methods mostly perform multi-modal fusion at one of the three stages: First, various approaches [40, 41, 46] propose to decorate/augment the raw LiDAR points with image features. Second, intermediate feature fusion of the modalities in a shared representation space, such as the BEV space, has been explored [4, 18, 25, 31, 49]. Third,
\begin{table}
\begin{tabular}{l|c c|c|c c} _Model_ & _LSS++_ & _DS_ & GFLOPS & _mAP_\({}^{\dagger}\) & _NDS_\({}^{\dagger}\) \\ \hline & ✗ & ✗ & 298 & 32.4 & 44.9 \\ BEVDepth\({}^{\dagger}\) & ✗ & ✓ & 298 & 33.1 & 44.9 \\ & ✓ & ✗ & 316 & 34.9 & 47.0 \\ & ✓ & ✓ & 316 & 35.9 & 47.2 \\ \hline \hline \multicolumn{2}{l|}{**X\({}^{3}\)KD** (Ours)} & ✓ & ✓ & 316 & **39.0** & **50.5** \\ \end{tabular}
\end{table}
Table 1: **Analysis of BEVDepth\({}^{\dagger}\)** (re-implementation of [24]): We compare the architectural improvement of a larger Lift-Splat-Shoot (LSS++) transform to using depth supervision (DS).
proposal-based fusion methods [1, 3, 20] keep the feature extraction of different modalities independent and aggregate multi-modal features via proposals or queries in the 3DOD prediction head. While these approaches require both sensors to be available during inference, our X\({}^{3}\)KD approach requires only camera sensors during inference. We also apply our KD approach to less frequently explored RADAR- and camera-RADAR fusion-based models.
**Knowledge Distillation for 3D Object Detection:** Employing the KD technique from [11] some recent works have explored KD for 3DOD [5, 27, 44, 52]. Most works focus on LiDAR-based 3DOD settings and propose methods to improve performance or efficiency [48, 52] or solve problems that are specific to point clouds, such as KD into sparser point clouds [44, 53]. Some initial works have also proposed concepts for cross-modal KD in 3D semantic segmentation [30] or simple single or stereo camera-based 3DOD models [5, 9, 12, 27, 56]. However, current research focus has shifted to more general multi-camera settings, where up to our knowledge, we are the first to investigate KD across modalities, tasks, and stages comprehensively.
## 3 Proposed X\({}^{3}\)KD Framework
We first define our considered problem and baseline in Sec. 3.1. Next, we give an overview on X\({}^{3}\)KD in Sec. 3.2 presenting specific advancements in Secs. 3.3 and 3.4.
### Problem Formulation and Baseline Method
**Problem Definition**: We aim at developing a 3DOD model with camera images \(\mathbf{x}\in\mathbb{R}^{N^{\text{cam}}\times H^{\text{cam}}\times N^{\text{cam}} \times 3}\) as input, where \(N^{\text{cam}}\), \(H^{\text{cam}}\), and \(W^{\text{cam}}\) represent the number of images, image height, and image width, respectively, and \(N^{\text{bbox}}\) 3D bounding boxes \(\mathbf{\overline{b}}=\big{\{}\left(\mathbf{\overline{b}}_{n}^{\text{reg}},\mathbf{ \overline{b}}_{n}^{\text{cls}}\right)n\in\big{\{}1,\ldots,N^{\text{bbox}} \big{\}}\big{\}}\) as output. Each bounding box is represented by regression parameters \(\mathbf{\overline{b}}_{n}^{\text{reg}}\in\mathbb{R}^{9}\) (three, three, two, and one for the center, spatial extent, velocity, and yaw angle, respectively), and a classification label \(\mathbf{\overline{b}}_{n}^{\text{cls}}\in\mathcal{S}\) from the set of \(|\mathcal{S}|\) classes \(\mathcal{S}=\big{\{}1,\ldots,|\mathcal{S}|\big{\}}\). During training, not only are camera images available, but we can also make use of a 3D LiDAR point cloud \(\mathbf{l}\in\mathbb{R}^{P\times 5}\) with \(P\) points, each one containing the 3D position, intensity, and ring index. The point cloud \(\mathbf{l}\) is not available during inference.
**Baseline Model**: We build upon the recently published state-of-the-art method BEVDepth [24], whose setup is depicted in the blue box of Fig. 2. First, all images are processed by a PV feature extractor, yielding features \(\mathbf{f}^{\text{PV}}\in\mathbb{R}^{N^{\text{cam}}\times H^{\text{PV}}\times W^{ \text{PV}}\times C^{\text{PV}}}\) in PV with spatial extent \(H^{\text{PV}}\times W^{\text{PV}}\) and number of channels \(C^{\text{PV}}\). Afterwards, the features are passed through the Lift-Splat-Shoot transform [36], which predicts discretized depth values \(\hat{\mathbf{d}}\), transforms pixels corresponding to \(\mathbf{f}^{\text{PV}}\) into a point cloud representation and obtains BEV features \(\mathbf{f}^{\text{BEV}}\in\mathbb{R}^{H^{\text{BEV}}\times W^{\text{BEV}}\times C ^{\text{BEV}}}\) via voxel pooling. BEV features are further processed by an encoder-decoder network as in [24], yielding refined features \(\mathbf{f}^{\text{REF}}\in\mathbb{R}^{H^{\text{BEV}}\times W^{\text{BEV}}\times C^ {\text{BEV}}}\). Finally, the Center-Point prediction head [51], predicts dense object probability scores \(\hat{\mathbf{b}}^{\text{cls}}\in\mathbb{I}^{H^{\text{BEV}}\times W^{\text{BEV}} \times|\mathcal{S}|}\) for each class as well as corresponding regression parameters \(\hat{\mathbf{b}}^{\text{reg}}\in\mathbb{R}^{H^{\text{HV}}\times W^{\text{BEV}} \times 9}\). The final bounding box predictions \(\mathbf{\overline{b}}\) are generated by non-learned decoding of these dense representations [51].
**Baseline Training**: The baseline is trained by optimizing the 3D bounding box losses \(\mathcal{L}^{\text{CPoint}}\) from Center-point [51] as well as the depth loss \(\mathcal{L}^{\text{depth}}\) from [24], yielding
\[\mathcal{L}^{\text{GT}}=\mathcal{L}^{\text{depth}}(\hat{\mathbf{d}},\mathbf{d})+ \mathcal{L}^{\text{CPoint}}(\hat{\mathbf{b}}^{\text{cls}},\hat{\mathbf{b}}^{\text{reg}}, \mathbf{b}), \tag{1}\]
where \(\mathbf{d}\) is the depth ground truth generated from projected LiDAR points and \(\mathbf{b}\) is the set of ground truth bounding boxes. For more details, we refer to the supplementary.
### X\({}^{3}\)KD Overview
Our X\({}^{3}\)KD framework (Fig. 2) improves the performance of a multi-camera 3DOD model without introducing additional complexity during inference. Hence, our model's inference setup is equal to the one of our baseline. During training, however, we explore multiple knowledge distillation (KD) strategies across modalities, tasks, and stages.
**X\({}^{3}\)KD Loss**: First, we employ a pre-trained LiDAR-based 3DOD model, as shown in Fig. 2 (top part). We propose three losses for distilling knowledge across different stages into the camera-based 3DOD: An output-stage distillation (X-OD) loss \(\mathcal{L}^{\text{X-OD}}\) between the outputs of the camera and LiDAR models, a feature-stage distillation (X-FD) scheme and a corresponding loss \(\mathcal{L}^{\text{X-FD}}\) to guide the focus of the BEV features after the view transformation, and a feature-stage adversarial training (X-AT) with a loss \(\mathcal{L}^{\text{X-AT}}\) between the camera and LiDAR model features to encourage their feature similarity. Second, we use an instance segmentation network, cf. Fig. 2 (bottom part). We propose cross-task instance segmentation distillation (X-IS) by imposing a loss \(\mathcal{L}^{\text{X-IS}}\) between the output of an additional PV instance segmentation head and teacher-generated pseudo labels. Our total loss for X\({}^{3}\)KD is then given by:
\[\mathcal{L}^{\text{X}^{3}\text{KD}}\!\!=\!\!\sum_{i\in\mathcal{I}}\lambda^{i} \mathcal{L}^{i},\mathcal{I}\!=\!\{\text{GT},\text{X-OD},\text{X-FD},\text{X-AT}, \text{X-IS}\} \tag{2}\]
### Cross-modal Knowledge Distillation
The current superiority of LiDAR-based 3DOD over multi-camera 3DOD can be attributed to the ambiguous view transformation in multi-camera models, which may place features at the wrong position in the final representation (_e.g_., a BEV grid). Meanwhile, LiDAR-based models operate on a 3D point cloud, which can easily be projected onto any view representation. Thereby, the extracted features preserve 3D information. Our cross-modal KD com
ponents transfer this knowledge to the multi-camera 3DOD model across different network stages, cf. Fig. 2 (top part).
**LiDAR-based 3DOD Model Architecture**: Our LiDAR-based 3DOD model is mainly based on Center-Point [51]. First, the point cloud \(\mathbf{l}\in\mathbb{R}^{P\times 5}\) is processed by the Sparse Encoder from SECOND [47], yielding 3D sparse features \(\mathbf{\tilde{f}}^{\rm 3D}\in\mathbb{R}^{H^{\rm BW}\times W^{\rm BW}\times \tilde{D}^{\rm 3D}\times\tilde{C}^{\rm 3D}}\) with volumetric extent \(H^{\rm BEV}\times W^{\rm BEV}\times\tilde{D}^{\rm 3D}\) and number of channels \(\tilde{C}^{\rm 3D}\). Then, the features are projected onto the same BEV plane as in the camera-based 3DOD model, yielding BEV features \(\mathbf{\tilde{f}}^{\rm BEV}\in\mathbb{R}^{H^{\rm BEV}\times W^{\rm BEV}\times \tilde{C}^{\rm BEV}}\) with \(\tilde{C}^{\rm BEV}=\tilde{D}^{\rm 3D}\cdot\tilde{C}^{\rm 3D}\). These are further processed by an encoder-decoder network, yielding refined BEV features \(\mathbf{\tilde{f}}^{\rm REF}\in\mathbb{R}^{H^{\rm BEV}\times W^{\rm BEV}\times \tilde{C}^{\rm REF}}\). Finally, the features are passed through a prediction head, yielding probability score maps \(\mathbf{\tilde{b}}^{\rm els}\in\mathbb{I}^{H^{\rm BEV}\times W^{\rm BEV}\times| \mathcal{S}|}\) and regression maps \(\mathbf{\tilde{b}}^{\rm reg}\in\mathbb{R}^{H^{\rm BEV}\times W^{\rm BEV}\times 9}\) analogous to the outputs \(\mathbf{\hat{b}}^{\rm cls}\) and \(\mathbf{\hat{b}}^{\rm reg}\) of the multi-camera 3DOD model.
**Output-stage Distillation (X-OD)**: Following many approaches in KD [11, 7, 50], we distill knowledge at the output stage by imposing losses between the teacher's outputs \(\mathbf{\hat{b}}^{\rm cls}\) and \(\mathbf{\tilde{b}}^{\rm reg}\) and the student's outputs \(\mathbf{\hat{b}}^{\rm els}\) and \(\mathbf{\hat{b}}^{\rm reg}\). Specifically, we impose a Gaussian focal loss \(\mathcal{L}^{\rm GFocal}\)[22] between \(\mathbf{\hat{b}}^{\rm cls}\) and \(\mathbf{\hat{b}}^{\rm rel}\) to put more weight on rare classes and compensate for the class imbalance. As this loss only considers pseudo labels as a positive sample if they are exactly \(1\), we select high-confidence teacher output probabilities \(\mathbf{\hat{b}}^{\rm cls}\), _i.e._, probability values over a threshold \(\alpha^{\rm 3D-bbox}\), and set them to \(1\). Further, the regression output of the student \(\mathbf{\hat{b}}^{\rm reg}\) is supervised by the corresponding output \(\mathbf{\tilde{b}}^{\rm reg}\) of the teacher by imposing a Smooth L1 loss \(\mathcal{L}^{\rm SmoothL1}\)[8]. Finally, we propose to weigh the regression loss by the teacher's pixel-wise averaged output probabilities \(\langle\mathbf{\hat{b}}^{\rm cls}_{s}\rangle=\frac{1}{|\mathcal{S}|}\sum_{s\in \mathcal{S}}\mathbf{\hat{b}}^{\rm cls}_{s}\in\mathbb{R}^{H^{\rm BEV}\times W^{\rm BEV}}\) to weigh regions which likely contain objects higher than the background. Overall, X-OD is defined as:
\[\mathcal{L}^{\rm X\text{-OD}}\big{(}\mathbf{\hat{b}},\mathbf{\tilde{b}}\big{)}\!=\! \mathcal{L}^{\rm GFocal}\big{(}\mathbf{\hat{b}}^{\rm cls},\mathbf{\hat{b}}^{\rm cls} \big{)}\!+\!\mathcal{L}^{\rm SmoothL1}\big{(}\mathbf{\hat{b}}^{\rm reg},\mathbf{\hat{b }}^{\rm reg}\big{)} \tag{3}\]
**Feature-stage Distillation (X-FD)**: Our X-FD compo
Figure 2: **We present X\({}^{3}\)KD, a knowledge distillation (KD) framework for multi-camera 3DOD. We employ an inference setup (middle blue box) relying only on multi-camera image input (LiDAR point cloud in the output is just shown for visualization). During training, we apply KD across several network stages (red arrows originating from the blue box): In perspective-view (PV) feature extraction, we apply cross-task instance segmentation distillation (X-IS) from an instance segmentation teacher (yellow box). In the bird’s eye view (BEV), we apply cross-modal feature distillation (X-FD), adversarial training (X-AT), and output distillation (X-OD) from a LiDAR-based 3DOD teacher (green box). X\({}^{3}\)KD significantly enhances the multi-camera 3DOD without inducing extra complexity during inference.**
ent exploits the precise and sparse nature of features extracted from LiDAR point clouds, which precisely encode locations of relevant objects for 3DOD. Thereby, the mean sparse feature activation \(\hat{\mathbf{h}}\), cf. Fig. 3 (right), provides a good initial estimate for the potential location of objects. While it would be natural to impose similarity losses between BEV features from the camera and LiDAR models, these features are structurally quite different (cf. Fig. 3), such that our attempts to impose such losses did lead to unstable training behavior. Therefore, we add a small BEV decoder to the multi-camera model, which outputs a prediction \(\hat{\mathbf{h}}\) for the mean sparse feature activations from the LiDAR teacher \(\tilde{\mathbf{h}}\). The X-FD loss \(\mathcal{L}^{\text{X-FD}}\) is then given as:
\[\mathcal{L}^{\text{X-FD}}=\mathrm{L}1\big{(}\hat{\mathbf{h}},\tilde{\mathbf{h}}\big{)} \tag{4}\]
**Feature-stage Adversarial Training (X-AT)**: We further propose X-AT to encourage a more global feature similarity between the refined features \(\mathbf{f}^{\text{REF}}\) and \(\mathbf{\tilde{f}}^{\text{REF}}\) from both modalities in BEV space. Due to the structural dissimilarity of features from both modalities directly after the BEV projection (Fig. 3), we apply the adversarial training on the refined features \(\tilde{\mathbf{f}}^{\text{REF}}\) and \(\tilde{\mathbf{f}}^{\text{REF}}\). We pass these cross-modal features through a gradient reversal layer and a patch-based discriminator network [16], which outputs two modality-specific probabilities. The discriminator is optimized to classify the features by modality using a binary cross-entropy loss \(\mathcal{L}^{\text{X-AT}}\) between the output probabilities \(\hat{\mathbf{s}}\) and the ground truth modality labels \(\mathbf{s}\):
\[\mathcal{L}^{\text{X-AT}}=\mathrm{BCE}\left(\hat{\mathbf{s}},\mathbf{s}\right) \tag{5}\]
We then encourage modality-agnostic features in the multi-camera 3DOD model through gradient reversal.
### Cross-task Knowledge Distillation
Learning a good feature representation in PV is difficult when all supervision signals are backpropagated through an ambiguous view transformation. As a possible solution, M\({}^{2}\)BEV [45] proposes instance segmentation (IS) pre-training. However, deep neural networks exhibit catastrophic forgetting such that this initial knowledge is not necessarily preserved during 3DOD training. Therefore, we propose cross-task instance segmentation distillation (X-IS) to preserve the knowledge contained in the PV features continuously. Specifically, we use the outputs of a pre-trained instance segmentation network as pseudo labels to optimize an additional PV instance segmentation head, cf. Fig. 2.
**Pseudo Label Generation**: In this work, we use the well-established Mask R-CNN architecture [10] as a teacher; see Fig. 2 (bottom left). We use its original architecture, consisting of a feature extractor, a feature pyramid network (FPN), a region proposal network (RPN), and a region of interest (ROI) head, including a mask branch. As output, we obtain \(N^{\text{IS}}\) bounding boxes \(\tilde{\mathbf{y}}=\big{\{}\big{(}\mathbf{g}_{n}^{\text{bbox}},\tilde{y}_{n}^{\text{ cls}},\tilde{y}_{n}^{\text{score}}\big{)},n\in\big{\{}1,\dots,N^{\text{IS}}\big{\}} \big{\}}\) with four parameters for bounding box center and spatial extent \(\tilde{\mathbf{y}}_{n}^{\text{bbox}}\in\mathbb{R}^{4}\), a classification result \(\tilde{y}_{n}^{\text{cls}}\in\mathcal{S}^{\text{IS}}\) from the set of IS classes \(\mathcal{S}^{\text{IS}}\), and an objectness score \(\tilde{y}_{n}^{\text{score}}\in\mathbb{I}\) with \(\mathbb{I}=[0,1]\). Additionally, we obtain corresponding object masks \(\tilde{\mathbf{m}}=\big{\{}\tilde{\mathbf{m}}_{n},n\in\big{\{}1,\dots,N^{\text{IS}} \big{\}}\big{\}}\) with single masks \(\tilde{\mathbf{m}}_{n}\in\{0,1\}^{H^{\text{mask}}_{n}\times W^{\text{mask}}_{n}}\) and spatial resolution \(H^{\text{mask}}_{n}\times W^{\text{mask}}_{n}\). We select all samples with a score \(\tilde{y}_{n}^{\text{score}}>\alpha^{\text{2D-box}}\) as pseudo labels.
**X-IS Loss Computation**: The teacher-generated pseudo labels are used to supervise an additional PV instance segmentation head, cf. Fig. 2 (bottom right), which uses the same RPN and ROI head architectures as the teacher. The RPN head outputs region proposals \(\hat{\mathbf{a}}=\big{(}\hat{\mathbf{a}}^{\text{cls}},\hat{\mathbf{a}}^{\text{reg}}\big{)}\) with foreground/background scores \(\hat{\mathbf{a}}^{\text{cls}}\in\mathbb{I}^{H^{\text{PX}}\times W^{\text{PX}}\times 2K}\) and regression parameters \(\hat{\mathbf{a}}^{\text{reg}}\in\mathbb{R}^{H^{\text{PX}}\times W^{\text{PX}}\times 4K}\) relative to each of the \(K\) anchors. Our RPN loss \(\mathcal{L}^{\text{rpn}}\) is then comprised of an assignment strategy between pseudo GT and PV head outputs as detailed in [38] and subsequent application of BCE and L1 differences for optimizing \(\hat{\mathbf{a}}^{\text{cls}}\) and \(\hat{\mathbf{a}}^{\text{reg}}\), respectively. The \(N^{\text{RPN}}\) region proposals with the highest foreground scores are subsequently passed through the ROI head, which outputs refined bounding boxes \(\hat{\mathbf{y}}=\big{\{}\big{(}\hat{\mathbf{y}}_{n}^{\text{bbox}},\hat{y}_{n}^{\text{ cls}}\big{)},n\in\big{\{}1,\dots,N^{\text{RPN}}\big{\}}\big{\}}\) with class probabilities \(\hat{\mathbf{y}}_{n}^{\text{cls}}\in\mathbb{I}^{|\mathcal{S}^{\text{IS}}|}\), four bounding box regression parameters \(\hat{\mathbf{y}}_{n}^{\text{bbox}}\in\mathbb{R}^{4}\) as well as class-specific mask probabilities \(\hat{\mathbf{m}}=\big{\{}\hat{\mathbf{m}}_{n},n\in\big{\{}1,\dots,N^{\text{IS}}\big{\}} \big{\}}\) with single masks \(\hat{\mathbf{m}}_{n}\in\mathbb{I}^{H^{\text{mask}}\times W^{\text{mask}}\times| \mathcal{S}^{\text{IS}}|}\). Our bounding box loss \(\mathcal{L}^{\text{bbox}}\) is comprised of an assignment strategy between ground truth \(\tilde{\mathbf{y}}\) and prediction \(\tilde{\mathbf{y}}\) and subsequent application of L1 difference between \(\hat{\mathbf{y}}_{n}^{\text{bbox}}\) and \(\tilde{\mathbf{y}}_{n}^{\text{bbox}}\) as well as cross-entropy (CE) difference between \(\hat{\mathbf{y}}_{n}^{\text{cls}}\) and one-hot encoded \(\tilde{\mathbf{y}}_{n}^{\text{cls}}\). For computing the mask loss \(\mathcal{L}^{\text{mask}}\), we apply a binary cross entropy (BCE) difference between ground truth \(\tilde{\mathbf{m}}\) and prediction \(\hat{\mathbf{m}}\), selecting only the output corresponding to the ground truth mask's class. More details can be found in [10]. Overall, our X-IS loss \(\mathcal{L}^{\text{X-IS}}\) can be written as:
\[\mathcal{L}^{\text{X-IS}}=\mathcal{L}^{\text{rpn}}\left(\hat{\mathbf{a}},\tilde{\mathbf{y} }\right)+\mathcal{L}^{\text{bbox}}\left(\hat{\mathbf{y}},\tilde{\mathbf{y}}\right)+ \mathcal{L}^{\text{mask}}\left(\hat{\mathbf{m}},\tilde{\mathbf{m}}\right). \tag{6}\]
## 4 Experiments
We first provide our experimental setup (Sec. 4.1) and a state-of-the-art comparison (Sec. 4.2). Next, we verify and analyze our method's components in Secs. 4.3 and 4.4. Last, we evaluate RADAR-based models (Sec. 4.5).
Figure 3: **Mean feature activations** from the camera-based student after the view transformation (left) and the LiDAR-based teacher (right) exhibit structural dissimilarity.
### Experimental Setup
X\({}^{3}\)KD is implemented using mmdetection3d [6] and PyTorch [35] libraries and trained on 4 NVIDIA A100 GPUs.1 Here, we describe our main setup on nuScenes while more details are provided in the supplementary.
Footnote 1: We use mmdetection3d v1.0, Python 3.8, PyTorch 1.11, CUDA 11.3
**Datasets**: Similar to most recent works [1, 23, 24, 26, 51], we evaluate on the nuScenes and Waymo benchmark datasets. The nuScenes dataset [2] contains 28K, 6K, and 6K samples for training, validation, and test, respectively. We use data from a LiDAR sensor and 6 cameras with bounding box annotations for 10 classes. For the Waymo dataset [39], we use the data from a LiDAR sensor and 5 cameras with annotations for cars, pedestrians, and cyclists. It provides 230K annotated frames from 798, 202, and 150 sequences for training, validation, and test, respectively.
**Evaluation Metrics**: For nuScenes, we employ the officially defined _mAP_ and _NDS_ metrics. The _NDS_ metric considers _mAP_ as well as true positive (_TP_) metrics \(\mathbb{TP}=\{\textit{mATE},\textit{mASE},\textit{mADE},\textit{mAVE},\textit{ mAAE}\}\) for translation, scale, orientation, velocity, and attribute, respectively, _i.e._, \(\textit{NDS}=\frac{1}{10}\left(5\cdot m\textit{AP}\right)+\sum_{\textit{TP}\in \mathbb{TP}}1-\min\left(1,\textit{TP}\right)\). For Waymo, we employ the official metrics of the camera-only 3D object detection track [15]: The _LET-3D-AP_ calculates average precision after longitudinal error correction, while _LET-3D-APL_ also penalizes the longitudinal error.
**Network Architecture and Training**: For a fair comparison, our network architecture follows previous works [13, 17, 23, 24, 43, 26]. We consider the ResNet-50-based setting with a resolution of \(256\times 704\) and the ResNet-101-based setting with resolutions of \(512\times 1408\) or \(640\times 1600\). Further network design choices are adopted from [13]. We train all models for \(24\) epochs using the CBGS training strategy [57], a batch size of \(16\) and AdamW [32] with an initial learning rate of \(2\cdot 10^{-4}\). The loss weights are set to \(\lambda^{\text{GT}}=1\), \(\lambda^{\text{X-FD}}=10\), \(\lambda^{\text{X-AT}}=10\), \(\lambda^{\text{X-OD}}=1\), and \(\lambda^{\text{X-IS}}=1\) while the thresholds are set to \(\alpha^{\text{3D-bbox}}=0.6\) and \(\alpha^{\text{2D-bbox}}=0.2\). Our LiDAR teacher is based on the CenterPoint architecture [51] and the TransFusion training schedule [1]. The supplementary contains further explanations, hyperparameter studies, and configurations for the Waymo dataset.
### State-of-the-art Comparisons
We perform a comparison of X\({}^{3}\)KD with all contributions, _i.e._, X\({}^{3}\)KD\({}_{\text{all}}\), to other SOTA methods in Table 2. In the ResNet-50-based setting, our model achieves the best results with scores of \(39.0\) and \(50.5\) in _mAP_ and _NDS_, respectively. In the high-resolution ResNet-101-based setting, our model achieves SOTA scores of \(46.1\) and \(56.7\). _At this resolution, we outperform all previous SOTA methods in all considered metrics and outperform the second best result by 2.9 points in mAP and 2.5 points in NDS_. To explicitly show that our method improves on top of current SOTA baselines, we retrain our strongest baseline among
\begin{table}
\begin{tabular}{l|l|c|c|c c c c c|c c} _Set_ & _Model_ & _Backbone_ & _Resolution_ & _mATE\({}^{\dagger}\)_ & _mASE\({}^{\dagger}\)_ & _mAP\({}^{\dagger}\)_ & _mAP\({}^{\dagger}\)_ & _NDS_ \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & BEVDet [14] & & & 0.725 & 0.279 & 0.589 & 0.860 & 0.245 & 29.8 & 37.9 \\ & BEVDet4D [45] & & & 0.703 & 0.278 & 0.495 & 0.354 & 0.206 & 32.2 & 45.7 \\ & BEVDepth [24] & \multirow{2}{*}{ResNet-50} & & 0.629 & **0.267** & 0.479 & 0.428 & 0.198 & 35.1 & 47.5 \\ & BEVDepth [24] & & & 0.636 & 0.272 & 0.493 & 0.499 & 0.198 & 35.9 & 47.2 \\ & STS\({}^{\star}\)[43] & & & 0.601 & 0.275 & 0.450 & 0.446 & 0.212 & 37.7 & 48.9 \\ & BEVStero\({}^{\star}\)[23] & & & & **0.598** & 0.270 & **0.438** & 0.367 & **0.190** & 37.2 & 50.0 \\ & **X\({}^{\text{3}}\)KD\({}_{\text{all}}\)** & ResNet-50 & \(256\times 704\) & 0.615 & 0.269 & 0.471 & **0.345** & 0.203 & **39.0** & **50.5** \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & PETR [28] & & & 0.710 & 0.270 & 0.490 & 0.885 & 0.224 & 35.7 & 42.1 \\ & BEVDepth [24] & \multirow{2}{*}{ResNet-101} & & 0.579 & 0.265 & 0.387 & 0.364 & 0.194 & 40.9 & 53.1 \\ & BEVDepth [24] & & & 0.565 & 0.266 & 0.358 & 0.331 & **0.190** & 41.2 & 53.5 \\ & STS\({}^{\star}\)[43] & & & **0.525** & 0.262 & 0.380 & 0.369 & 0.204 & 43.1 & 54.2 \\ & **X\({}^{\text{3}}\)KD\({}_{\text{all}}\)** & ResNet-101 & \(512\times 1408\) & 0.552 & **0.257** & **0.338** & **0.328** & 0.199 & **44.8** & **55.3** \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & DETR3D [42] & & & 0.716 & 0.268 & 0.379 & 0.842 & 0.200 & 34.9 & 43.4 \\ & BEVFormer [26] & \multirow{2}{*}{ResNet-101} & & \(900\times 1600\) & 0.673 & 0.274 & 0.372 & 0.394 & 0.198 & 41.6 & 51.7 \\ & PolarFormer [17] & & & 0.648 & 0.270 & 0.348 & 0.409 & 0.201 & 43.2 & 52.8 \\ & BEVDepth [24] & \multirow{2}{*}{ResNet-101} & & \(640\times 1600\) & 0.571 & 0.260 & 0.379 & 0.374 & **0.196** & 42.8 & 53.6 \\ & **X\({}^{\text{3}}\)KD\({}_{\text{all}}\)** & ResNet-101 & \(640\times 1600\) & **0.539** & **0.255** & **0.320** & **0.324** & **0.196** & **46.1** & **56.7** \\ \hline \multirow{8}{*}{
\begin{tabular}{} \end{tabular} } & BEVFormer [26] & & & 0.631 & 0.257 & 0.405 & 0.435 & 0.143 & 44.5 & 53.5 \\ & BEVDepth [27] & \multirow{2}{*}{ResNet-101} & & \(640\times 1600\) & 0.533 & 0.254 & 0.443 & 0.044 & **0.129** & 43.1 & 53.9 \\ \cline{1-1} & PolarFormer [17] & & & 0.610 & 0.258 & **0.391** & 0.458 & **0.129** & **45.6** & 54.3 \\ \cline{1-1} & **X\({}^{\text{3}}\)KD\({}_{\text{all}}\)** & ResNet-101 & \(640\times 1600\) & **0.506** & **0.253** & **0.414** & **0.366** & **0.131** & **45.6** & **56.1** \\ \hline \end{tabular}
\end{table}
Table 2: **Performance comparison on the nuScenes dataset**: We ensure comparability regarding backbone and image resolution. Baseline results are cited except for BEVDepth\({}^{\dagger}\) which we reproduced in our framework; \({}^{\star}\) indicates recent ArXiv works; best numbers in boldface.
\begin{table}
\begin{tabular}{l|c|c|c c c|c} _Model_ & _LET-3D-AP\({}^{\dagger}\)_ & \multicolumn{4}{c}{_LET-3D-APL\({}^{\dagger}\)_} \\ & _All_ & _Vehicle_ & _Pedestrian_ & _Cyclist_ & _All_ \\ \hline BEVDepth\({}^{\dagger}\) & 3
published works, _i.e_., BEVDepth [24], in our code framework, dubbed BEVDepth\({}^{\dagger}\). At all resolutions, we are able to closely reproduce the reported results and improve by about 3 points in both _mAP_ and _NDS_ upon them. _On the test set, we outperform the second best approach PolarFormer [17] by \(1.8\) points in terms of the main NDS metric_ and achieve best results in 5 out of 7 metrics. We also show results for BEVDepth\({}^{\dagger}\) and X\({}^{3}\)KD variants on the Waymo dataset in Table 3. As on nuScenes, our X\({}^{3}\)KD\({}_{\text{all}}\) model clearly outperforms the baseline in all metrics.
### Method Ablation Studies
**Effectiveness of the Proposed Components**: We incrementally add our contributions in Table 4 and evaluate them in terms of _NDS_ and _mAP_. First, we individually add X-OD, X-FD, and X-AT. For all three components, there is an improvement in the _NDS_ metric, while the _mAP_ metric remains similar or slightly worse. Adding all three components (X\({}^{3}\)KD\({}_{\text{modal}}\)) gives a clear improvement over the baseline as well as applying each component individually. Particularly, we observe that the _additional cross-modal supervision improves bounding box velocity estimation from multi-camera input_ as can be seen by the apparent improvement in the _mAVE_ metric. Using X-IS, surprisingly gives an even more substantial improvement. This might indicate that _supervision in BEV cannot completely compensate for the lack of rich features in PV_. Finally, adding all components together to _our proposed_ X\({}^{3}\)KD\({}_{\text{all}}\)_model clearly outperforms all other variants_ in terms of the main _NDS_ and _mAP_ metrics and is best in 4 out of 7 metrics in Table 4.
**Cross-Modal Output Distillation (X-OD)**: We provide insights into our X-OD design in Table 5. In the top part, we observe that models trained with output distillation improve over the baseline in terms of _NDS_ and that the confidence-based weighting is particularly effective for orientation (_mAOE_) and velocity (_mAVE_) prediction. Further, we train the multi-camera 3DOD without using annotations (Table 5, bottom part) solely from KD. In this setting, the weighting yields even more significant improvements in particular in terms of the _NDS_ metric. Also, _the_ X-OD\({}_{\text{w/o GT}}\)_model surprisingly outperforms the model variants trained with annotations in terms of the mAP metric._ This promising result indicates that future work might be able to use large-scale pre-training with KD on unlabelled data for further performance improvements.
**Cross-task Instance Segmentation Distillation (X-IS)**: Ablations on our X-IS design are shown in Table 6. We observe that initialization of the backbone with weights from a pre-trained instance segmentation as well as cross-task distillation, improves the baseline's result. Combining both aspects to X-IS yields the best result in both _mAP_ and _NDS_. Using a different teacher model based on ConvNeXt-T yields similarly good results and shows that the feature extraction architectures of the instance segmentation teacher and the multi-camera 3DOD student do not need to match. Also, knowledge can be distilled from a simple ResNet-50-based model into a more sophisticated architecture such as ConvNeXt-T (bottom part of Table 6). Overall, _cross-task
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline _Model_ & _Student_ & _Teacher_ & _Pre. Dist._ & _mAP_\({}^{\dagger}\) & _NDST_ \\ & _Backbone_ & _Backbone_ & & & \\ \hline BEVDepth\({}^{\dagger}\) & ResNet-50 & NA & ✗ & ✗ & 35.9 & 47.2 \\ \hline & ResNet-50 & ResNet-50 & ✗ & ✓ & 36.4 & 48.8 \\ & ResNet-50 & NA & ✓ & ✗ & 37.7 & 49.5 \\ X-IS & ResNet-50 & ResNet-50 & ✓ & ✓ & **38.7** & **50.1** \\ X-IS & ResNet-50 & ConvNeXt-T & ✓ & ✓ & **38.5** & 49.9 \\ \hline & ConvNeXt-T & NA & ✗ & ✗ & 38.3 & 50.8 \\ & ConvNeXt-T & ResNet-50 & ✗ & ✓ & **38.9** & **51.4** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Ablation study on cross-task instance segmentation distillation (X-IS) on the nuScenes validation set.** We evaluate the effect of using pre-trained weights (Pre.) and knowledge distillation (Dist.) as well as different teacher/student backbones.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c|c c} _Model_ & X-OD & X-FD & X-AT & X-IS & _mAP_\({}^{\dagger}\) & _mASE_\({}^{\dagger}\) & _mAOE_\({}^{\dagger}\) & _mAVE_\({}^{\dagger}\) & _mAP_\({}^{\dagger}\) & _NDST_ \\ \hline BEVDepth\({}^{\dagger}\) & ✗ & ✗ & ✗ & ✗ & 0.636 & 0.272 & 0.493 & 0.499 & 0.198 & 35.9 & 47.2 \\ X-OD & ✓ & ✗ & ✗ & ✗ & 0.642 & 0.278 & **0.456** & **0.338** & **0.188** & 35.7 & 48.7 \\ X-FD & ✗ & ✓ & ✗ & ✗ & 0.644 & 0.276 & 0.479 & 0.361 & 0.200 & 36.1 & 48.5 \\ X-AT & ✗ & ✗ & ✓ & ✗ & 0.648 & 0.277 & 0.492 & 0.354 & 0.192 & 35.5 & 48.1 \\ X\({}^{3}\)KD\({}_{\text{modal}}\) & ✓ & ✓ & ✓ & ✗ & 0.632 & 0.271 & **0.456** & 0.342 & 0.203 & 36.8 & 49.4 \\ X-IS & ✗ & ✗ & ✗ & ✓ & 0.635 & 0.273 & 0.462 & 0.350 & 0.204 & 38.7 & 50.1 \\ X\({}^{3}\)KD\({}_{\text{all}}\) & ✓ & ✓ & ✓ & ✓ & **0.615** & **0.269** & 0.471 & 0.345 & 0.203 & **39.0** & **50.5** \\ \hline LiDAR Teacher & NA & NA & NA & NA & 0.301 & 0.257 & 0.298 & 0.256 & 0.195 & 59.0 & 66.4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation study of X\({}^{3}\)KD on the nuScenes validation set**: We incrementally add our proposed cross-modal feature distillation (X-FD), adversarial training (X-AT) and output distillation (X-OD) as well as our cross-task instance segmentation distillation (X-IS). All X\({}^{3}\)KD variants in the top part are solely based on multi-camera images during inference. Best numbers in boldface, second best underlined.
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c} \hline \hline _Model_ & _Dist._ & _Weight_ & _w/o GT_ & _mAP_\({}^{\dagger}\) & _mAVE_\({}^{\dagger}\) & _mAP_\({}^{\dagger}\) & _NDST_ \\ \hline BEVDepth\({}^{\dagger}\) & ✗ & ✗ & ✗ & 0.493 & 0.499 & **35.9** & 47.2 \\ & ✓ & ✗� & ✗ & 0.477 & 0.342 & 35.6 & 48.5 \\ X-OD & ✓ & ✓ & ✗ & **0.456** & **0.338** & 35.7 & **48.7** \\ \hline & ✓ & ✗ & ✓ & 1.090 & 0.972 & 36.1 & 35.3 \\ X-OD\({}_{\text{w/o GT}}\) & ✓ & ✓ & ✓ & **0.724** & **0.570** & **36.5** & **43.7** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Ablation study on cross-modal output distillation (X-OD) on the nuScenes validation set.** We show the effect of weighing the regression loss in (3) by the teacher output probabilities (\(\bar{\mathbf{b}}_{a}^{\text{cls}}\)) (Weight) during distillation (Dist.). We also show that our method can be trained without annotations (w/o GT).
distillation can improve performance without requiring an additional pre-training step_.
### Method Analysis
**Performance-Complexity Trade-off**: We analyze our method's efficiency compared to state-of-the-art methods [13, 24, 26] in Fig. 5. We compare to reimplementations of BEVDepth [24] and BEVDet4D [13] as well as reported results of BEVFormer [26]. All reported models are ResNet-50-based or ResNet-101-based to ensure that a better trade-off cannot be attributed to a more efficient backbone. We observe that X\({}^{3}\)KD (red curve) outperforms BEVDepth (blue curve) at equal complexity due to the improved supervision from KD. Also, compared to BEVDet4D and BEVFormer a better trade-off can be observed, likely because of the absence of LiDAR supervision in BEVDet4D and the complex Transformer model in BEVFormer. Accordingly, _our results show that X\({}^{3}\)KD achieves a better complexity-performance trade-off than current state-of-the-art methods_.
**Qualitative Results**: We further show qualitative results of X\({}^{3}\)KD and BEVDepth in Fig. 4. As highlighted by the white boxes, X\({}^{3}\)KD detects and places objects more accurately in the scene. In particular, the recognition of objects and the prediction of their orientation shows improved characteristics in the X\({}^{3}\)KD output, which is coherent with a better quantitative performance of X\({}^{3}\)KD in Table 4. Further qualitative results are given in the supplementary.
### Generalization to RADAR
We also generalize X\({}^{3}\)KD to RADAR-based and camera-RADAR fusion-based models. For RADAR-based models, we cannot apply cross-task KD from the instance segmentation teacher. Hence, we only use the cross-modal KD contributions, _i.e_., X\({}^{3}\)KD\({}_{\mathrm{modal}}\). Our results on the nuScenes validation set show that X\({}^{3}\)KD\({}_{\mathrm{modal}}\) significantly enhances the performance in both settings. Notably, the transfer from camera to RADAR was straightforward as we achieved the reported improvements without requiring tuning of hyperparameters. Further, we evaluate our fusion-based X\({}^{3}\)KD\({}_{\mathrm{modal}}\) model on the nuScenes test set, where _we outperform all other Camera-RADAR_, _fusion-based models_, _hence setting the state-of-the-art result_.
## 5 Conclusions
We proposed X\({}^{3}\)KD, a KD framework for multi-camera 3DOD. By distilling across tasks from an instance segmentation teacher and across modalities from a LiDAR-based 3DOD teacher into different stages of a multi-camera 3DOD student, we show that the model performance can be enhanced without inducing additional complexity during inference. We evaluated X\({}^{3}\)KD on the nuScenes and Waymo datasets, outperforming previous approaches by 2.9% _mAP_ and 2.5% _NDS_. The transferability to other sensors, such as RADAR, and the possibility to train 3DOD models without annotations further demonstrate X\({}^{3}\)KD's effectiveness. Combining these two findings could be used in future applications to train 3DOD models for arbitrary sensors, requiring only a LiDAR-based 3DOD model.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline _Model_ & _RADAR_ & _Cam._ & \multicolumn{2}{c|}{Validation} & \multicolumn{2}{c}{Test} \\ & _Input_ & _Input_ & **mAP\({}^{\dagger}\)** & **NDST** & **mAP\({}^{\dagger}\)** & **NDST** \\ \hline RADAR only & ✓ & ✗ & 12.9 & 13.0 & - & - \\
**X\({}^{3}\)KD\({}_{\mathrm{modal}}\)** & ✓ & ✗ & **17.7** & **23.5** & - & - \\ \hline Fusion only & ✓ & ✓ & 38.9 & 51.0 & 40.2 & 52.3 \\
**X\({}^{3}\)KD\({}_{\mathrm{modal}}\)** & ✓ & ✓ & **42.3** & **53.8** & **44.1** & **55.3** \\ \hline \end{tabular}
\end{table}
Table 7: **Generalization of our method to RADAR:** We distill knowledge from a LiDAR-based 3DOD into a RADAR-based and a RADAR-camera fusion-based 3DOD model. For RADAR-based models, we report the mAP just for the car class as these models underperform on other classes due to the point cloud sparsity.
Figure 4: **Qualitative results on nuScenes**: We show the multi-camera input (top) and bounding box visualizations (bottom). We compare ResNet-101-based X\({}^{3}\)KD\({}_{\mathrm{all}}\) to BEVDepth\({}^{\dagger}\) and the ground truth (GT) for a resolution of \(640\times 1600\). Best viewed on screen and in color.
Figure 5: **Complexity Analysis** of X\({}^{3}\)KD in comparison to BEVDepth [24], BEVDet4D [13], and BEVFormer [26]. |
2306.08696 | Femtoscopic correlation measurement with symmetric Lévy-type source at
NA61/SHINE | Measuring quantum-statistical, femtoscopic (including final state
interactions) momentum correlations with final state interactions in
high-energy nucleus-nucleus collisions reveal the space-time structure of the
particle-emitting source created. In this paper, we report NA61/SHINE
measurements of femtoscopic correlations of identified pion pairs and describe
said correlations based on symmetric L\'evy-type sources in Ar+Sc collisions at
150A GeV/c. We investigate the transverse mass dependence of the L\'evy-type
source parameters and discuss their possible interpretations. | Barnabas Porfy | 2023-06-14T18:34:56Z | http://arxiv.org/abs/2306.08696v2 | # Femtoscopic correlation measurement with symmetric Levy-type source at NA61/SHINE
###### Abstract
Measuring quantum-statistical, femtoscopic (including final state interactions) momentum correlations with final state interactions in high-energy nucleus-nucleus collisions reveal the space-time structure of the particle-emitting source created. In this paper, we report NA61/SHINE measurements of femtoscopic correlations of identified pion pairs and describe said correlations based on symmetric Levy-type sources in Ar+Sc collisions at 150\(A\) GeV/\(c\). We investigate the transverse mass dependence of the Levy-type source parameters and discuss their possible interpretations.
Quark-Gluon Plasma; Femtoscopy; Critical endpoint; Small systems
## 1 Introduction
The NA61/SHINE is a fixed target experiment using a large acceptance hadron spectrometer located in the North Area H2 beam line of the CERN Super Proton Synchrotron accelerator [1]. Its main goals include the investigation and mapping of the phase diagram of strongly interacting matter, as well as measuring cross sections of processes relevant for cosmic rays and neutrino physics. In this paper, we are focusing on mapping the QCD phase diagram. In order to accomplish this, NA61/SHINE performs measurements of different collision systems at multiple energies. The experiment provides excellent tracking down to \(p_{T}=0\) GeV/\(c\). This performance is achieved by using four large Time Projection Chambers (TPC's), which cover the full forward hemisphere. The experiment also features a modular calorimeter, called the Projectile Spectator Detector. It is located on the beam axis, after the TPC's, and measures the forward energy which determines the collision centrality of the events. A setup of the NA61/SHINE detector system is shown in Fig. 1.
The search for the critical endpoint (CEP) and investigation of the QCD phase diagram requires analysis at different temperatures and baryon-chemical potentials. To study, we need to map the phase diagram using different system sizes at various energies. NA61/SHINE investigations cover several beam momenta (13\(A\), 20\(A\), 30\(A\), 40\(A\), 75\(A\) and 150\(A\) GeV/\(c\)) and collision systems (p+p,p+Pb,Be+Be,Ar+Sc,Xe+La,Pb+Pb). In this paper, we describe the femtoscopic correlations of identical pions emitted from central Ar+Sc collisions at beam momentum of 150\(A\) GeV/\(c\). This field is often called femtoscopy as it reveals the femtometer scale structure of particle production.
## 2 Femtoscopy with Levy shaped sources
The method of quantum-statistical (Bose-Einstein) correlations is based on the work of R. Hanbury Brown and R. Q. Twiss (HBT) [2], who applied it first in astrophysical intensity correlation measurements. The method was developed to determine the apparent angular diameter of stellar objects. Shortly afterwards, a similar quantum-statistical method was applied in momentum correlation measurements for proton-antiproton collisions [3; 4] by Goldhaber and collaborators. Their objective was to understand pion-pion correlations and gain information on the radius, \(R\), of the interaction volume in high-energy particle collisions. The key relationship for measuring Bose-Einstein correlations shows that the spatial |
2303.02565 | Indirect Exchange Interaction Leads to Large Lattice Contribution to
Magnetocaloric Entropy Change | Materials with a large magnetocaloric response are highly desirable for
magnetic cooling applications. It is suggested that a strong spin-lattice
coupling tends to generate a large magnetocaloric effect, but no microscopic
mechanism has been proposed. In this work, we use spin lattice dynamics
simulation to examine the lattice contribution to the magnetocaloric entropy
change in bcc iron (Fe) and hcp gadolinium (Gd) with exchange interaction
parameters determined from ab initio calculations. We find that indirect
Ruderman Kittel Kasuya Yosida (RKKY) exchange interaction in hcp Gd leads to
longer range spin lattice coupling and more strongly influences the low
frequency long wavelength phonons. This results in a higher lattice
contribution towards the total magnetocaloric entropy change as compared to bcc
Fe with short range direct exchange interactions. Our analysis provides a
framework for understanding the magnetocaloric effect in magnetic materials
with strong spin lattice couplings. Our finding suggests that long range
indirect RKKY type exchange gives rise to a larger lattice contribution to the
magnetocaloric entropy change and is, thus, beneficial for magnetocaloric
materials. | Lokanath Patra, Bolin Liao | 2023-03-05T03:25:54Z | http://arxiv.org/abs/2303.02565v1 | # Indirect Exchange Interaction Leads to Large Lattice Contribution to Magnetocaloric Entropy Change
###### Abstract
Materials with a large magnetocaloric response are highly desirable for magnetic cooling applications. It is suggested that a strong spin-lattice coupling tends to generate a large magnetocaloric effect, but no microscopic mechanism has been proposed. In this work, we use spin-lattice dynamics simulation to examine the lattice contribution to the magnetocaloric entropy change in bcc iron (Fe) and hcp gadolinium (Gd) with exchange interaction parameters determined from _ab-initio_ calculations. We find that indirect Ruderman-Kittel-Kasuya-Yosida (RKKY) exchange interaction in hcp Gd leads to longer-range spin-lattice coupling and more strongly influences the low-frequency long-wavelength phonons. This results in a higher lattice contribution towards the total magnetocaloric entropy change as compared to bcc Fe with short-range direct exchange interactions. Our analysis provides a framework for understanding the magnetocaloric effect in magnetic materials with strong spin-lattice couplings. Our finding suggests that long-range indirect RKKY-type exchange gives rise to a larger lattice contribution to the magnetocaloric entropy change and is, thus, beneficial for magnetocaloric materials.
Magnetocaloric Effect, Spin-lattice Coupling, Spin-lattice Dynamics, Indirect Exchange Interactions
Introduction
Magnetic refrigeration is based on the magnetocaloric effect (MCE), which is the material's ability to heat (cool) when magnetized (demagnetized) in an adiabatic process. [1; 2; 3; 4] The MCE originates from the magnetic order-disorder transition induced by an external magnetic field and the associated entropy change. Understanding and designing materials with a strong MCE are of both great scientific and technological importance. Fundamentally, MCE provides a convenient probe to examine the interplay between magnetism and other excitations in condensed matter systems. [5; 6] Technologically, MCE has been widely adopted to obtain cryogenic temperatures in space missions, [7] observatory astronomy [8] and scientific experimentation, [9] where compact and reliable cooling solutions are required. Magnetic refrigeration near room temperature has also been considered as an environmentally friendly alternative to conventional refrigeration based on vapor compression cycles. [10] MCE materials can be characterized by the isothermal entropy change \(\Delta\)S, which measures the change in the equilibrium entropy of a material as a result of an externally applied magnetic field. Under isothermal conditions, the entropy change \(\Delta\)S manifests itself as the amount of heat released or absorbed by the material when an external magnetic field is applied or removed. Therefore, \(\Delta\)S is a metric for the cooling capacity of an MCE material. Improvement in the overall performance of a magnetic cooling system is primarily dependent on the isothermal entropy change of the magnetic refrigerant material and is of significant current scientific interest, particularly since the discovery of the giant MCE in Gd\({}_{5}\)(Si\({}_{2}\)Ge\({}_{2}\)) in 1997 by Pecharsky and Gschneidner. [11]
Magnetocaloric materials with a strong coupling between spin and lattice degrees of freedom are known to exhibit a large MCE. Prominent examples are the first-order MCE materials, where the magnetic transition is associated with a first-order structural phase transition. Notable first-order MCE materials include Gd\({}_{5}\)(Si\({}_{x}\)Ge\({}_{4-x}\)) series, [12; 13; 11], MnAs\({}_{1-x}\)Sb\({}_{x}\) alloys, [14] La-Fe-Si-based alloys, [15; 16] Mn-Fe-P-based alloys [17] and Ni-Mn-based Heusler compounds. [18; 19] The observed large MCE is governed by the concurrent magnetic and structural phase transition as a result of the strong spin-lattice coupling phenomenon since the external applied magnetic field can simultaneously change the magnetic and lattice entropy in these materials. The lattice contribution has been reported to be as high as \(50\%-60\%\) or more of the total entropy change in the materials undergoing a magnetostruc
tural or magnetoelastic transition, [20; 21; 22] and the magnitude of the lattice entropy change (\(\Delta S_{L}\)) is closely related to the volume change (\(\Delta V/V\)) during the phase transition, which can create mechanical issues and pose practical challenges in applying first-order MCE materials. [23] Even in conventional second-order MCE materials, a strong spin-lattice coupling is usually an indicator of a strong MCE. Based on this observation, Bocarsly et al. performed a computational screening of MCE materials using the spin-dependent lattice parameter as a proxy, which is an approximate computational measure of spin-lattice coupling. [24] Despite the abundance of empirical evidence, an atomic-level understanding of the relationship between spin-lattice coupling and the MCE is currently lacking. In particular, it is unclear what microscopic mechanisms are responsible for the strong spin-lattice coupling and the associated high MCE. In this light, atomistic computational methods to quantify the entropy contributions from the spin and the lattice degrees of freedom separately are of pivotal importance for the discovery and optimization of the MCE materials.
The isothermal entropy change in magnetocaloric materials can be calculated from their magnetizations as functions of temperature and applied magnetic field, following the thermodynamic Maxwell relation [1]:
\[\Delta S(T,\Delta H)=\mu_{0}\int_{H_{i}}^{H_{f}}\left(\frac{\partial M(T,H)}{ \partial T}\right)_{H}\ dH, \tag{1}\]
where \(\Delta H=H_{f}-H_{i}\) is the change of the applied external field (\(H_{f}\) and \(H_{i}\) are final and initial fields, respectively), \(\mu_{0}\) is the Bohr magneton, \(M\) is the magnetization and \(T\) is the temperature. The field- and temperature-dependent magnetization can be obtained by simulating the dynamics of atomic spins in magnetic materials using atomic spin dynamics (ASD) simulations with magnetic exchange interaction parameters (\(J_{ij}\), where \(i\) and \(j\) label the interacting spins) calculated from _ab initio_ methods. [25; 26; 27]. However, these simulations ignore the effect of thermal fluctuations of the atoms (lattice vibrations) at finite temperatures and, thus, cannot capture the spin-lattice coupling effect and the resultant lattice contribution to the magnetocaloric entropy change (\(\Delta S_{L}\)). Hence, ASD simulations need to be modified to account for the lattice vibrations and explicitly include the dependence of the spin exchange interactions on the dynamical lattice positions in order to simulate the dynamics of materials with strong spin-lattice coupling. [28; 29] For this purpose, spin-lattice dynamics (SLD) simulations add the lattice dynamics and the lattice-dependent spin exchange interactions to the ASD simulations and have been reported to be an effective
tool to predict magnetic and thermodynamic properties of magnetic materials more accurately. [30; 31; 32; 33] However, SLD simulations have not been applied to analyze the spin and lattice contributions to the magnetocaloric entropy change thus far.
In the current study, we use SLD simulations to quantify the spin and lattice contributions to the magnetocaloric entropy change with a particular focus on understanding the effect of different types of magnetic exchange interactions on MCE. For this purpose, we carry out a thorough comparison between two representative direct and indirect exchange materials, body-centered-cubic (bcc) Fe and hexagonal-closed-pack (hcp) Gd, respectively. [34] In magnetic materials with direct exchange interactions, such as bcc Fe, the magnetic exchange interactions are mediated directly by spin-polarized conduction electrons near the Fermi level. In this case, the strength of the direct exchange interaction decreases rapidly with the distance between magnetic ions. In contrast, indirect exchange interactions can couple magnetic moments over relatively large distances. [35] Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction is a particular form of indirect magnetic exchange interaction that is dominant in metals with little or no direct overlap between neighboring magnetic electrons. [36]. Instead, the exchange interactions between magnetic ions are mediated by conduction electrons. hcp Gd is an archetypal example of the RKKY interaction. [37; 38; 39] The RKKY interaction features an oscillating interaction strength with a periodicity determined by the Fermi wavevector that can lead to longer-range interactions between magnetic ions. [40] With detailed SLD simulations of bcc Fe and hcp Gd, we demonstrate that longer-range RKKY interactions can lead to stronger spin-lattice coupling affecting low-frequency and long-wavelength phonons, which gives rise to a much higher contribution from the lattice to the magnetocaloric entropy change. Our study provides a microscopic mechanism for the enhancement of MCE via spin-lattice coupling and suggests that RKKY interaction is a preferable type of exchange interaction when searching for materials with a strong MCE. We note that, since the electronic contribution to the magnetocaloric entropy change is negligible in hcp Gd, [41] the electronic entropy contribution is not discussed in this work.
Computational methods
Density functional theory (DFT) calculations were conducted using the Vienna ab initio simulation package (VASP) based on the projected augmented wave pseudopotentials. [42] The Perdew-Burke-Ernzerhof form of generalized gradient approximation (PBE-GGA) [43] was used for structural optimization. A plane-wave cut-off energy of 600 eV was utilized for all of our calculations. The energy and force convergence criteria were set to be 1 \(\times\) 10\({}^{-5}\) and 0.01 eV/A, respectively. Monkhorst-Pack [44]\(\mathbf{k}\)-point grids of (20 \(\times\) 20 \(\times\) 20) and (16 \(\times\) 16 \(\times\) 9) were used to sample the Brillouin zone for the optimization of bcc Fe and hcp Gd, respectively. Magnetic exchange parameters \(J_{ij}\) were extracted from a full-potential linear muffin-tin orbital method (FP-LMTO) calculation using the SPR-KKR code, [45] in which the spin-configuration-dependent ground-state energy was fitted to a Heisenberg Hamiltonian
\[H_{h}=-\sum_{i\neq j}J_{ij}\mathbf{e}_{i}\cdot\mathbf{e}_{j}, \tag{2}\]
where \(\mathbf{e}_{i}\) and \(\mathbf{e}_{j}\) are unit vectors pointing in the direction of local magnetic moments at atomic site \(i\) and \(j\).
Spin lattice dynamics (SLD) simulations (See the Supplementary Information for details) were performed using \(20\times 20\times 20\) supercells using the LAMMPS program. [46; 47] Molecular dynamics (MD) timestep, spin, and lattice thermostat damping constants were set to 0.1 fs, 0.1 (Gilbert damping with no units), and 0.1 ps, respectively. The spins were oriented along the \(z\)-direction at the start of the simulation. To measure the magnetic properties in the canonical ensemble, we initially thermalized the system under NVT dynamics at the target spin and lattice temperatures for 40 ps and then sampled the target properties for 10 ps using a sample interval of 0.001 ps. For pressure-controlled simulations, after the initial 40 ps of temperature equilibration, we froze the spin configuration and ran isobaric-isothermal NPT dynamics to allow the system to thermally expand, while still accounting for the effect of the magnetic pressure generated by the spin Hamiltonian. The pressure damping parameter was set to 10 ps. The pressure equilibration run was terminated once the system pressure dropped below 0.05 GPa. After this, the spin configuration was unfrozen and another equilibration run was carried out under NVT dynamics for 20 ps. Unfreezing the spin configurations causes a small jump in the pressure, typically within the range of +/-2 GPa. To reduce this pressure fluctuation, a series of uniform isotropic box deformations
were performed under the NVE ensemble. During this procedure, the box was deformed in 0.02% increments every 2 ps until the magnitude of the pressure was reduced to negligible values (\(<\)10 MPa).
## III Results and Discussions
The spin-dependent electron density of states (DOS) of bcc Fe and hcp Gd is shown in Fig. 1(a) and (b). In bcc Fe, the conducting \(d\) electrons near the Femi level are spin-polarized and are responsible for the direct exchange interactions. In contrast, in hcp Gd, the conducting electrons near the Fermi level are not spin-polarized while the magnetism comes from the deep \(f\) electrons. In this case, the conducting electrons mediate the RKKY interactions. The magnetic exchange parameters (\(J_{ij}\)) as a function of interatomic distance (\(R_{ij}\)) for bcc Fe are given in Fig. 1(c). In bcc Fe, \(J_{ij}\) is short-ranged as it decreases rapidly with distance and the values are smaller by at least an order of magnitude after the first two nearest neighbor interactions. Similar dominant contributions within the first two nearest neighbors towards the Heisenberg Hamiltonian for bcc Fe were reported by Wang _et al._[48] The calculated exchange parameters have similar values as reported by previous studies. [48; 49; 50; 51] The calculated \(J_{ij}\) values were fitted to the Bethe-Slater curve to analyze their behavior as a function of interatomic distance. Bethe-Slater curve fitted exchange parameters were previously found to explain the physical properties of bcc Fe accurately. [52]
The behavior of \(J_{ij}\) as a function of interatomic distance in hcp Gd is quite different as compared to bcc Fe. Despite the fact that Gd is ferromagnetic at a lower temperature, many of the exchange constants are antiferromagnetic. The dependence of the magnetic exchange parameters on the interatomic distance (Fig. 1(d)) reveals an oscillatory behavior between ferromagnetism and antiferromagnetism as the interatomic distance grows. This is characteristic of the RKKY exchange. [53; 54; 36] The exchange parameters within the first ten nearest neighbors were considered for further calculations, beyond which the interaction strength becomes negligible. The RKKY-type exchange interaction is mediated by the valence electrons and plays an important role in the magnetic ordering in rare-earth metals or their related compounds, which is sensitive to the atomic separation between these rare-earth atoms. Unlike the transition metals, the large magnetic moment (\(\sim\)7.5 \(\mu_{B}\) per atom) and strongly correlated behavior of rare-earth hcp Gd originated from half-filled 4
shells. Due to the strong localization of these orbitals, the overlap between neighboring atomic sites is dominated by the \(6s\), \(6p\), and \(5d\) states. [55] The direct exchange contribution from the \(4f\)-orbital has been found to be small and antiferromagnetic and hence does not affect the magnetic ordering significantly. [56]The simulated \(J_{ij}\) values for hcp Gd agree well with previously reported computational and experimental values. [39; 57; 58] The calculated RKKY-type \(J_{ij}\) values were fitted with Bathe-Slater curves using different cut-off radii. The RKKY-type \(J_{ij}\) values were fitted with Bathe-Slater curves using different cut-off radii.
details of the fitting process are given in the Supplementary Information.
The calculated \(J_{ij}\) values were used to simulate the Curie temperatures (\(T_{C}\)) of bcc Fe and hcp Gd using ASD as well as SLD simulations. The resulting magnetization versus temperature curves are displayed in Fig. 2(a) and (b). The results were then fitted to a simple power-law decay function of the form \(M(T)=(1-\frac{T}{T_{C}})^{\beta}\); where \(M\) is the magnetization and \(\beta\) is the critical exponent. The calculated \(T_{C}\) values with the static-lattice-based ASD approach are 1180 K and 330 K for bcc Fe and hcp Gd, respectively, and have a fair agreement with experimental measurements. However, a spin-only model for itinerant magnetism is invalid by construction for quantitative studies as the anomalous temperature dependence of the lattice constants observed in bcc Fe and hcp Gd is completely missing from the atomistic spin dynamics simulations. [30] Proper incorporation of finite-temperature lattice dynamics should improve the calculated \(T_{C}\) that is dominated by itinerant \(d-\) and \(f-\) electron magnetism. The lattice effect in the description of the finite-temperature magnetism in bcc Fe and hcp Gd has been done recently. [27] Incorporating the lattice dynamics into effect, our SLD simulations predicted the T\({}_{C}\) to be 1020 K and 310 K, respectively, for bcc Fe and hcp Gd, with better agreements with the experimentally measured values. The higher difference between ASD and SLD simulated values in bcc Fe indicates a stronger effect of lattice vibrations at the higher Curie temperature.
To understand the difference between the magnetocaloric responses of materials with direct and indirect RKKY exchange coupling parameters, we calculated the isothermal entropy changes with both ASD and SLD simulations for bcc Fe (Fig. 2(c)) and hcp Gd (Fig. 2(d)). The external-field-dependent magnetization versus temperature curves were simulated (Figs. S1 and S3 in the Supplementary Information) and the entropy changes were evaluated using Eqn. 1. The total entropy change of bcc Fe was calculated to be 0.13 and 0.16 J kg\({}^{-1}\) K\({}^{-1}\) with ASD and SLD simulations, respectively for a magnetic field change of 2 T. The difference between the entropy change values (0.03 J kg\({}^{-1}\) K\({}^{-1}\)) can be attributed to the lattice contribution to the isothermal entropy change at the transition temperature, which amounts to 23% of the pure spin contribution from the ASD simulation. Next, we analyze the effect of indirect RKKY exchange on the isothermal entropy change in hcp Gd using similar approaches. The evaluated isothermal entropy change using the spin-only Hamiltonian through the ASD simulation is \(\sim\)4.5 J kg\({}^{-1}\) K\({}^{-1}\) for a magnetic field change of 2 T. This calculated value based on ASD is smaller compared to the experimentally mea
sured value of \(\sim\)6 J kg\({}^{-1}\) K\({}^{-1}\). [59] The sizable discrepancy suggests that the missing lattice dynamics in the ASD simulation can be significant in indirect RKKY exchange materials such as hcp Gd. To verify this hypothesis, we performed SLD simulations of hcp Gd using the interatomic potential developed by Baskes _et al._[60] and an isothermal entropy change of \(\sim\)6.8 J kg\({}^{-1}\) K\({}^{-1}\) was predicted. This result suggests a lattice entropy contribution of 2.3 J kg\({}^{-1}\) K\({}^{-1}\) for a magnetic field change of 2 T, which is 51% of the pure spin contribution. Our findings agree well with Martinho Vieira et al. [41], where a Monte Carlo simulation along with DFT was used to study the magnetocaloric response in hcp Gd. The SLD evalu
Figure 2: **The ASD and SLD Simulation of the Curie Temperature and Isothermal Entropy Change in bcc Fe and hcp Gd.** The simulated magnetization as a function of temperature is shown for (a) bcc Fe and (b) hcp Gd using both ASD and SLD. The total isothermal entropy changes (\(\Delta S\)) are shown for (c) bcc Fe and (d) hcp Gd calculated using both ASD and SLD methods for a field change of 2 T. The \(T_{C}\) and \(\Delta S\) values are provided and compared with the experimentally measured values when available. Including lattice dynamics and spin-lattice coupling leads to a better agreement with experimental values. We did not find available experimental \(\Delta S\) data for bcc Fe.
ated total entropy change is higher than the measured value by \(0.8\,\mathrm{J}\,\mathrm{kg}^{-1}\,\mathrm{K}^{-1}\), which can be potentially attributed to the sample purity and measurement uncertainty in the experiment.
The significant lattice entropy change with an applied magnetic field indicates a stronger magnon-phonon coupling in indirect RKKY-exchange-based materials. Microscopically, this result suggests that the phonon structure in hcp Gd near the Curie temperature is sensitively tuned by the external magnetic field. Therefore, it is informative to explicitly examine the phonon dispersion relation of hcp Gd as influenced by an external magnetic field to
Figure 3: **External-magnetic-field-dependent phonon spectra and density of states (DOS)**. Simulated phonon spectra and DOS in (a, b) bcc Fe and (c, d) hcp Gd at \(0\,\mathrm{T}\) and \(5\,\mathrm{T}\) with the SLD approach. The changes in low-frequency phonons for hcp Gd are indicated with arrow marks. The magnetic field has a stronger influence on low-frequency and long-wavelength phonons in hcp Gd.
determine which phonon modes are mostly affected by the applied field. For this purpose, the frequencies of the phonon modes were calculated from solving the dynamic matrix elements obtained from the lattice Green's functions that can be directly calculated from the atomic trajectories in the SLD simulation. [61] Figure 3 shows the phonon dispersions of bcc Fe and hcp Gd calculated with magnetic fields of \(0\,\mathrm{T}\) and \(5\,\mathrm{T}\), respectively. Data with other applied field values are included in the Supplementary Information (Figs. S2 and S4). As seen from Fig. 3(a), in bcc Fe, significant changes in the phonon dispersion can only be noticed at higher phonon frequency ranges (\(>3\,\mathrm{T}\mathrm{H}\mathrm{z}\)) and near the Brillouin zone boundaries, whereas the low-frequency and long-wavelength phonons remain unaffected. This feature is more clearly shown in the phonon density of states shown in Fig. 3(b). This observation can be understood as follows. Since the spin-lattice coupling originates from the dependence of the magnetic exchange parameters on the interatomic distance (\(\frac{dJ_{ij}}{dR_{ij}}\)), the rapidly decreasing \(J_{ij}\) as a function of \(R_{ij}\) in direct-exchange materials [such as bcc Fe, as shown in Fig. 1(c)] determines that the spin-lattice coupling mainly affects the short-wavelength lattice vibrations, with a wavelength on the order of the range of the direct exchange. These short-wavelength phonons usually reside near the Brillouin zone boundary and within the higher frequency range, so their occupation is lower at a given temperature, leading to a smaller contribution to the entropy change. In contrast, the phonon dispersion of hcp Gd, as shown in Fig. 3(c), shows noticeable changes in lower-frequency and longer-wavelength ranges, as labeled by the arrows in Fig. 3(c), indicating that the spin-lattice coupling in hcp Gd occurs on a larger length scale. This is consistent with the oscillatory behavior of the magnetic exchange parameters as a result of the indirect RKKY exchange interaction, as shown in Fig. 1(d). Although the overall magnetic exchange strength in hcp Gd is weaker than that in bcc Fe, which leads to a lower Curie temperature in hcp Gd, the much slower decay of the magnetic exchange parameters and their oscillatory behavior as a function of distance leads to significant \(\frac{dJ_{ij}}{dR_{ij}}\) at longer distances. As a result of this long-range spin-lattice coupling, lattice vibrations associated with phonons with longer wavelengths are more affected by the external field, which can also be seen in the field-dependent phonon density of states shown in Fig. 3(d). Since these phonons have lower frequencies and, thus, higher occupation numbers at a given temperature, they contribute more to the field-induced isothermal entropy change.
To compare the lattice entropy contribution from different phonon modes in bcc Fe and hcp Gd more clearly, we further evaluated the lattice entropy change directly based on the
field-dependent phonon dispersions. The vibrational entropy of a particular phonon mode with frequency \(\omega\) in a harmonic crystal is given by the standard formula for non-interaction bosons: [62]
\[S_{ph}(\omega,T)=k_{B}[(n+1)\ln{(n+1)}-n\ln{n}], \tag{3}\]
where \(k_{B}\) is the Boltzmann constant and \(n\) is the occupation number of this phonon mode. Although this result is only rigorously true, it has been shown that, [63] to the leading order in perturbation theory, Eqn. 3 is still valid in anharmonic crystals as long as the renormalized phonon frequencies are used. In our case, the phonon frequencies extracted from the SLD simulation include the full renormalization effect due to both anharmonic phonon-phonon interactions and spin-lattice interactions. Using Eqn. 3, the calculated total lattice entropy change from the field-dependent phonon dispersions for a field change of 2 T is 0.05 J kg\({}^{-1}\) K\({}^{-1}\) for bcc Fe and 2.5 J kg\({}^{-1}\) K\({}^{-1}\) for hcp Gd. These values are similar to those evaluated by comparing the ASD and SLD simulations, as shown in Fig. 2(c) and (d). To further contrast the effects of direct exchange and indirect RKKY exchange interactions on lattice entropy changes, the accumulated contribution to the total lattice
Figure 4: **The accumulated contribution to the total lattice entropy change from phonon modes as a function of phonon frequency in bcc Fe and hcp Gd.** The phonon frequency is normalized to the maximum phonon frequency in either material. The accumulated contribution is also normalized to the total lattice entropy change. The entropy change is evaluated with a magnetic field of 2 T. It is clearly shown that lower-frequency phonons in hcp Gd has a much more significant contribution than those in bcc Fe.
entropy change from phonons with different frequencies in bcc Fe and hcp Gd under a field change of 2 T is shown in Fig. 4. As clearly seen in Fig. 4, the lower-frequency phonons have negligible contributions toward the lattice entropy change in bcc Fe, whereas in hcp Gd, the lower-frequency phonons have a significant contribution towards the lattice entropy change. This result confirms that the indirect RKKY-type exchange in hcp Gd can impact the short- as well as long-wavelength phonons due to its long interaction range, while the direct exchange in bcc Fe is short-ranged and can only affect the short-wavelength phonons. Our analysis provides a microscopic mechanism of how indirect RKKY exchange can lead to longer-range spin-lattice coupling and a significantly enhanced lattice contribution to the isothermal magnetocaloric entropy change.
## IV Conclusion
In summary, we applied SLD simulation to directly evaluate the spin and lattice contributions to the isothermal magnetocaloric entropy change in bcc Fe and hcp Gd. Based on a detailed analysis of the field-dependent phonon properties, we conclude that the indirect RKKY-type exchange in hcp Gd leads to a long-range spin-lattice coupling that affects long-wavelength and low-frequency phonons and, thus, causes an enhanced lattice contribution to the total entropy change. Our work provides a microscopic picture of how different types of spin-lattice coupling can give rise to distinct magnetocaloric responses and suggests that indirect RKKY exchange interactions are more desirable for a large MCE response, potentially guiding the future search for more efficient MCE materials.
###### Acknowledgements.
We thank Dr. Amir Jahromi, Dr. Ali Kashani, and Dr. Leo Ma for their helpful discussions. This work is based on research supported by the National Aeronautics and Space Administration (NASA) under award number 80NSSC21K1812. This work used Stampede2 at Texas Advanced Computing Center (TACC) through allocation MAT200011 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants 2138259, 2138286, 2138307, 2137603, and 2138296. Use was also made of computational facilities purchased with funds
from the National Science Foundation (award number CNS-1725797) and administered by the Center for Scientific Computing (CSC) at the University of California, Santa Barbara (UCSB). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR-1720256) at UCSB.
|
2301.03002 | Topological classes of thermodynamics of rotating AdS black holes | In this paper, we extend our previous work [Phys. Rev. D 107, 024024 (2023)]
to the more general cases with a negative cosmological constant, and
investigate the topological numbers for the singly rotating Kerr-AdS black
holes in all dimensions and the four-dimensional Kerr-Newman-AdS black hole as
well as the three-dimensional Ba\~nados-Teitelboim-Zanelli black hole. We find
that the topological numbers of black holes are remarkably influenced by the
cosmological constant. In addition, we also demonstrate that the dimension of
spacetimes has an important effect on the topological number for rotating AdS
black holes. Furthermore, it is interesting to observe that the difference
between the topological number of the AdS black hole and that of its
corresponding asymptotically flat black hole is always unity. This new
observation leads us to conjure that it might be valid also for other black
holes. Of course, this novel conjecture needs to be further verified by
examining the topological numbers of many other black holes and their AdS
counterparts in the future work. | Di Wu, Shuang-Qing Wu | 2023-01-08T09:02:43Z | http://arxiv.org/abs/2301.03002v4 | # Topological classes of thermodynamics of rotating AdS black holes
###### Abstract
In this paper, we extend our previous work [Phys. Rev. D **107**, 024024 (2023)] to the more general cases with a negative cosmological constant, and investigate the topological numbers for the singly rotating Kerr-AdS black holes in all dimensions and the four-dimensional Kerr-Newman-AdS black hole as well as the three-dimensional Banados-Teitelboim-Zanelli black hole. We find that the topological numbers of black holes are remarkably influenced by the cosmological constant. In addition, we also demonstrate that the dimension of spacetimes has an important effect on the topological number for rotating AdS black holes. Furthermore, it is interesting to observe that the difference between the topological number of the AdS black hole and that of its corresponding asymptotically flat black hole is always unity. This new observation leads us to require that it might be valid also for other black holes. Of course, this novel conjecture needs to be further verified by examining the topological numbers of many other black holes and their AdS counterparts in the future work.
## I Introduction
Recently, topology, as an important mathematical tool that applies to black hole physics, has received considerable interest and enthusiasm. The current research on topology is mainly embodied in two aspects. On the one hand, there is the research on the light rings [1; 2; 3; 4] of some black holes, which may provide more footprints for the observation of black holes and has been extended to timelike circular orbits [5; 6]; on the other hand, there is the research on the thermodynamic topological classification of various black holes [7; 8; 9; 10].
In particular, a new method to investigate the thermodynamic topological properties of black holes is proposed in Ref. [7] by considering black hole solutions as topological thermodynamic defects and constructing topological numbers, and further, dividing all black holes into three categories according to their different topological numbers. Because these topological numbers are universal constants that are independent of the black hole solution parameters, hence they are very important for understanding the nature of black holes and gravity. The topological approach proposed in Ref. [7] quickly gained popularity due to its straightforwardness and adaptability, and subsequently it was effectively used to explore the topological numbers of several well-known black hole solutions [14; 15; 16; 17; 18; 19], i.e., the Schwarzschild-AdS black hole [14], the static black holes in Lovelock gravity [15], the static Gauss-Bonnet-AdS black holes [16], the static black hole in nonlinear electrodynamics [17], and the static Born-Infeld AdS black hole [18], as well as some static hairy black holes [19]. However, all of the preceding researches [14; 15; 16; 17; 18; 19] are limited to the static cases, leaving the topological numbers of rotating black holes and AdS scalar hairy black holes unexplored. Very recently, we have extended the topological approach to rotating black hole cases and investigated the topological numbers for the cases of rotating Kerr and Kerr-Newman black holes [20].
Since the study of the topological number of black holes is still in its infancy and the topological number of the rotating AdS black holes remains virgin territory, it deserves to be explored deeply. On the other hand, the study of rotating AdS black holes has already shed light on the nature of gravity through gauge-gravity dualities [21; 22; 23], so it is very important and remarkable to investigate the topological number of rotating AdS black holes. These two aspects motivate us to conduct the present work. In this paper, we shall investigate the topological number of the \(d\)-dimensional singly rotating Kerr-AdS black holes and the four-dimensional Kerr-Newman-AdS black hole, as well as the three-dimensional Banados-Teitelboim-Zanelli (BTZ) black hole [24; 25; 26]. Compared with the previous paper [20], the aim of the present work is concentrated on investigating the impact of the cosmological constant on the topological number of black holes, which has not been studied in any previous related literature. We will see that the cosmological constant is important in determining the topological number of rotating black holes, and observe that the difference between the topological number of the AdS black hole and that of its corresponding asymptotically flat black hole is always unity, which leads us to conjure that it might also hold true for other black holes.
The remaining part of this paper is organized as follows. In Sec. II, we first give a brief review of the topological approach and investigate the topological number of the four-dimensional Schwarzschild-AdS black hole as a warmup exercise. In Sec. III, we will focus on the topological number of the four-dimensional rotating Kerr-AdS black hole. In Sec. IV, we shall extend these discussions to the cases of the \(d\)-dimensional singly rotating Kerr-AdS black holes. In Sec. V, we will investigate the topological number of the three-dimensional rotating BTZ black hole. In Sec. VI, we then turn to discuss the topological number of the four-dimensional Kerr-Newman-AdS black hole. Finally, we present our conclusions in Sec. VII. In the Appendix A, the topological number of the three-dimensional charged BTZ black hole is also investigated.
Schwarzschild-AdS\({}_{4}\) black hole
In this section, we first present a brief review of the topological approach proposed in Ref. [7], then investigate the topological number of the four-dimensional Schwarzschild-AdS black hole as a warmup exercise. There are two reasons for us to do so. On the one hand, it can be used to show that the charge parameter has a significant effect on the topological numbers of the static AdS\({}_{4}\) black holes. On the other hand, it is also convenient for us to make a comparison with the corresponding results of the rotating Kerr-AdS\({}_{4}\) black hole, so as to observe the influence of the rotation parameter on the topological number of the Schwarzschild-AdS\({}_{4}\) black hole.
As shown in Ref. [7], one can introduce the generalized off-shell Helmholtz free energy
\[\mathcal{F}=M-\frac{S}{\tau}\,, \tag{1}\]
for a black hole thermodynamic system with mass \(M\) and entropy \(S\), where \(\tau\) is an extra variable that can be thought of as the inverse temperature of the cavity enclosing the black hole. Only when \(\tau=1/T\) does the generalized Helmholtz free energy become on-shell.
In Ref. [7], a core vector \(\phi\) is defined as1
Footnote 1: One can also construct the vector \(\phi\) in a more general form as
\[\phi=\left(\frac{\partial\mathcal{F}}{\partial r_{h}},\ -C\cot\Theta\csc\Theta \right),\]
where \(C\) is an arbitrary positive constant. Changing \(C\) to a different value will slightly change the direction of the unit vector \(n\), but will not change the position of the zero point of the vector field of its corresponding winding number. Therefore, one can just set \(C=1\) for the sake of simplicity. However, the authors of a new preprint [27] criticize that this definition of the vector field \(\phi\) in Ref. [7] is not intrinsic.
\[\phi=\left(\frac{\partial\mathcal{F}}{\partial r_{h}},\ -\cot\Theta\csc \Theta\right), \tag{2}\]
in which the two parameters \(r_{h}\) and \(\Theta\) obey \(0<r_{h}<+\infty\) and \(0\leq\Theta\leq\pi\), respectively. The component \(\phi^{\Theta}\) is divergent at \(\Theta=0,\pi\), thus the direction of the vector points outward there.
Using Duan's \(\phi\)-mapping topological current theory [28; 29; 30], a topological current can be described as follows:
\[j^{\mu}=\frac{1}{2\pi}\varepsilon^{\mu\nu\rho}\epsilon_{ab}\partial_{\nu}n^{a }\partial_{\rho}n^{b}\,,\qquad\mu,\nu,\rho=0,1,2, \tag{3}\]
where \(\partial_{\nu}=\partial/\partial x^{\nu}\) and \(x^{\nu}=(\tau,\ r_{h},\ \Theta)\). The unit vector \(n\) reads as \(n=(n^{\nu},n^{\Theta})\), where \(n^{\nu}=\partial^{\nu_{h}}/||\phi||\) and \(n^{\Theta}=\phi^{\Theta}/||\phi||\). It is simple to demonstrate that the topological current (3) given above is conserved, allowing one to easily deduce \(\partial_{\mu}j^{\mu}=0\). It is then established that the topological current \(j^{\mu}\) is a \(\delta\)-function of the field configuration [29; 30]:
\[j^{\mu}=\delta^{2}(\phi)J^{\mu}\left(\frac{\phi}{x}\right), \tag{4}\]
where the 3-dimensional Jacobian \(J^{\mu}\left(\phi/x\right)\) is defined as: \(\varepsilon^{ab}J^{\mu}\left(\phi/x\right)=\varepsilon^{\mu\nu\rho}\partial_ {\nu}\phi^{a}\partial_{\rho}\phi^{b}\). It is simple to indicate that \(j^{\mu}\) equals to zero only when \(\phi^{a}(x_{i})=0\), hence the topological number \(W\) can be determined as follows:
\[W=\int_{\Sigma}j^{0}d^{2}x=\sum_{i=1}^{N}\beta_{i}\eta_{i}=\sum_{i=1}^{N}w_{ i}\,, \tag{5}\]
where \(\beta_{i}\) is the positive Hopf index that counts the number of the loops of the vector \(\phi^{a}\) in the \(\phi\)-space when \(x^{\mu}\) are around the zero point \(z_{i}\), whilst \(\eta_{i}=sign(J^{0}(\phi/x)_{z})=\pm 1\) is the Brouwer degree, and \(w_{i}\) is the winding number for the \(i\)-th zero point of \(\phi\) that is contained in \(\Sigma\). It is important to keep in mind that two loops \(\Sigma_{1}\) and \(\Sigma_{2}\) have the same winding number if they both enclose the same zero point of \(\phi\). On the other hand, if there is no zero point in the surrounding region, then one can arrive at the topological number: \(W=0\).
In the following, we shall investigate the topological number of the four-dimensional Schwarzschild-AdS black hole via the above topological approach. For the Schwarzschild-AdS\({}_{4}\) black hole, its metric has the form [31]
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}\big{(}d\theta^{2}+\sin^{2}\theta d \varphi^{2}\big{)}\,, \tag{6}\]
where
\[f(r)=1-\frac{2m}{r}+\frac{r^{2}}{l^{2}}\,,\]
in which \(m\) is the mass parameter, and \(l\) is the cosmological scale associated with the pressure \(P=3/(8\pi l^{2})\) of the four-dimensional AdS black holes [32; 33; 34]. The mass and entropy associated with the above solution (6) can be computed via the standard method and have the following exquisite forms:
\[M=m\,,\qquad S=\pi r_{h}^{2}\,, \tag{7}\]
where \(r_{h}\) are the locations of the event and Cauchy horizons that satisfy the equation: \(f(r_{h})=0\).
For the Schwarzschild-AdS\({}_{4}\) black hole, one can define the generalized Helmholtz free energy as
\[\mathcal{F}=\frac{r_{h}}{2}+\frac{4\pi}{3}Pr_{h}^{2}-\frac{\pi r_{h}}{\tau}\,. \tag{8}\]
The components of the vector \(\phi\) can be easily calculated as:
\[\phi^{r_{h}}=\frac{1}{2}+4\pi Pr_{h}^{2}-\frac{2\pi r_{h}}{\tau}\,,\quad\phi^{ \Theta}=-\cot\Theta\csc\Theta\,. \tag{9}\]
By solving the equation \(\phi^{r_{h}}=0\), one can obtain a curve on the \(r_{h}-\tau\) plane. For the four-dimensional Schwarzschild-AdS black hole, one can get
\[\tau=\frac{4\pi r_{h}}{1+8\pi Pr_{h}^{2}}\,. \tag{10}\]
Taking the pressure \(Pr_{0}^{2}=0.0022\), where \(r_{0}\) is an arbitrary length scale set by the size of a cavity enclosing the black hole, we show zero points of \(\phi^{r_{h}}\) in the \(r_{h}-\tau\) plane in Fig. 1. For small \(\tau\), such as \(\tau=\tau_{1}\), there are two intersection points for the Schwarzschild-AdS\({}_{4}\) black hole. The intersection points exactly satisfy the condition \(\tau=1/T\), and thus represent the
on-shell Schwarzschild-AdS\({}_{4}\) black holes with the characteristic temperature \(T=1/\tau\). The two intersection points for the Schwarzschild-AdS\({}_{4}\) black hole can coincide with each other when \(\tau=\tau_{c}\), and then vanish when \(\tau>\tau_{c}\), therefore \(\tau_{c}\) is an annihilation point and can be found at \(\tau_{c}=26.72\tau_{0}\), which can be seen straightforwardly from Fig. 1. Furthermore, the annihilation point \(\tau_{c}\) divides the Schwarzschild-AdS\({}_{4}\) black hole into the upper and lower branches with the winding numbers \(w=1\) and \(w=-1\), respectively. One can see that, for the Schwarzschild-AdS\({}_{4}\) solutions at any given temperature, there can exist one thermodynamically stable black hole and one thermodynamically unstable black hole. It has been shown that the winding number can be used to characterize the local thermodynamic stability, with positive and negative values corresponding to thermodynamically stable and unstable black holes [7], respectively.
Alternatively, the unit vector field \(n\) can also be plotted for any arbitrarily selected typical values (keep in mind that \(\tau\) must be less than \(\tau_{c}\)), for instance, \(\tau/r_{0}=26\) and \(Pr_{0}^{2}=0.0022\) in Fig. 2, where we find two zero points: ZP\({}_{1}\) at \(r_{h}=3.36\tau_{0}\) and ZP\({}_{2}\) at \(r_{h}=5.38\tau_{0}\), with the winding numbers \(w_{1}=1\) and \(w_{2}=-1\), respectively, to determine the topological number for the four-dimensional Schwarzschild-AdS black hole. Based upon the local property of the zero point, one can easily find that the topological number is: \(W=-1+1=0\) for the Schwarzschild-AdS\({}_{4}\) black hole, which is consistent with the result given in Ref. [14].
In addition, the fact that the topological number of the Schwarzschild-AdS\({}_{4}\) black hole is zero while that of the Schwarzschild black hole is \(-1\)[7] suggests that the cosmological constant significantly changes the topological number of the static black holes. Furthermore, since the topological number of the RN-AdS\({}_{4}\) black hole is: \(W=1\), it is easy to see that the Schwarzschild-AdS\({}_{4}\) black hole and the RN-AdS\({}_{4}\) black hole belong to two different topological classes according to the topological classification method proposed in Ref. [7], which indicates that the electric charge has an important influence on the topological number of static AdS\({}_{4}\) black holes.
## III Kerr-AdS\({}_{4}\) black hole
From now on, we come to the main subject of this paper, i.e., exploring the topological number of rotating AdS black holes. In this section, we will focus on the topological number of the four-dimensional Kerr-AdS black hole, whose metric in the asymptotically non-rotating frame has the form [31]
\[ds^{2} = -\frac{\Delta_{r}}{\Sigma}\Big{(}\frac{\Delta_{\theta}}{\Xi}dt- \frac{a}{\Xi}\sin^{2}\theta d\varphi\Big{)}^{2}+\frac{\Sigma}{\Delta_{r}}dr^{ 2}+\frac{\Sigma}{\Delta_{\theta}}d\theta^{2} \tag{11}\] \[+\frac{\Delta_{\theta}\sin^{2}\theta}{\Sigma}\Big{[}\frac{a(r^{2} +l^{2})}{l^{2}\Xi}dt-\frac{r^{2}+a^{2}}{\Xi}d\varphi\Big{]}^{2}\]
in terms of the Boyer-Lindquist coordinates, where
\[\Delta_{r} = (r^{2}+a^{2})\Big{(}1+\frac{r^{2}}{l^{2}}\Big{)}-2mr\,,\quad \Xi=1-\frac{a^{2}}{l^{2}}\,,\] \[\Delta_{\theta} = 1-\frac{a^{2}}{l^{2}}\cos^{2}\theta\,,\quad\Sigma=r^{2}+a^{2}\cos ^{2}\theta\,,\]
in which \(a\) is the rotation parameter, \(m\) is the mass parameter, and \(l\) is the AdS radius.
The mass \(M\) and entropy \(S\) associated with the above solution (11) are [35]
\[M=\frac{m}{\Xi^{2}}\,,\qquad S=\frac{\pi(r_{h}^{2}+a^{2})}{\Xi}\,. \tag{12}\]
Using the definition of the generalized off-shell Helmholtz free energy (1) and \(l^{2}=3/(8\pi P)\), one can easily get
\[\mathscr{F}=\frac{3(r_{h}^{2}+a^{2})\big{[}2\pi r_{h}(8\pi Pa^{2}+4Pr_{h}\tau- 3)+3\tau\big{]}}{2r_{h}(8\pi Pa^{2}-3)^{2}\tau} \tag{13}\]
Figure 1: Zero points of the vector \(\varphi^{n}\) shown on the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.0022\) for the Schwarzschild-AdS\({}_{4}\) black hole. The annihilation point for this black hole is represented by the red dot with \(\tau_{c}\). There are two Schwarzschild-AdS\({}_{4}\) black holes when \(\tau=\tau_{1}\). Obviously, the topological number is: \(W=1-1=0\).
Figure 2: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane with \(Pr_{0}^{2}=0.0022\) and \(\tau/r_{0}=26\) for the Schwarzschild-AdS\({}_{4}\) black hole. The zero points (ZPs) marked with black dots are at \((r_{h}/r_{0},\Theta)=(3.36,\pi/2)\), \((5.38,\pi/2)\) for ZP\({}_{1}\) and ZP\({}_{2}\), respectively. The blue contours \(C_{i}\) are closed loops surrounding the zero points.
for the Kerr-AdS\({}_{4}\) black hole. Then the components of the vector \(\phi\) can be computed as
\[\phi^{r_{h}} = \frac{12\pi(8\pi Pq^{2}-3)r_{h}^{3}+3a^{2}(8\pi Pr_{h}^{2}-3)\tau}{2 r_{h}^{2}(8\pi Ra^{2}-3)^{2}\tau} \tag{14}\] \[+\frac{9(1+8\pi Pr_{h}^{2})}{2(8\pi Pa^{2}-3)^{2}}\,,\] \[\phi^{\Theta} = -\cot\Theta\csc\Theta\,. \tag{15}\]
By solving the equation \(\phi^{r_{h}}=0\), one can obtain
\[\tau=\frac{4\pi r_{h}^{3}(3-8\pi Pa^{2})}{a^{2}(8\pi Pr_{h}^{2}-3)+3(8\pi Pr_{ h}^{2}+1)r_{h}^{2}} \tag{16}\]
as the zero point of the vector field \(\phi\).
Taking the pressure \(Pr_{0}^{2}=0.0022\) and the rotation parameter \(a=r_{0}\) for the Kerr-AdS\({}_{4}\) black hole, we show zero points of \(\phi^{r_{h}}\) in the \(r_{h}-\tau\) plane in Fig. 3, and the unit vector field \(n\) in Fig. 4 with \(\tau=24r_{0}\), \(26r_{0}\), and \(28r_{0}\), respectively. From Figs. 3 and 4, one can observe that for these values of \(Pr_{0}^{2}\) and \(a/r_{0}\), one generation point and one annihilation point can be found at \(\tau/r_{0}=\tau_{a}/r_{0}=24.90\) and \(\tau/r_{0}=\tau_{b}/r_{0}=26.82\), respectively. One can see that there are one large black hole branch for \(\tau<\tau_{a}\), three black hole branches for \(\tau_{a}<\tau<\tau_{b}\), and one small black hole branch for \(\tau>\tau_{b}\). Calculating the winding number \(w\) for these three black hole branches, we find that both the small and large black hole branches have \(w=1\), while the intermediate black hole branch has \(w=-1\). The Kerr-AdS\({}_{4}\) black hole always has the topological number \(W=1\), unlike the Kerr black hole, which has a topological number of zero [20]. Therefore, from the thermodynamic topological standpoint, the Kerr-AdS\({}_{4}\) black hole and the Kerr black hole are different kinds of black hole solutions, and this indicates that the cosmological constant is important in determining the topological number for the rotating black hole. What is more, since the topological number of the Schwarzschild-AdS\({}_{4}\) black hole is zero, while that of the Kerr-AdS\({}_{4}\) black
Figure 3: Zero points of \(\phi^{r_{h}}\) shown in the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.0022\) and \(a=r_{0}\) for the Kerr-AdS\({}_{4}\) black hole. The red solid, blue dashed, and black solid lines are for the large black hole (LBH), intermediate black hole (IBH), and small black hole (SBH), respectively. The annihilation and generation points are represented by red and black dots, respectively.
hole is 1, so it can be inferred that the rotation parameter has a remarkable effect on the topological number for the uncharged AdS\({}_{4}\) black hole.
## IV Singly rotating Kerr-AdS black holes in arbitrary dimensions
In this section, we will extend the above discussions to the cases of higher-dimensional rotating black holes by considering the singly rotating Kerr-AdS solutions in arbitrary dimensions. For \(d\)-dimensional singly rotating Kerr-AdS black holes, the metric in the asymptotically non-rotating frame has the form [36; 37]
\[ds^{2} = -\frac{\Delta_{r}}{\Sigma}\Big{(}\frac{\Delta_{\theta}}{\Xi}dt- \frac{a}{\Xi}\sin^{2}\theta d\varphi\Big{)}^{2}+\frac{\Sigma}{\Delta_{r}}dr^{ 2}+\frac{\Sigma}{\Delta_{\theta}}d\theta^{2} \tag{17}\] \[+\frac{\Delta_{\theta}\sin^{2}\theta}{\Sigma}\Big{[}\frac{a(r^{2 }+l^{2})}{l^{2}\Xi}dt-\frac{r^{2}+a^{2}}{\Xi}d\varphi\Big{]}^{2}\] \[+r^{2}\cos^{2}\theta d\Omega_{d-4}^{2}\,,\]
where \(d\Omega_{d-4}\) denotes the line element of the \((d-4)\)-dimensional unit sphere, and
\[\Delta_{r}=(r^{2}+a^{2})\Big{(}1+\frac{r^{2}}{l^{2}}\Big{)}-2mr ^{5-d}\,,\quad\Xi=1-\frac{a^{2}}{l^{2}}\,,\] \[\Delta_{\theta}=1-\frac{a^{2}}{l^{2}}\cos^{2}\theta\,,\quad\Sigma =r^{2}+a^{2}\cos^{2}\theta\,.\]
The thermodynamic quantities are [38]
\[M =\frac{\omega_{d-2}m}{4\pi\Xi^{2}}\Big{[}\frac{(d-4)\Xi}{2}+1 \Big{]}\,,\quad J=\frac{\omega_{d-2}ma}{4\pi\Xi^{2}}\,, \tag{18}\] \[\Omega =\frac{a(r_{h}^{2}+l^{2})}{l^{2}(r_{h}^{2}+a^{2})}\,,\quad S= \frac{\mathcal{A}}{4}=\frac{\omega_{d-2}}{4\Xi}(r_{h}^{2}+a^{2})r_{h}^{d-4}\,,\] \[T =\frac{r_{h}}{2\pi}\Big{(}1+\frac{r_{h}^{2}}{l^{2}}\Big{)}\Big{(} \frac{1}{r_{h}^{2}+a^{2}}+\frac{d-3}{2r_{h}^{2}}\Big{)}-\frac{1}{2\pi r_{h}}\,,\] \[V =\frac{r_{h}\mathcal{A}}{d-1}\Big{[}1+\frac{a^{2}(r_{h}^{2}+l^{2} )}{(d-2)\Xi l^{2}r_{h}^{2}}\Big{]}\,,\quad P=\frac{(d-1)(d-2)}{16\pi l^{2}}\,,\]
where \(\omega_{d-2}=2\pi^{(d-1)/2}/\Gamma[(d-1)/2]\), and \(r_{h}\) is determined by the horizon equation: \(\Delta_{r}=0\).
In our previous paper [20], we have uniformly considered the topological numbers of \(d\)-dimensional singly rotating Kerr black holes. Here, we will extend that work to the cases of \(d\)-dimensional singly rotating Kerr-AdS black holes. Since there is one more additional thermodynamic quantity associated with the cosmological constant to be included, it will be more convenient to separately consider the topological numbers of different dimensions.
### \(d=5\) case
We first consider \(d=5\) case. From Eq. (IV), one can obtain the expression of the generalized Helmholtz free energy as
\[\mathcal{F} =-\frac{\pi(r_{h}^{2}+a^{2})}{8\pi(4\pi Pa^{2}-3)^{2}}\Big{[}12 \pi r_{h}(3-4\pi Pa^{2}) \tag{19}\] \[\quad+\tau(4\pi Pr_{h}^{2}+3)(4\pi Ra^{2}-9)\Big{]}\,,\]
so the components of the vector \(\phi\) can be computed as
\[\phi^{\prime_{h}} =\frac{\pi}{4\tau(4\pi Pa^{2}-3)^{2}}\Big{\{}6\pi(4\pi Pa^{2}-3)( 3r_{h}^{2}+a^{2}) \tag{20}\] \[-r_{h}\tau(4\pi Pa^{2}-9)\big{[}4\pi P(2r_{h}^{2}+a^{2})+3\big{]} \Big{\}}\,,\] \[\phi^{\Theta} =-\cot\Theta\csc\Theta\,. \tag{21}\]
It is simple to obtain
\[\tau=\frac{6\pi(4\pi Pa^{2}-3)(3r_{h}^{2}+a^{2})}{r_{h}(4\pi Pa^{2}-9)[4\pi P( 2r_{h}^{2}+a^{2})+3]} \tag{22}\]
as the zero point of the vector field \(\phi\).
For the singly rotating Kerr-AdS\({}_{5}\) black hole, we plot the zero points of the component \(\phi^{\prime_{h}}\) with \(Pr_{0}^{2}=0.02\) and \(a/r_{0}=1\) in Fig. 5, and the unit vector field \(n\) in Fig. 6 with \(\tau=5r_{0}\), \(7r_{0}\), and \(9r_{0}\), respectively. Note that for these values of \(Pr_{0}\) and \(a/r_{0}\), one generation point and one annihilation point can be found in Fig. 5 at \(\tau/r_{0}=\tau_{a}/r_{0}=5.96\) and \(\tau/r_{0}=\tau_{b}/r_{0}=7.35\), respectively. From Figs. 5 and 6, one can easily obtain the topological number \(W=1\) for the singly rotating Kerr-AdS\({}_{5}\) black hole using the local property of the zero points, which is the same as that of the four-dimensional Kerr-AdS black hole in the previous section but different from that of the five-dimensional singly rotating Kerr black hole, which is \(W=0\)[20]. Therefore, the topological number of the five-dimensional rotating black hole is significantly changed when the cosmological constant is turned on.
Figure 5: The zero points of \(\phi^{\prime_{h}}\) shown in the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.02\) and \(a/r_{0}=1\) for the singly rotating Kerr-AdS\({}_{5}\) black hole. The red solid, blue dashed, and black solid lines are for the large black hole (LBH), intermediate black hole (IBH), and small black hole (SBH), respectively. The annihilation and generation points are represented by red and black dots, respectively.
### \(d=6\) case
Next, we consider \(d=6\) case whose generalized Helmholtz free energy is
\[\mathcal{F} = -\frac{2\pi r_{h}(r_{h}^{2}+a^{2})}{3\tau(4\pi Pa^{2}-5)^{2}}\Big{[} 5\pi r_{h}(5-4\pi Pa^{2}) \tag{23}\] \[+\tau(4\pi P_{h}^{2}+5)(2\pi Pa^{2}-5)\Big{]}\,.\]
Thus, the components of the vector \(\phi\) are
\[\phi^{r_{h}} = \frac{2\pi}{3\tau(4\pi Pa^{2}-5)}\Big{\{}10\pi r_{h}(4\pi Pa^{2}- 5)(2r_{h}^{2}+a^{2}) \tag{24}\] \[-\tau(2\pi Pa^{2}-5)\big{[}20\pi Pr_{h}^{4}+3(4\pi Pa^{2}+5)r_{h} ^{2}\] \[+5a^{2}\big{]}\Big{\}}\,,\] \[\phi^{\Theta} = -\cot\Theta\csc\Theta\,. \tag{25}\]
So the zero point of the vector field \(\phi\) is
\[\tau=\frac{10\pi r_{h}(4\pi Pa^{2}-5)(2r_{h}^{2}+a^{2})}{(2\pi Pa^{2}-5)\left[ 20\pi Pr_{h}^{4}+3(4\pi Pa^{2}+5)r_{h}^{2}+5a^{2}\right]}\,. \tag{26}\]
Taking \(Pr_{0}^{2}=0.1\) and \(a/r_{0}=1\) for the singly rotating Kerr-AdS\({}_{6}\) black hole, we plot zero points of \(\phi^{r_{h}}\) in the \(r_{h}-\tau\) plane in Fig. 7, and the unit vector field \(n\) with \(\tau=2.5r_{0}\) in Fig. 8, respectively. For small \(\tau\), such as \(\tau=\tau_{1}\), there are two intersection points for the singly rotating Kerr-AdS\({}_{6}\) black hole. These two intersection points can coincide with each other when \(\tau=\tau_{\rm c}\), and then vanish when \(\tau>\tau_{\rm c}\), therefore \(\tau_{\rm c}\) is an annihilation point that can be found at \(\tau_{\rm c}=2.81r_{0}\). Based upon the local property of the zero point, one can easily obtain the the topological number \(W=0\) for the singly rotating Kerr-AdS\({}_{6}\) black hole, which is different from that of the six-dimensional singly rotating Kerr black hole (\(W=-1\)) [20]. This indicates that the cosmological constant is important in determining the topological number for the six-dimensional rotating black hole.
Figure 6: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane. The zero points (ZPs) marked with black dots are at \((r_{h}/r_{0},\Theta)=(6.07,\pi/2),\ (0.35,\pi/2),\ (1.52,\pi/2),\ (3.18,\pi/2),\)\((0.23,\pi/2),\) for ZP\({}_{1}\), ZP\({}_{2}\), ZP\({}_{3}\), ZP\({}_{4}\), and ZP\({}_{5}\), respectively. The blue contours \(C_{i}\) are closed loops surrounding the zero points.
Figure 7: Zero points of \(\phi^{r_{h}}\) shown in the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.1\) and \(a/r_{0}=1\) for the singly rotating Kerr-AdS\({}_{6}\) black hole. The red dot with \(\tau_{\rm c}\) represents the annihilation point for the black hole. There are two singly rotating Kerr-AdS\({}_{6}\) black holes when \(\tau=\tau_{1}\). It is easy to obtain the topological number: \(W=1-1=0\).
### \(d=7\) case
Then, we consider \(d=7\) case with its generalized Helmholtz free energy being
\[\mathcal{F}= -\frac{3\pi^{2}r_{h}^{2}(r_{h}^{2}+a^{2})}{16\tau(15-8\pi Pa^{2})^{2 }}\Big{[}20\pi r_{h}(15-8\pi Pa^{2})\] \[+\tau(8\pi Pr_{h}^{2}+15)(8\pi Pa^{2}-25)\Big{]}\,. \tag{27}\]
Therefore, one can straightforwardly obtain
\[\tau=\frac{10\pi r_{h}(8\pi Pa^{2}-15)(5r_{h}^{2}+3a^{2})}{(8\pi Pa^{2}-25) \big{[}6r_{h}^{2}(4\pi Pr_{h}^{2}+5)+a^{2}(16\pi Pr_{h}^{2}+15)\big{]}} \tag{28}\]
by solving the equation \(\phi^{r_{h}}=0\).
Taking \(Pr_{0}^{2}=0.3\) and \(a/r_{0}=1\) for the singly rotating Kerr-AdS\({}_{7}\) black hole, we plot the zero points of \(\phi^{r_{h}}\) in the \(r_{h}-\tau\) plane in Fig. 9, and the unit vector field \(n\) with \(\tau=r_{0}\) in Fig. 10, respectively. Note that for the values of \(Pr_{0}^{2}=0.3\) and \(a/r_{0}=1\), one annihilation point can be found at \(\tau/r_{0}=\tau_{c}/r_{0}=1.30\). Based on the local property of the zero points, we get the topological number \(W=0\) for the singly rotating Kerr-AdS\({}_{7}\) black hole, while that of the seven-dimensional singly rotating Kerr black hole is: \(W=-1\)[20]. This demonstrates that the cosmological constant is crucial to determine the topological number of the rotating black hole in seven dimensions.
### \(d=8\) case
Let us turn to the \(d=8\) case. Similar to the procedure done in the previous three subsections, one can get the generalized Helmholtz free energy as follows:
\[\mathcal{F}= -\frac{2\pi^{2}r_{h}^{3}(r_{h}^{2}+a^{2})}{15\tau(8\pi Pa^{2}-21) ^{2}}\Big{[}42\pi r_{h}(21-8\pi Ra^{2})\] \[+\tau(8\pi Pr_{h}^{2}+21)(16\pi Ra^{2}-63)\Big{]}\,. \tag{29}\]
As a result, by solving the equation \(\phi^{r_{h}}=0\), one can easily arrive at
\[\tau=\frac{84\pi r_{h}(8\pi Pa^{2}-21)(3r_{h}^{2}+2a^{2})}{(16\pi Pa^{2}-63) \big{[}56\pi Pr_{h}^{4}+5(8\pi Pa^{2}+21)r_{h}^{2}+63a^{2}\big{]}} \tag{30}\]
as the zero point of the vector field.
In Figs. 11 and 12, taking \(Pr_{0}^{2}=0.5\) and \(a/r_{0}=1\) for the singly rotating Kerr-AdS\({}_{8}\) black hole, we plot the zero points of \(\phi^{r_{h}}\) in the \(r_{h}-\tau\) plane and the unit vector field \(n\) with \(\tau/r_{0}=0.8\), respectively. Note that for the values of
Figure 8: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane with \(Pr_{0}^{2}=0.1\), \(a/r_{0}=1\), and \(\tau/r_{0}=2.5\) for the singly rotating Kerr-AdS\({}_{6}\) black hole. The zero points (ZPs) marked with black dots are at \((r_{h}/r_{0},\Theta)=(0.80,\pi/2)\), \((2.43,\pi/2)\) for \(\mathrm{ZP_{1}}\) and \(\mathrm{ZP_{2}}\), respectively. The blue contours \(C_{i}\) are closed loops surrounding the zero points.
Figure 10: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane with \(Pr_{0}^{2}=0.3\), \(a/r_{0}=1\), and \(\tau=r_{0}\) for the singly rotating Kerr-AdS\({}_{7}\) black hole. The zero points (ZPs) marked with black dots are at \((r_{h}/r_{0},\Theta)=(0.48,\pi/2)\), \((2.40,\pi/2)\) for \(\mathrm{ZP_{1}}\) and \(\mathrm{ZP_{2}}\), respectively. The blue contours \(C_{i}\) are closed loops surrounding the zero points.
Figure 9: Zero points of \(\phi^{r_{h}}\) shown in the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.3\) and \(a/r_{0}=1\) for the singly rotating Kerr-AdS\({}_{7}\) black hole. The red dot with \(\tau_{c}\) denotes the annihilation point for the black hole. There are two singly rotating Kerr-AdS\({}_{7}\) black holes for \(\tau=\tau_{1}\). Obviously, the topological number is \(W=0\).
\(Pr_{0}^{2}=0.5\) and \(a/r_{0}=1\), one annihilation point can be found at \(\tau/r_{0}=\tau_{\rm c}/r_{0}=0.92\). Based on the local property of the zero points, it is easy to find that the topological number \(W=0\) for the singly rotating Kerr-AdS\({}_{8}\) black hole. Combined with the fact that the eight-dimensional singly rotating Kerr black hole has a topological number: \(W=-1\)[20], it is evident that the cosmological constant is important in determining the topological number for the rotating black hole in eight dimensions.
### \(d=9\) case
Finally, we investigate the topological number for the nine-dimensional singly rotating Kerr-AdS black hole whose generalized Helmholtz free energy is
\[\mathcal{F}= -\frac{\pi^{3}r_{h}^{4}(r_{h}^{2}+a^{2})}{48\tau(2\pi Pr^{2}-7)^{ 2}}\Big{[}28\pi r_{h}(7-2\pi Pa^{2})\] \[+\tau(10\pi Pa^{2}-49)(2\pi Pr_{h}^{2}+7)\Big{]}\,. \tag{31}\]
Thus, the zero point of the vector constructed in the topological approach can be written as
\[\tau=\frac{14\pi r_{h}(2\pi Pa^{2}-7)(7r_{h}^{2}+5a^{2})}{(10\pi Pa^{2}-49)[8 \pi Pr_{h}^{4}+3(2\pi Pa^{2}+7)r_{h}^{2}+14a^{2}]}\,. \tag{32}\]
In Figs. 13 and 14, taking \(Pr_{0}^{2}=1\) and \(a/r_{0}=1\) for the nine-dimensional singly rotating Kerr-AdS black hole, we plot the zero points of \(\phi^{th}\) in the \(r_{h}-\tau\) plane and the unit vector field \(n\) with \(\tau/r_{0}=0.25\), respectively. Note that for the values of \(Pr_{0}^{2}=1\) and \(a/r_{0}=1\), one annihilation point can be found at \(\tau/r_{0}=\tau_{\rm c}/r_{0}=0.27\). Based on the local property of the zero points, the topological number is easily determined as \(W=0\), which is different from the topological number of the nine-dimensional singly rotating Kerr black hole (\(W=-1\))[20]. Therefore, this fact indicates that the cosmological constant plays a significant role in determining the topological number for the nine-dimensional rotating black hole.
### Summary: The impact of dimension of the spacetime
Summarizing our results from Subsects. IV.1-IV.5, we can find that the topological number of the five-dimensional singly rotating Kerr-AdS\({}_{5}\) black hole is \(W=0\), while the six- to nine-dimensional singly rotating Kerr-AdS black holes have both \(W=-1\). Thus it indicates the dimension of spacetime has an important effect on the topological number for rotating AdS black holes.
Figure 11: Zero points of \(\phi^{th}\) shown in the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.5\) and \(a/r_{0}=1\) for the singly rotating eight-dimensional Kerr-AdS black hole. The red dot with \(\tau_{\rm c}\) denotes the annihilation point for the black hole. There are two singly rotating Kerr-AdS\({}_{8}\) black holes for \(\tau=\tau_{1}\). It is easy to find that the topological number is \(W=0\).
Figure 12: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane with \(Pr_{0}^{2}=0.5\), \(a/r_{0}=1\), and \(\tau/r_{0}=0.8\) for the singly rotating Kerr-AdS\({}_{8}\) black hole. The zero points (ZPs) marked with black dots are at \((r_{h}/r_{0},\Theta)=(0.59,\pi/2)\), \((1.85,\pi/2)\) for \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\), respectively. The blue contours \(C_{i}\) are closed loops surrounding the zero points.
## V Three-dimensional rotating BTZ black hole
Because the BTZ black hole [24; 25] is a first nontrivial exact solution to the three-dimensional gravity theory, it is important to study the topological number of the rotating BTZ black hole. Therefore, in this section, we turn our attention to the three-dimensional BTZ black hole solution, whose metric is given by [24; 25; 26; 36]
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}\Big{(}d\varphi-\frac{J}{2r^{2}}dt \Big{)}^{2}\,, \tag{33}\]
where
\[f(r)=-2m+\frac{r^{2}}{l^{2}}+\frac{J^{2}}{4r^{2}}\,,\]
in which \(m\) is the mass parameter, \(l\) is the AdS radius, \(J\) is the angular momentum that must satisfy \(|J|\leq ml\).
The mass and entropy associated with the above solution (33) are given by [39]
\[M=\frac{m}{4}=\frac{r_{h}^{2}}{8l^{2}}+\frac{J^{2}}{32r_{h}^{2}}\,,\qquad S= \frac{1}{2}\pi r_{h}\,, \tag{34}\]
where \(r_{h}\) is the location of the event horizon. Utilizing the definition of the generalized Helmholtz free energy (1) and substituting \(l^{2}=1/(8\pi P)\), one can obtain
\[\mathcal{F}=\pi Pr_{h}^{2}+\frac{J^{2}}{32r_{h}^{2}}-\frac{\pi r_{h}}{2\tau}\,. \tag{35}\]
Thus, the components of the vector \(\phi\) are
\[\phi^{r_{h}}=2\pi Pr_{h}-\frac{J^{2}}{16r_{h}^{3}}-\frac{\pi}{2\tau}\,,\quad \phi^{\Theta}=-\cot\Theta\csc\Theta\,. \tag{36}\]
By solving the equation \(\phi^{r_{h}}=0\), one can obtain
\[\tau=\frac{8\pi r_{h}^{3}}{32\pi Pr_{h}^{4}-J^{2}} \tag{37}\]
as the zero point of the vector field \(\phi\).
For the three-dimensional rotating BTZ black hole, we take \(Pr_{0}^{2}=0.02\) and \(J/r_{0}=0.5\), and then plot the zero points of the component \(\phi^{r_{h}}\) in Fig. 15, and the unit vector field \(n\) with \(\tau/r_{0}=10\) in Fig. 16, respectively. Obviously, there is only one thermodynamically stable rotating BTZ black hole for any value of \(\tau\), which is also consistent with the conclusion given in Ref. [40] via the Joule-Thomson expansion. Based on the local property of the zero points, the topological number \(W=1\) can be found for the three-dimensional rotating BTZ black hole. In the Appendix A, we will also investigate the topological number of the three dimensional charged BTZ black hole, and find that its value is: \(W=0\).
Figure 16: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane with \(Pr_{0}^{2}=0.02\), \(J/r_{0}=0.5\), and \(\tau/r_{0}=10\) for the rotating BTZ black hole. The zero point (ZP) marked with black dot is at \((r_{h}/r_{0},\Theta)=(1.31,\pi/2)\). The blue contour \(C\) is closed loop surrounding the zero point.
Figure 14: The red arrows represent the unit vector field \(n\) on a portion of the \(r_{h}-\Theta\) plane with \(Pr_{0}^{2}=1\), \(a/r_{0}=1\), and \(\tau/r_{0}=0.25\) for the singly rotating Kerr-AdS\({}_{9}\) black hole. The zero points (ZPs) marked with black dots are at \((r_{h}/r_{0},\Theta)=(0.56,\pi/2)\), \((1.39,\pi/2)\) for \(\mathcal{\rm ZP}_{1}\) and \(\mathcal{\rm ZP}_{2}\), respectively. The blue contours \(C_{i}\) are closed loops surrounding the zero points.
## VI Kerr-Newman-AdS\({}_{4}\) black hole
Finally, we would like to investigate the topological number of the four-dimensional Kerr-Newman-AdS black hole [31], whose metric and Abelian gauge potential are [35; 36]
\[ds^{2} = -\frac{\Delta_{r}}{\Sigma}\Big{(}\frac{\Delta_{\theta}}{\Xi}dt- \frac{a}{\Xi}\sin^{2}\theta d\varphi\Big{)}^{2}+\frac{\Sigma}{\Delta_{r}}dr^{ 2}+\frac{\Sigma}{\Delta_{\theta}}d\theta^{2} \tag{38}\] \[+\frac{\Delta_{\theta}\sin^{2}\theta}{\Sigma}\Big{[}\frac{a(r^{2} +l^{2})}{l^{2}\Xi}dt-\frac{r^{2}+a^{2}}{\Xi}d\varphi\Big{]}^{2}\,,\] \[A = \frac{qr}{\Sigma}\Big{(}\frac{\Delta_{\theta}}{\Xi}dt-\frac{a \sin^{2}\theta}{\Xi}d\varphi\Big{)}\,, \tag{39}\]
where
\[\Delta_{r}=(r^{2}+a^{2})\Big{(}1+\frac{r^{2}}{l^{2}}\Big{)}-2mr+q ^{2}\,,\quad\Xi=1-\frac{a^{2}}{l^{2}}\,,\] \[\Delta_{\theta}=1-\frac{a^{2}}{l^{2}}\cos^{2}\theta\,,\quad\Sigma =r^{2}+a^{2}\cos^{2}\theta\,,\]
in which \(a\), \(m\) and \(q\) are the rotation, the mass and electric charge parameters, respectively, and \(l\) is the AdS radius. The horizon radius \(r_{h}\) is determined by equation: \(\Delta_{r}=0\).
The mass and entropy associated with the above metric (38) can be calculated via the standard method and their results are
\[M=\frac{m}{\Xi^{2}}\,,\qquad S=\frac{\pi(r_{h}^{2}+a^{2})}{\Xi}\,. \tag{40}\]
Then, one can straightforwardly obtain the generalized Helmholtz free energy of this black hole as
\[\mathcal{F} = \frac{24\pi Pr_{h}^{2}(r_{h}^{2}+a^{2})+a^{2}[16\pi PQ^{2}(4\pi Pa ^{2}-3)+9]}{2r_{h}(8\pi Pa^{2}-3)^{2}} \tag{41}\] \[+\frac{9(r_{h}^{2}+Q^{2})}{2r_{h}(8\pi Pa^{2}-3)^{2}}+\frac{6\pi (r_{h}^{2}+a^{2})}{2\tau(8\pi Ra^{2}-3)}\,,\]
with \(Q=q/\Xi\) being the electric charge of the black hole. Therefore, the zero point of the vector can be easily given as
\[\tau=\frac{12\pi r_{h}^{3}(8\pi Pa^{2}-3)}{X-24\pi P(3r_{h}^{2}+a^{2})r_{h}^{2 }}\,, \tag{42}\]
where \(X=a^{2}[16\pi PQ^{2}(4\pi Pa^{2}-3)+9]+9(Q^{2}-r_{h}^{2})\).
For the four-dimensional Kerr-Newman-AdS black hole, we take \(Pr_{0}^{2}=0.02\), \(a/r_{0}=1\), \(Q/r_{0}=1\), and plot the zero points of the component \(\phi^{r_{h}}\) in Fig. 17, and the unit vector field \(n\) with \(\tau/r_{0}=10\) in Fig. 18, respectively. Obviously, there is only one thermodynamically stable Kerr-Newman-AdS\({}_{4}\) black hole for any value of \(\tau\). Based upon the local property of the zero points, one can get the topological number \(W=1\) for the four-dimensional Kerr-Newman-AdS black hole, which is identical to that of the four-dimensional Kerr-AdS black hole. This fact indicates that the electric charge parameter has no effect on the topological number of rotating AdS black holes. Compared with the four-dimensional Kerr-Newman black hole which has a topological number of zero, it can be inferred that the cosmological constant plays an crucial role in determining the topological number for the rotating charged black hole.
## VII Conclusions
In this paper, we have extended our previous work [20] to the more general rotating AdS black hole cases and investigated the topological numbers of the singly rotating Kerr-AdS black holes in arbitrary dimensions and the four-dimensional Kerr-Newman-AdS black hole as well as the three-dimensional rotating or charged BTZ black hole. Table 1 summarizes some interesting results found in the present work. We find that the \(d\geq 6\) singly rotating Kerr-AdS black holes, the Schwarzschild-AdS black hole, and the charged BTZ black hole belong to the same kind of topological classes because of their topological numbers being \(W=0\), while the Kerr-Newman-AdS black hole, the \(d=4,5\) singly rotating Kerr-AdS black holes, and the rotating BTZ black hole belong to another kind of topological class due to their topological number being \(W=1\).
Figure 17: Zero point of the vector \(\phi^{r_{h}}\) shown on the \(r_{h}-\tau\) plane with \(Pr_{0}^{2}=0.02\), \(a/r_{0}=1\), and \(Q/r_{0}=1\) for the Kerr-Newman-AdS\({}_{4}\) black hole. There is only one stable Kerr-Newman-AdS\({}_{4}\) black hole for any value of \(\tau\). The topological number of this black hole is \(W=1\).
As a new consequence, we have discovered that the topological number of the rotating black holes is significantly influenced by the cosmological constant. Furthermore, combining our results with those in Refs. [7; 14; 20], we have tabulated Table 2, from which we have also observed a new interesting phenomenon: the difference between the topological number of the AdS black hole and that of its corresponding asymptotically flat black hole is always unity. We conjecture that this might also be true for other kinds of black holes. However, this needs to be tested by further investigating the topological numbers of many other black holes and their AdS counterparts.
As far as the impact of the electric charge parameter on the topological number of the four-dimensional black holes is concerned, one can infer from Table 1 that the electric charge parameter can change the topological number of the static AdS\({}_{4}\) black holes, while from Tables 1 and 2 that it has no impact on the topological number of the four-dimensional rotating AdS black holes because the four-dimensional Kerr-AdS and Kerr-Newman-AdS black holes have the same topological number.
Finally, it should be mentioned that all rotating AdS black holes studied in the present paper are under-rotating, namely, their rotation angular velocities are less than the speed of light (i.e., \(a<l\)). Therefore, a most related issue is to investigate their ultraspinning and over-rotating cases, and to test our guess by checking the topological numbers of the ultraspinning AdS black holes (i.e., \(a=l\)) [41; 42; 43; 44; 45; 46; 47; 48] and the over-rotating Kerr-AdS black holes (i.e., \(a>l\)) [49; 50].
###### Acknowledgements.
We thank Prof. Yen Chin Ong for useful advices. We are also greatly indebted to the anonymous referee for his/her constructive comments to improve the presentation of this work. This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 12205243, No. 11675130, by Sichuan Science and Technology Program under Grant No. 2023NSFSC1347, and by the Doctoral Research Initiation Project of China West Normal University under Grant No. 21E028.
## Appendix A Three-dimensional charged BTZ black hole
In this appendix, we will investigate the topological number of the three-dimensional charged BTZ black hole, whose metric reads [51]
\[ds^{2} =-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\varphi^{2}\,, \tag{12}\] \[A =-q\ln\Big{(}\frac{r}{l}\Big{)}dt\,, \tag{13}\]
where
\[f(r)=-2m-\frac{q^{2}}{2}\ln\Big{(}\frac{r}{l}\Big{)}+\frac{r^{2}}{l^{2}}\,, \tag{14}\]
in which \(m\) and \(q\) are the mass parameter and electric charge, respectively, and \(l\) is the AdS radius. The event horizon is determined by: \(f(r_{h})=0\).
For the three-dimensional charged BTZ black hole, the mass and the entropy are [39]
\[M=\frac{m}{4}=\frac{r_{h}^{2}}{8l^{2}}-\frac{q^{2}}{16}\ln\Big{(}\frac{r_{h}} {l}\Big{)}\,,\qquad S=\frac{1}{2}\pi r_{h}\,. \tag{15}\]
Substituting \(l^{2}=1/(8\pi P)\) into the definition of the generalized Helmholtz free energy (1), one can arrive at
\[\mathcal{F}=\pi Pr_{h}^{2}+\frac{q^{2}}{16}\ln(2r_{h}\sqrt{2\pi P})-\frac{\pi r _{h}}{2\tau}\,, \tag{16}\]
\begin{table}
\begin{tabular}{c|c|c|c} \hline BH solution & \(W\) & Generation point & Annihilation point \\ \hline SchwarzschildBH [7] & -1 & 0 & 0 \\ Schwarzschild-AdS\({}_{4}\) BH [14] & 0 & 0 & 1 \\ \hline RN BH [7] & 0 & 1 & 0 \\ RN-AdS\({}_{4}\) BH [7] & 1 & 1 or 0 & 1 or 0 \\ \hline Kerr BH [20] & 0 & 1 & 0 \\ Kerr-AdS\({}_{4}\) BH & 1 & 1 or 0 & 1 or 0 \\ \hline Kerr-Newman BH [20] & 0 & 1 & 0 \\ Kerr-Newman-AdS\({}_{4}\) BH & 1 & 0 & 0 \\ \hline \(d=5\) singly rotating Kerr BH [20] & 0 & 1 & 0 \\ \(d=5\) singly rotating Kerr-AdS BH & 1 & 1 or 0 & 1 or 0 \\ \hline \(d\geq 6\) singly rotating Kerr BH [20] & -1 & 0 & 0 \\ \(d\geq 6\) singly rotating Kerr-AdS BH & 0 & 0 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The topological number \(W\), numbers of generation and annihilation points for various black holes and their AdS extensions.
\begin{table}
\begin{tabular}{c|c|c|c} \hline BH solution & \(W\) & Generation point & Annihilation point \\ \hline Schwarzschild-AdS\({}_{4}\) BH [14] & 0 & 0 & 1 \\ \(d\geq 6\) singly rotating Kerr-AdS BH & 0 & 0 & 1 \\ \hline Charged BTZ BH & 0 & 0 & 1 \\ \hline \(d=5\) singly rotating Kerr-AdS BH & 1 & 1 or 0 & 1 or 0 \\ RN-AdS\({}_{4}\) BH [7] & 1 & 1 or 0 & 1 or 0 \\ Kerr-Newman-AdS\({}_{4}\) BH & 1 & 0 & 0 \\ Rotating BTZ BH & 1 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The topological number \(W\), numbers of generation and annihilation points for various AdS black holes.
Thus, the components of the vector \(\phi\) are
\[\phi^{\tau_{h}}=2\pi Pr_{h}-\frac{q^{2}}{16r_{h}^{3}}-\frac{\pi}{2\tau}\,,\quad \phi^{\Theta}=-\cot\Theta\csc\Theta\,. \tag{10}\]
By solving the equation \(\phi^{\tau_{h}}=0\), one can obtain
\[\tau=\frac{8\pi r_{h}}{32\pi Pr_{h}^{2}+q^{2}} \tag{11}\]
as the zero point of the vector field \(\phi\).
In Figs. 19 and 20, we take \(P=0.002\) and \(q/r_{0}=1\) for the three-dimensional charged BTZ black hole, and plot the zero points of \(\phi^{\tau_{h}}\) in the \(r_{h}-\tau\) plane and the unit vector field \(n\) with \(\tau/r_{0}=25\), respectively. Note that for the values of \(P=0.002\) and \(q/r_{0}=1\), one annihilation point can be found at \(\tau/r_{0}=\tau_{\rm c}/r_{0}=28.03\). Based on the local property of the zero points, one can get its topological number \(W=0\).
Note that by simply eliminating the charge \(q\) in Eq. (11), the zero point of the vector field \(\phi\) of the static neutral BTZ black hole can be directly expressed as \(\tau=1/(4Pr_{h})\), and its topological number can be easily obtained as \(W=1\), which implies that the electric charge has a remarkable impact on the topological number of static AdS\({}_{3}\) black holes.
|
2310.15121 | Rational approximation for Hitchin representations | We show that for $S$ a closed surface of genus at least $3$, the set of
Hitchin representations of $\pi_1(S)$ with image in
$\operatorname{SL}(n,\mathbf{Q})$ is dense in the Hitchin component. When
$n=2$, we recover a particular case of a result of Takeuchi but using a
completely different method. | Jacques Audibert, Michael Zshornack | 2023-10-23T17:26:18Z | http://arxiv.org/abs/2310.15121v1 | # Rational approximation for Hitchin representations
###### Abstract.
We show that for \(S\) a closed surface of genus at least \(3\), the set of Hitchin representations of \(\pi_{1}(S)\) with image in \(\operatorname{SL}(n,\mathbf{Q})\) is dense in the Hitchin component. When \(n=2\), we recover a particular case of a result of Takeuchi but using a completely different method.
The second author acknowledges the support of the Big Bang Theory Graduate Fellowship.
## 1. Introduction
Let \(S\) be a closed, orientable surface of genus at least \(2\) and let
\[\mathcal{H}_{n}(S)\subset\operatorname{Hom}(\pi_{1}(S),\operatorname{SL}(n, \mathbf{R}))/\operatorname{SL}(n,\mathbf{R})\]
denote the Hitchin component of \(S\).1 It is a connected component that consists only of discrete and faithful representations. Representations in these components have many interesting dynamical and geometric properties, the study of which comprises a very active field of modern research (e.g. see [1]). Their arithmetic properties also provide a powerful tool in the study of lattices in higher rank. See for instance [1] and [1] where the authors use Hitchin representations to construct Zariski-dense surface subgroups of \(\operatorname{SL}(n,\mathbf{Z})\) for every odd \(n\).
Footnote 1: For \(n\) even, there are two such components. We pick one of them.
More generally, the underlying rational structure of the Hitchin component provides means of understanding surface subgroups of \(\operatorname{SL}(n,\mathbf{Q})\). Such subgroups also have a number of interesting properties, for one, they satisfy analogs of strong approximation [10]. In addition, while not necessarily contained in lattices of \(\operatorname{SL}(n,\mathbf{R})\), these subgroups are contained in lattices of products of Lie groups and \(p\)-adic Lie groups. As such, they admit actions on spaces constructed from the symmetric spaces and Bruhat-Tits buildings associated to the groups in this product. Numerous open questions surround the properties of such actions, see [12] or [13].
In this note we investigate one aspect of the Hitchin component's rational structure. Let \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) denote the conjugacy classes of representations in \(\mathcal{H}_{n}(S)\) whose image is conjugate to a subgroup of \(\operatorname{SL}(n,\mathbf{Q})\). Representations in \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) provide surface subgroups of \(\operatorname{SL}(n,\mathbf{Q})\) and our main result is that they are abundant, in the sense of the following theorem.
**Theorem 1.1**.: _When the genus of \(S\) is at least \(3\), \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) is dense in \(\mathcal{H}_{n}(S)\)._
From now on, we make the standing assumption that the genus of \(S\) is at least \(3\). Some discussion on what is missing in the genus \(2\) case, as well as a discussion of other generalizations of Theorem 1.1, is left to Section 4.
When \(n=2\), \(\mathcal{H}_{2}(S)\) is the Teichmuller space of \(S\) and Theorem 1.1 recovers a particular case of a classical result of Takeuchi [13]. In this case, the images of the representations studied are always cocompact lattices in \(\operatorname{SL}(2,\mathbf{R})\), whereas they are always of infinite covolume when \(n>2\), thus our proof uses different methods.
While Theorem 1.1 concerns conjugacy classes of representations, the same statement holds at the level of individual representations. Let
\[\widetilde{\mathcal{H}}_{n}(S)\subset\operatorname{Hom}(\pi_{1}(S),\operatorname {SL}(n,\mathbf{R}))\]
be the connected component of representations corresponding to the Hitchin component. If \(\widetilde{\mathcal{H}}_{n}(S)_{\mathbf{Q}}\) denotes the set of representations whose image is contained in \(\operatorname{SL}(n,\mathbf{Q})\), then Theorem 1.1, along with the well-known fact that \(\operatorname{SL}(n,\mathbf{Q})\) is dense in \(\operatorname{SL}(n,\mathbf{R})\), gives rise to the following immediate corollary.
**Corollary 1.1.1**.: \(\widetilde{\mathcal{H}}_{n}(S)_{\mathbf{Q}}\) _is dense in \(\widetilde{\mathcal{H}}_{n}(S)\)._
_Acknowledgements._ The authors are indebted to Arnaud Maret for making us aware of [1, Lemma 3.2] and its implications. This idea was key in the beginning of this work. We also thank him for his feedback. Together with Julien Marche and Maxime Wolff, they have a similar project on representations in Hilbert modular groups. The second author also thanks Darren Long for many helpful discussions.
## 2. Twist flows on the Hitchin component
Our proof of Theorem 1.1 utilizes a particular deformation of representations which we will use to construct rational approximations to an arbitrary Hitchin representation. The purpose of this section is to describe the nature of these deformations. Our treatment follows the one given in [1], where they are interpreted as generalizations of the twist flow of a hyperbolic surface. Our interest in these deformations is in the arithmetic control at the level of representations they provide. This is similar to the perspective taken in [10], where these same deformations are used to perform deformations with substantial arithmetic control as well.
Let \(\operatorname{Tr}:\operatorname{SL}(n,\mathbf{R})\to\mathbf{R}\) be the trace and for any \(\gamma\in\pi_{1}(S)\), let \(\operatorname{Tr}_{\gamma}:\mathcal{H}_{n}(S)\to\mathbf{R}\) denote the function
\[\operatorname{Tr}_{\gamma}([\rho])=\operatorname{Tr}(\rho(\gamma)).\]
Define \(F:\operatorname{SL}(n,\mathbf{R})\to\mathfrak{sl}(n,\mathbf{R})\) as
\[F(A)=A-\frac{\operatorname{Tr}(A)}{n}I_{n}.\]
For any non trivial \(\gamma\in\pi_{1}(S)\) which is freely homotopic in \(S\) to a nonseparating simple closed curve, let \(S\backslash\gamma\) denote compact the surface one gets by deleting a regular open neighborhood of \(\gamma\) from \(S\). Recall that \(\pi_{1}(S)\) is an HNN-extension of \(\pi_{1}(S\backslash\gamma)\). Define a flow on \(\operatorname{Hom}(\pi_{1}(S),\operatorname{SL}(n,\mathbf{R}))\) by setting
\[\Xi_{\gamma}^{t}(\rho)(\alpha)=\begin{cases}\rho(\alpha)&\text{if }\alpha\in\pi_{1} (S\backslash\gamma)\\ \exp(tF(\rho(\gamma)))\rho(\alpha)&\text{if }i(\alpha,\gamma)=+1\end{cases}\]
where \(i\) denotes the algebraic intersection number. This does define a new representation of \(\pi_{1}(S)\) because \(\exp(tF(\rho(\gamma)))\) centralizes \(\rho(\gamma)\).
**Definition 2.1**.: The resulting flow, \(\Xi_{\gamma}^{t}\), on \(\operatorname{Hom}(\pi_{1}(S),\operatorname{SL}(n,\mathbf{R}))\) is called the **generalized twist flow** about \(\gamma\).
The main result of this section is the following, which states that two Hitchin representations may be connected via a path which is a piecewise concatenation of twist flows of the above form.
**Lemma 2.1**.: _Let \(\rho_{1}\) and \(\rho_{2}\) be Hitchin representations. Then there exist nonseparating simple closed curves \(\gamma_{1},\ldots,\gamma_{k}\) and real numbers \(t_{1},\ldots,t_{k}\) so that the representation_
\[\rho_{2}^{\prime}:=\Xi_{\gamma_{k}}^{t_{k}}(\ldots(\Xi_{\gamma_{1}}^{t_{1}}( \rho_{1}))\ldots)\]
_is conjugate to \(\rho_{2}\). In other words, \([\rho_{2}^{\prime}]=[\rho_{2}]\) on \(\mathcal{H}_{n}(S)\)._
_Remark_.: Analogs of this lemma in the context of other Lie groups have been known before (e.g. [1, Lemma 3.2] proves this for \(\operatorname{SU}(2)\)). To our knowledge, a proof in the context of the \(\operatorname{SL}(n,\mathbf{R})\)-Hitchin component has never been recorded, so we include one here.
To establish this result, we first exploit a connection between the flows \(\Xi_{\gamma}^{t}\) and the underlying geometry of the Hitchin component. In [11], Goldman defines a symplectic form on the the character variety of a surface group that gives \(\mathcal{H}_{n}(S)\) the structure of a connected symplectic manifold. Denote by \(\xi_{\gamma}^{t}\) the flow associated to the Hamiltonian vector field of the function \(\operatorname{Tr}_{\gamma}\). This flow is related to our earlier flow \(\Xi_{\gamma}^{t}\) via the following result.
**Theorem 2.2** ([11, Theorem 4.7]).: _The flow \(\Xi_{\gamma}^{t}\) on \(\widetilde{\mathcal{H}}_{n}(S)\) covers the flow \(\xi_{\gamma}^{t}\) on \(\mathcal{H}_{n}(S)\)._
Thus, to establish the result of Lemma 2.1, we analyze the action of the flows \(\xi_{\gamma}^{t}\) on \(\mathcal{H}_{n}(S)\). We do so via an application of the following theorem of Bridgeman, Canary and Labourie, which is regarded as a sort of "infinitesimal marked trace rigidity" for Hitchin representations.
**Theorem 2.3** ([1, Proposition 10.1]).: _For any \([\rho]\in\mathcal{H}_{n}(S)\) the collection of differentials_
\[\{(d\operatorname{Tr}_{\gamma})_{[\rho]}\,:\,\gamma\text{ is a nonseparating simple closed curve}\}\]
_generates the cotangent space to \(\mathcal{H}_{n}(S)\) at \([\rho]\)._
It is worth noting that the use of this theorem in proving Lemma 2.1 is the only step in our proof of Theorem 1.1 where we need the fact that the genus of \(S\) is at least \(3\). More discussion on what is missing in the genus \(2\) case is done in Section 4.
Proof of Lemma 2.1.: Let \(\mathfrak{G}\) denote the group generated by the flows \(\xi_{\gamma}^{t}\) for all nonseparating simple closed curves \(\gamma\) and all \(t\). By Theorem 2.3, this group acts transitively on \(\mathcal{H}_{n}(S)\) (cf. [1, Lemma 3.2]). Thus, given \([\rho_{1}],[\rho_{2}]\in\mathcal{H}_{n}(S)\), there exist nonseparating simple closed curves, \(\gamma_{1},\ldots,\gamma_{k}\), and \(t_{1},\ldots,t_{k}\in\mathbf{R}\) so that
\[[\rho_{2}]=\xi_{\gamma_{k}}^{t_{k}}(\ldots(\xi_{\gamma_{1}}^{t_{1}}([\rho_{1} ]))\ldots).\]
Lemma 2.1 then follows from the above equality viewed at the level of representations and Theorem 2.2.
## 3. Proof of Theorem 1.1
We now explain the steps to establishing our main result. The results of the previous section allow one to build approximations to Hitchin representations via twist flows which are essentially controlled by choices of matrices in certain centralizers. One may build rational approximations to these flows through an application of the following theorem to these matrix centralizes.
**Theorem 3.1** ([14, Theorem 7.7]).: _For \(G\) a connected algebraic group defined over \(\mathbf{Q}\), \(G(\mathbf{Q})\) is dense in \(G(\mathbf{R})\) endowed with the Euclidean topology._
For \(A\in\operatorname{SL}(n,\mathbf{Q})\), let \(Z_{A}\) denote the centralizer of \(A\) in the algebraic group \(\operatorname{SL}_{n}\). It is an algebraic group defined over \(\mathbf{Q}\) and for any subfield \(k\) of \(\mathbf{C}\), \(Z_{A}(k)\) is the set of matrices in \(\operatorname{SL}(n,k)\) that commutes with \(A\). If \(A\in\operatorname{SL}(n,\mathbf{Q})\) has distinct eigenvalues then \(Z_{A}\) is a connected algebraic group. Indeed, we have \(Z_{A}(\mathbf{C})\cong(\mathbf{C}^{*})^{n-1}\).
**Corollary 3.1.1**.: _If \(A\in\operatorname{SL}(n,\mathbf{Q})\) has distinct eigenvalues then \(Z_{A}(\mathbf{Q})\) is dense in \(Z_{A}(\mathbf{R})\)._
This corollary will allow us to approximate the matrices controlling the twist deformations by rational ones, which we can then leverage for extra control on the arithmetic of the individual representations. Next, we show that \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) is nonempty.
**Lemma 3.2**.: _For every \(n\geq 2\), there exists a Hitchin representation with image in \(\operatorname{SL}(n,\mathbf{Q})\)._
Proof.: Observe that there exist representations inside the Teichmuller space of \(S\) with image contained in \(\operatorname{SL}(2,\mathbf{Q})\). There are a number of constructions of such examples, such as ones due to Vinberg in [15], or from Takeuchi's result in [13]. One might also consider the following example originally due to Long and Reid, based on work of Magnus in [16]:
\[\rho_{0}(a)=\begin{pmatrix}3&\frac{2}{3}\\ 0&\frac{1}{3}\end{pmatrix}\quad\text{and}\quad\rho_{0}(b)=\begin{pmatrix}0&-2 \\ \frac{1}{2}&\frac{83}{8}\end{pmatrix}.\]
This is a discrete and faithful representation of the group \(\Gamma=\langle a,b\,|\,[a,b]^{2}\rangle\) into \(\operatorname{SL}(2,\mathbf{Q})\). \(\Gamma\) contains an index \(4\) subgroup isomorphic to the fundamental group of a genus \(2\) surface, hence contains finite-index surface subgroups of every genus. The restriction of \(\rho_{0}\) to one such subgroup isomorphic to \(\pi_{1}(S)\) gives a representation in the Teichmuller space of \(S\) with image in \(\operatorname{SL}(2,\mathbf{Q})\). Using the irreducible embedding of \(\operatorname{SL}(2,\mathbf{Q})\) into \(\operatorname{SL}(n,\mathbf{Q})\), the conclusion holds for every \(n\).
We can now prove the main result.
Proof of Theorem 1.1.: Fix a representation \(\rho_{0}:\pi_{1}(S)\to\operatorname{SL}(n,\mathbf{Q})\) coming from Lemma 3.2 and for each \(k\geq 1\), set
\[\mathcal{H}_{n}^{k}(S)=\left\{[\rho]\in\mathcal{H}_{n}(S)\,:\begin{array}{l} \text{there exist simple, nonseparating }\gamma_{1},\dots,\gamma_{k}\in\pi_{1}(S)\\ \text{and }t_{1},\dots,t_{k}\in\mathbf{R}\text{ so that }[\rho]=\xi_{\gamma_{k}}^{t_{k}}( \dots(\xi_{\gamma_{1}}^{t_{1}}([\rho_{0}]))\dots)\end{array}\right\}.\]
By Lemma 2.1, \(\mathcal{H}_{n}(S)=\bigcup_{k\geq 1}\mathcal{H}_{n}^{k}(S)\), thus to show the result, it suffices to show that the closure of \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) contains \(\mathcal{H}_{n}^{k}(S)\) for all \(k\).
For any \([\rho]\in\mathcal{H}^{1}_{n}(S)\), \(\rho\) is conjugate to \(\Xi^{t_{1}}_{\gamma_{1}}(\rho_{0})\) for some nonseparating \(\gamma_{1}\) and \(t_{1}\in\mathbf{R}\). By Corollary 3.1.1, we may take a sequence \(\{B_{j}\}_{j}\) of elements of \(Z_{\rho_{0}(\gamma_{1})}(\mathbf{Q})\) converging to \(\exp(t_{1}F(\rho_{0}(\gamma_{1})))\). Since \(\pi_{1}(S)\) is an HNN-extension of \(\pi_{1}(S\backslash\gamma_{1})\), we may define a sequence of representations \(\{\rho_{j}:\pi_{1}(S)\to\operatorname{SL}(n,\mathbf{Q})\}_{j}\) by
\[\rho_{j}(\alpha)=\begin{cases}\rho_{0}(\alpha)&\text{if }\alpha\in\pi_{1}(S \backslash\gamma_{1})\\ B_{j}\rho_{0}(\alpha)&\text{if }i(\alpha,\gamma_{k})=+1.\end{cases}\]
The sequence \(\{\rho_{j}\}_{j}\) converges to \(\Xi^{t_{1}}_{\gamma_{1}}(\rho_{0})\), hence the closure of \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) contains \(\mathcal{H}^{1}_{n}(S)\).
Now, suppose that the closure of \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) contains \(\mathcal{H}^{k-1}_{n}(S)\) for some \(k\geq 2\) and take \([\rho]\in\mathcal{H}^{k}_{n}(S)\). Then there exists a nonseparating \(\gamma_{k}\) and \(t_{k}\in\mathbf{R}\) so that \(\rho\) is conjugate to \(\Xi^{t_{k}}_{\gamma_{k}}(\sigma)\) for some \([\sigma]\in\mathcal{H}^{k-1}_{n}(S)\). Since \([\sigma]\in\mathcal{H}^{k-1}_{n}(S)\), we may let \(\{\sigma_{i}:\pi_{1}(S)\to\operatorname{SL}(n,\mathbf{Q})\}_{i}\) be a sequence of Hitchin representations with \([\sigma_{i}]\to[\sigma]\). By applying a conjugation by elements of \(\operatorname{SL}(n,\mathbf{Q})\), we may further assume that \(\sigma_{i}\to\sigma\).
For each \(i\), let \(\{B_{i,j}\}_{j}\) be a sequence in \(Z_{\sigma_{i}(\gamma_{k})}(\mathbf{Q})\) converging to \(\exp(t_{k}F(\sigma_{i}(\gamma_{k})))\) in \(Z_{\sigma_{i}(\gamma_{k})}(\mathbf{R})\), as given by Corollary 3.1.1. Fix a distance \(d\) on \(\operatorname{SL}(n,\mathbf{R})\) inducing its usual topology and for each \(m\geq 1\), let \(\phi(m)\) denote the smallest \(i\) such that
\[d(\exp(t_{k}F(\sigma_{i}(\gamma_{k}))),\exp(t_{k}F(\sigma(\gamma_{k}))))<\frac {1}{m}.\]
Similarly, define \(\psi(m)\) as the smallest \(j\) such that
\[d(B_{\phi(m),j},\exp(t_{k}F(\sigma_{\phi(m)}(\gamma_{k}))))<\frac{1}{m}.\]
By construction of \(\phi\) and \(\psi\), \(B_{\phi(m),\psi(m)}\) converges to \(\exp(t_{k}F(\sigma(\gamma_{k})))\) as \(m\to\infty\). We then define a new sequence of representations \(\{\rho_{m}:\pi_{1}(S)\to\operatorname{SL}(n,\mathbf{R})\}_{m}\) by noting that \(\pi_{1}(S)\) is an HNN-extension of \(\pi_{1}(S\backslash\gamma_{k})\) and setting
\[\rho_{m}(\alpha)=\begin{cases}\sigma_{\phi(m)}(\alpha)&\text{if }\alpha\in\pi_{1}(S \backslash\gamma_{k})\\ B_{\phi(m),\psi(m)}\sigma_{\phi(m)}(\alpha)&\text{if }i(\alpha,\gamma_{k})=+1. \end{cases}\]
By construction, \(\rho_{m}(\pi_{1}(S))\leqslant\operatorname{SL}(n,\mathbf{Q})\) for all \(m\). As \(\sigma_{\phi(m)}\to\sigma\) and \(B_{\phi(m),\psi(m)}\to\exp(t_{k}F(\sigma(\gamma_{k})))\), we see that \(\rho_{m}\to\Xi^{t_{k}}_{\gamma_{k}}(\sigma)\) as \(m\to\infty\). In particular, \(\{[\rho_{m}]\}_{m}\) is a sequence in \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) converging to \([\rho]\), so that the closure of \(\mathcal{H}_{n}(S)_{\mathbf{Q}}\) contains \(\mathcal{H}^{k}_{n}(S)\).
## 4. Generalizations of Theorem 1.1
We now discuss possible generalizations of Theorem 1.1.
### Changing the target group
Let \(G=\operatorname{Sp}(2n,\mathbf{R})\), \(\operatorname{G}_{2}\) or \(\operatorname{SO}(n,n+1)\). The proof can be adapted to the Hitchin component of \(G\), which lies in \(\mathcal{H}_{n}(S)\). We can restrict the trace functions to the Hitchin component of \(G\). Their differentials will be restrictions of the corresponding functions on \(\mathcal{H}_{n}(S)\). Hence Theorem 2.3 implies that they generate the cotangent space of the Hitchin component of \(G\). In this setting, Theorem 3.1 still applies to the \(G\)-centralizer, the only difference is in the use of Lemma 3.2 establishing the existence of a Hitchin representation with \(\mathbf{Q}\)-coefficients. Note that there exists Hitchin representations in \(\operatorname{Sp}(2n,\mathbf{Q})\) and \(\operatorname{G}_{2}(\mathbf{Q})\) because there exists Hitchin representations in
\(\operatorname{SL}(2,\mathbf{Q})\). For the group \(\operatorname{SO}(n,n+1)\), one needs to be precise on the bilinear form used to define the group.2
Footnote 2: Let \(q\) be a quadratic form over \(\mathbf{Q}\) of signature \((n,n+1)\). Provided that there exists Hitchin representations in \(\operatorname{SO}(q,\mathbf{Q})\), such representations are dense in the Hitchin component of \(\operatorname{SO}(n,n+1)\). See [1, Proposition 3.4] for a construction of Hitchin representations in some groups of the form \(\operatorname{SO}(q,\mathbf{Q})\).
**Theorem 4.1**.: _When the genus of \(S\) is at least \(3\), the set of equivalence classes of Hitchin representations with image in \(\operatorname{Sp}(2n,\mathbf{Q})\) is dense in the Hitchin component of \(\operatorname{Sp}(2n,\mathbf{R})\)._
_When the genus of \(S\) is at least \(3\), the set of equivalence classes of Hitchin representations with image in \(\operatorname{G}_{2}(\mathbf{Q})\) is dense in the Hitchin component of \(\operatorname{G}_{2}(\mathbf{R})\)._
Theorem 1.1 can also be extended as it is to other \(\mathbf{Q}\)-forms of \(\operatorname{SL}(n,\mathbf{R})\) (or to \(G\) as above) provided it contains the image of a Hitchin representations. This is known for some \(\mathbf{Q}\)-forms, see [1].
### Non-Hitchin components
These methods also indicate possible applications to other components of the character variety. Let \(G\) be a reductive algebraic group defined over \(\mathbf{Q}\), and \(\operatorname{Hom}(\pi_{1}(S),G(\mathbf{R}))\mathbin{/\!\!/}G(\mathbf{R})\) the \(G(\mathbf{R})\)-character variety of \(S\). In general, this algebraic variety is highly singular, but there is a Zariski-open subset \(\Omega\subset\operatorname{Hom}(\pi_{1}(S),G(\mathbf{R}))\) so that \(\Omega/G(\mathbf{R})\) is a smooth manifold. Goldman's symplectic form in [10] is still defined on \(\Omega/G(\mathbf{R})\) and turns it into a symplectic manifold.
Let \(X\subset\Omega/G(\mathbf{R})\) be a connected component and denote by \(X_{\mathbf{Q}}\) the set of representations in \(X\) conjugate to a subgroup of \(G(\mathbf{Q})\). Then there are two main missing ingredients to establishing that \(X_{\mathbf{Q}}\) is dense in \(X\). The first is an existence condition. One needs to understand when \(X_{\mathbf{Q}}\) is nonempty, which may be a nontrivial task. More general constructions than the ones given in Lemma 3.2 for \(\mathcal{H}_{n}(S)\) are needed.
The second missing condition is an infinitesimal one. To any conjugation invariant \(f:G(\mathbf{R})\to\mathbf{R}\) and \(\gamma\in\pi_{1}(S)\), one can form the function \(f_{\gamma}:X\to\mathbf{R}\) by taking \(f_{\gamma}([\rho])=f(\rho(\gamma))\). If \(\mathcal{S}\subset\pi_{1}(S)\) denotes the collection of elements corresponding to simple closed curves on \(S\), then one needs to ask the following.
**Question**.: _Is there a collection of conjugation invariant functions \(\mathcal{F}=\{f:G(\mathbf{R})\to\mathbf{R}\}\) so that the differentials_
\[\{df_{\gamma}\,:\,f\in\mathcal{F},\gamma\in\mathcal{S}\}\]
_span the cotangent space \(T^{*}_{[\rho]}X\) at every \([\rho]\in X\)?_
Theorem 2.3 shows that when \(X=\mathcal{H}_{n}(S)\) and the genus of \(S\) is \(3\) or more, then \(\mathcal{F}=\{\operatorname{Tr}\}\) suffices, but the proof of this result in [1] relies on a certain configuration of simple closed curves which can only exist when the genus of \(S\) is at least \(3\). We still expect that the result is true in genus \(2\), noting that one is allowed to consider more general classes of functions than just the trace. Nonetheless, an affirmative answer to this question for \(X\) implies that the Hamiltonian flows associated to the functions \(f_{\gamma}\) will still act transitively on \(X\). These flows still admit descriptions in terms of generalized twist flows at the level of representations, which one can approximate using Theorem 3.1. Thus the existence of a single element in \(X_{\mathbf{Q}}\) would imply its density in \(X\). |
2304.08951 | Anisotropic linear and non-linear excitonic optical properties of
buckled monolayer semiconductors | The optical properties of two-dimensional materials are exceptional in
several respects. They are highly anisotropic and frequently dominated by
excitonic effects. Dipole-allowed second order non-linear optical properties
require broken inversion symmetry. Hence, several two-dimensional materials
show strong in-plane (IP) non-linearity but negligible out-of-plane (OOP)
response due to vertical symmetry. By considering buckled hexagonal monolayers,
we analyze the critical role of broken vertical symmetry on their excitonic
optical response. Both linear as well as second order shift current and second
harmonic response are studied. We demonstrate that substantial OOP non-linear
response can be obtained, in particular, through off-diagonal tensor elements
coupling IP excitation to OOP response. Our findings are explained by excitonic
selection rules for OOP response and the impact of dielectric screening on
excitons is elucidated. | M. F. C. Martins Quintela, T. Garm Pedersen | 2023-04-18T12:42:44Z | http://arxiv.org/abs/2304.08951v1 | # Anisotropic linear and non-linear excitonic optical properties of buckled monolayer semiconductors
###### Abstract
The optical properties of two-dimensional materials are exceptional in several respects. They are highly anisotropic and frequently dominated by excitonic effects. Dipole-allowed second order non-linear optical properties require broken inversion symmetry. Hence, several two-dimensional materials show strong in-plane (IP) non-linearity but negligible out-of-plane (OOP) response due to vertical symmetry. By considering buckled hexagonal monolayers, we analyze the critical role of broken vertical symmetry on their excitonic optical response. Both linear as well as second order shift current and second harmonic response are studied. We demonstrate that substantial OOP non-linear response can be obtained, in particular, through off-diagonal tensor elements coupling IP excitation to OOP response. Our findings are explained by excitonic selection rules for OOP response and the impact of dielectric screening on excitons is elucidated.
## I Introduction
The recent interest in layered materials with broken vertical symmetry, such as Janus materials[1; 2; 3], buckled monolayers[4; 5; 6; 7], as well as heterobilayers and biased homobilayers[8; 9; 10; 11] makes the discussion on the effects of broken vertical symmetry on the optical response especially relevant[12; 13; 14; 15; 16; 17; 18]. The amplitude of both linear and non-linear OOP conductivities is expected to be greatly dependent on the asymmetry of the layer, with the even-order non-linear OOP response being identically zero (within the dipole approximation) when the OOP symmetry is not broken. Hence, the broken OOP symmetry is crucial when one wishes to consider potential applications beyond those allowed by symmetric structures. The OOP non-linear response in Janus monolayers has also been experimentally studied[19; 20], namely for both second- and third-harmonic. This study was performed via polarization-resolved spectroscopy, with the aim of mapping the full second-order susceptibility tensor[21; 22; 23] of MoSSe. These OOP non-linearities then lead to additional degrees of freedom in vertical photonics structures[24; 25], allowing for novel approaches in the design of ultrafast optical devices[26], such as miniaturized logic gates[27; 28], non-linear holograms[29], broadband ultrafast frequency converters[30; 31], among others.
The simplest family of materials with broken OOP symmetry is that of buckled mono
Figure 1: Schematic illustration of a buckled honeycomb lattice, highlighting the lattice constant \(a\) and the buckling \(h\).
layer structures, with theoretical predictions of both mono-elemental and binary graphene-like materials[4; 6; 7], and several buckled hexagonal sheets (see Fig. 1) have already been fabricated. Among these materials we mention specifically the mono-elemental silicene[4; 32], blue phosphorene[33], arsenene[34; 35], antimonene[36; 34], and bismuthene[37], as well as the binary CS, SiO, GeSe, SnTe, InSb, and GaAs[4; 38]. The mono-elemental structures preserve inversion symmetry even in the presence of buckling and, hence, possess negligible second-order non-linearities. The bandgaps of these materials can be both mechanically[39; 40] or electrically[40; 41] tuned, and they allow for potential applications is various fields, such as optoelectronics, spin-electronics, sensors and thermo-electrics[42; 43; 44; 45].
The aim of the present work is to understand the effects of both IP and OOP asymmetry on the excitonic optical response of honeycomb lattice structures. To this end, we consider a simple two-band model of gapped graphene near the so-called Dirac valleys[46; 47; 48] and then apply a small buckling to break OOP symmetry. To study IP even-order non-linear optical properties[49; 50; 51], such as second-harmonic generation (SHG)[51; 52] or shift-current (SC)[53; 54; 55], we include a quadratic (in \(k\)) contribution to the nearest neighbour hopping function[56], namely trigonal warping[57], plus distinct on-site potentials for the two sublattices. Including trigonal warping allows us to compute the IP even-order response, which then serves as a comparison against the OOP response.
This paper is organized as follows. In Section II, we will consider the single-particle Hamiltonian for gapped graphene while introducing trigonal warping before computing explicit matrix elements of the momentum and Berry connection. In Section III, we discuss the Bethe-Salpeter equation for the computation of the excitonic states. We also outline some of the approximations necessary for an efficient numerical solution of this equation. In Section IV, we briefly outline the general form of both the excitonic linear and non-linear optical response to linearly polarized light, discussing the momentum matrix elements between excitonic states. Finally, in Section V, we analyze the IP and OOP optical selection rules of a buckled graphene lattice structure (Fig. 1) which, in turn, leads to a non-zero OOP excitonic response in the monolayer. The non-linear response will be very sensitive to the scale of this buckling, quickly vanishing as the buckling decreases. We also compare both diagonal (\(\sigma_{zzz}\)) and non-diagonal (\(\sigma_{zxx/xzx}\)) components of the second order excitonic conductivity tensor against their IP counterparts, discussing both their relative magnitudes and the location of the excitonic resonances.
## II Single particle gapped graphene Hamiltonian
Throughout this paper, we will work with a two-band model of gapped graphene near the Dirac points \(K/K^{\prime}\), with the \(x\)-axis aligned with the unit cell of the honeycomb lattice and the \(z\)-axis perpendicular to the monolayer plane. The basis states are \(p_{z}\) orbitals on the two sublattices and the model Hamiltonian before Dirac point expansion then reads
\[\mathcal{H}\left(\mathbf{k}\right)=\left[\begin{array}{cc}\Delta&-\gamma f^ {*}\left(\mathbf{k}\right)\\ -\gamma f\left(\mathbf{k}\right)&-\Delta\end{array}\right], \tag{1}\]
where \(\pm\Delta\) is the staggered on-site energy and \(\gamma\) the effective hopping. While for planar gapped graphene the \(\pi\) and \(\sigma\) orbitals are decoupled, the vertical shift of the two sublattices in a buckled system means that the \(p_{z}\) orbitals are no longer in the same plane. Hence, the effective hopping will change as[58; 59]
\[-\gamma=V_{pp\pi}+\frac{1}{1+\frac{a^{2}}{12h^{2}}}\left(V_{pp\sigma}-V_{pp \pi}\right), \tag{2}\]
where \(a\) is the lattice parameter, \(h\) is the buckling parameter, and \(V_{pp\pi}\) and \(V_{pp\sigma}\) are the hopping integrals for \(\pi\) and \(\sigma\) orbitals, respectively. Additionally, we will ignore \(\pi-\sigma\) hybridization when computing the OOP response as we consider the OOP buckling to be much smaller than
the lattice constant. The wave-vector dependent function \(f\) is obtained from the honeycomb lattice geometry as
\[f\left(\mathbf{k}\right)=e^{i\frac{k_{x}a}{\sqrt{3}}}+2e^{-i\frac{k_{x}a}{2\sqrt {3}}}\cos\left(\frac{k_{y}a}{2}\right).\]
Expanding \(f\left(\mathbf{k}\right)\) near the Dirac points \(K/K^{\prime}\) up to linear order, we obtain the massive Dirac Hamiltonian that is usually employed to study gapped graphene and hBN systems. Considering now an expansion up to quadratic order in \(k\), we obtain [57]
\[f\left(\mathbf{k}\right)\approx\frac{\sqrt{3}a}{2}\left[\left(k_{x}+i\tau k_{y }\right)+i\zeta_{\mathrm{TW}}a\left(k_{x}-i\tau k_{y}\right)^{2}\right], \tag{3}\]
where \(\tau=\pm 1\) is the valley index and \(\zeta_{\mathrm{TW}}=\frac{\sqrt{3}}{12}\) is the trigonal warping strength. Although this trigonal warping strength is a fixed numerical factor, it is useful to keep it as a variable to enable systematic expansions in orders of \(\zeta_{\mathrm{TW}}\).
### Diagonalization
Diagonalizing the Hamiltonian Eq. (1), we obtain the band structure as \(\pm E\) with
\[E=\sqrt{\Delta^{2}+\gamma^{2}\left|f\left(\mathbf{k}\right)\right|^{2}}. \tag{4}\]
As we are interested in linear contributions from trigonal warping, we approximate \(E\) up to first order in \(\zeta_{\mathrm{TW}}\) as
\[E\approx\varepsilon+\tau\frac{\xi}{\varepsilon}\zeta_{\mathrm{TW}}, \tag{5}\]
where
\[\varepsilon =\sqrt{\Delta^{2}+\hbar^{2}v_{F}^{2}k^{2}},\] \[\xi =a\hbar^{2}v_{F}^{2}k^{3}\sin\left(3\theta\right), \tag{6}\]
and the Fermi velocity is defined \(v_{F}=\frac{1}{\hbar}\frac{\sqrt{3}a\gamma}{2}\). We then write the normalized eigenvectors as
\[\left|v_{\mathbf{k}}\right\rangle =\sqrt{\frac{E+\Delta}{2E}}\left[\begin{array}{c}e^{-i\tau \theta}(E-\Delta)\\ \hline\hbar v_{F}k(1+iak\zeta_{\mathrm{TW}}e^{-3i\theta\tau})\\ 1\end{array}\right], \tag{7}\] \[\left|c_{\mathbf{k}}\right\rangle =\sqrt{\frac{E-\Delta}{2E}}\left[\begin{array}{c}-E-\Delta\\ \hline\hbar v_{F}k(1+iak\zeta_{\mathrm{TW}}e^{-3i\theta\tau})\\ e^{i\tau\theta}\end{array}\right], \tag{8}\]
where \(v/c\) correspond to the valence and conduction band, respectively. From Eqs. (7-8) it is clear which components go to zero as \(k\to 0\), as \(E\approx\Delta+\mathcal{O}\left(k^{2}\right)\) for small \(k\), whereas the denominators of the fraction in square brackets are \(\mathcal{O}(k)\).
The presence of the phase terms in spinor components that go to zero as \(k\to 0\) in \(\left|v_{\mathbf{k}}\right\rangle\) and \(\left|c_{\mathbf{k}}\right\rangle\) will lead to a pseudo-spin angular quantum number \(m_{s}=0\)[60; 61; 62; 63; 64; 65]. This pseudo-spin angular quantum number is governed by the phase choice and allows a direct association with the usual Hydrogen-like states.
### Momentum Matrix Element and Berry Connection
The IP interband momentum matrix element in the \(i\)-direction is defined as
\[p_{v\mathbf{ck}}^{i}=\left\langle v_{\mathbf{k}}\middle|\frac{m}{\hbar}\frac{ \partial\mathcal{H}\left(\mathbf{k}\right)}{\partial k_{i}}\middle|c_{\mathbf{ k}}\right\rangle. \tag{9}\]
When considering IP properties, we will focus our attention solely on the \(x\)-direction as the inversion symmetry of the lattice along the \(y\)-direction means that the \(yyy\)-component of the non-linear conductivity tensor will be trivially zero after summing over valley index. Alongside with the momentum matrix elements, we will also require Berry connections, defined as
\[\Omega_{nm\mathbf{k}}^{\alpha}=i\left\langle n_{\mathbf{k}}\middle|\frac{ \partial}{\partial k_{\alpha}}\middle|m_{\mathbf{k}}\right\rangle \tag{10}\]
as their explicit expression will play an important part in computing generalized derivatives[66].
To obtain the non-linear conductivity tensor, we will consider incident fields with frequency \(\omega_{p}\) and \(\omega_{q}\). The indices for the current vector \(\mathbf{J}^{(2)}\left(\omega_{pq}\right)\) will contract as[51]
\[J_{i}^{(2)}(\omega_{pq})=\sum_{j,k}\sigma_{ijk}^{(2)}(\omega_{pq};\omega_{p}, \omega_{q})E_{j}(\omega_{p})E_{k}(\omega_{q}) \tag{11}\]
with \(\mathbf{E}\left(\omega\right)\) the external optical field and the frequency \(\omega_{pq}=\omega_{p}+\omega_{q}\). A simple symmetry
analysis[19] tells us that the relevant components of the non-linear conductivity tensor will be
\[\sigma^{(2)}=\left[\begin{array}{cccccc}\sigma_{xxx}^{(2)}&-\sigma_{xxx}^{(2)}& 0&0&\sigma_{xxx}^{(2)}&0\\ 0&0&0&\sigma_{xx}^{(2)}&0&-\sigma_{xxx}^{(2)}\\ \sigma_{xxx}^{(2)}&\sigma_{xxx}^{(2)}&\sigma_{zzz}^{(2)}&0&0&0\end{array} \right].\]
Compared to [57], we apply a simple relabeling of the two valleys \(\left(\tau\rightarrow-\tau\right)\) and the gauge change \(\left|c_{\mathbf{k}}\right\rangle\to e^{i\tau\theta}\left|c_{ \mathbf{k}}\right\rangle\). While this gauge change leads to a global phase in the momentum matrix elements, it is important to note that the \(k_{x}\)-derivative in the definition of the Berry connection will lead to a more complex transformation. Nonetheless, this is just a gauge choice and, therefore, both the free-carrier and the excitonic conductivity will be independent of this choice.
### Free-Carrier Conductivity
When discussing both linear and non-linear excitonic conductivities, we will include results for very large dielectric constants. In this limit, the excitonic response agrees with the free-carrier expression, obtained by computing the electronic conductivity in the free-carrier (single particle) regime.
The generic expression for the free-carrier linear electronic conductivity in a clean two-band semiconductor at \(T=0\) is given by [66; 67; 68; 69; 70]
\[\sigma_{\alpha\beta}\left(\omega\right) =\frac{e^{2}\hbar}{i\pi^{2}m^{2}}\left[\int\frac{p_{vck}^{\alpha} p_{cv\mathbf{k}}^{\beta}}{E_{cv\mathbf{k}}\left(E_{cv\mathbf{k}}-\hbar\omega \right)}d^{2}\mathbf{k}\right.\] \[\left.-\left(\omega\rightarrow-\omega\right)^{*}\right], \tag{12}\]
where \(E_{cv\mathbf{k}}=2E\) and the integration runs over the Brillouin zone. Analogously, the generic intraband non-linear electronic conductivity in a clean two-band semiconductor can be written as [66; 67; 68; 69; 70]
\[\sigma_{\alpha\beta\lambda}^{\left(\text{intra}\right)}\left(\omega_{p}, \omega_{q}\right)=\frac{e^{3}\hbar^{2}\left(\omega_{p}+\omega_{pq}\right)}{2 \pi^{2}m^{2}}\int\frac{p_{vck}^{\alpha}\left[p_{cv\mathbf{k}}^{\beta}\right]_ {;k_{\lambda}}}{\left(E_{cv\mathbf{k}}^{2}-\hbar^{2}\omega_{p}^{2}\right) \left(E_{cv\mathbf{k}}^{2}-\hbar^{2}\omega_{pq}^{2}\right)}d^{2}\mathbf{k}+(p \leftrightarrow q), \tag{13}\]
where \(\left[p_{cv\mathbf{k}}^{\beta}\right]_{;k_{\lambda}}\) is the generalized derivative[66] in the \(\lambda\)-direction of the momentum matrix element for the \(\beta\)-direction, defined as
\[\left[p_{cv\mathbf{k}}^{\beta}\right]_{;k_{\lambda}}=\frac{\partial p_{cv \mathbf{k}}^{\beta}}{\partial k_{\lambda}}-i\left(\Omega_{cc\mathbf{k}}^{ \lambda}-\Omega_{vv\mathbf{k}}^{\lambda}\right)p_{cv\mathbf{k}}^{\beta}. \tag{14}\]
When considering \(\lambda=z\), the \(k_{z}\)-derivative term in Eq. (14) is discarded as there is no dependence on \(k_{z}\) in the momentum matrix elements. The specific details for the calculation of both \(p_{v\mathbf{k}}^{z}\) and \(\Omega_{nm\mathbf{k}}^{z}\) will be discussed in Section V.
While the integrals of Eqs. (12-13) are over the entire Brillouin zone, performing the expansion around the Dirac points means that the integration is now over the infinite Dirac cone and that a sum over valleys must be made. Due to the smallness of \(\zeta_{\text{TW}}\), we are interested contributions up to \(\mathcal{O}(\zeta_{\text{TW}})\). The \(\zeta_{\text{TW}}\) factor must come from either \(E_{cv\mathbf{k}}\) or \(p_{v\mathbf{c}\mathbf{k}}^{\alpha}\left[p_{cv\mathbf{k}}^{\beta}\right]_{;k_{ \lambda}}\). Time reversal symmetry means that \(E_{cv\mathbf{k}}=E_{cv-\mathbf{k}}\). Any term containing \(p_{cv\mathbf{k}}^{\alpha}\left[p_{cv\mathbf{k}}^{\beta}\right]_{;k_{\lambda}}\) to zeroth order in \(\zeta_{\text{TW}}\) will vanish upon integration and summation over valley. This allows us to set \(E_{cv\mathbf{k}}=2\varepsilon\) while retaining \(\mathcal{O}\left(\zeta_{\text{TW}}\right)\) contribution
to \(p_{vc\mathbf{k}}^{\alpha}\left[p_{cv\mathbf{k}}^{\beta}\right]_{;k_{\lambda}}\) throughout this paper when computing the various transition amplitudes. These integrals can be computed analytically in our first-order approximation in \(\zeta_{\mathrm{TW}}\), with the exact expressions present in Appendix A for the various processes considered.
## III Bethe-Salpeter equation
Before discussing the excitonic conductivity, we must first compute the excitonic states for each \(\tau\) valley. To compute the excitonic wave functions and their binding energies, we will solve the Bethe-Salpeter equation[70, 71, 64, 72], given in momentum space by
\[E_{n}\psi_{cv\mathbf{k}}^{(n)}=E_{cv\mathbf{k}}\psi_{cv\mathbf{ k}}^{(n)}+\] \[+\sum_{\mathbf{q}}V\left(\left|\mathbf{k}-\mathbf{q}\right| \right)\left\langle c_{\mathbf{k}}\right|\!\left|c_{\mathbf{q}}\right\rangle \left\langle v_{\mathbf{q}}\right|\!v_{\mathbf{k}}\right\rangle\psi_{cv\mathbf{ q}}^{(n)}, \tag{15}\]
where \(E_{n}\) is the exciton energy of state \(n\), \(V\left(k\right)\) is the attractive electrostatic potential coupling electrons and holes, and \(\psi_{cv\mathbf{k}}^{(n)}\) is the wave function of the exciton. For notational simplicity, the \(\tau\) dependence of energy and wave function is omitted from the list of arguments. In Eq. (III), the valley dependence is present in the form factor \(\left\langle c_{\mathbf{k}}\right|\!\left|c_{\mathbf{q}}\right\rangle\left\langle v _{\mathbf{q}}\right|\!v_{\mathbf{k}}\right\rangle\). For our system, we consider \(V\left(k\right)\) to be the Rytova-Keldysh potential[73, 72], given in momentum space by
\[V\left(k\right)=-2\pi\hbar c\alpha\frac{1}{k\left(\epsilon+r_{0}k\right)}, \tag{16}\]
with \(\alpha\) the fine-structure constant, \(\epsilon\) the mean dielectric constant of the media surrounding the monolayer, and \(r_{0}\) an IP screening length[74] related to the polarizability of the material and usually obtained from DFT calculations[75]. From the analysis of Fig. 2C of Ref. [75] for graphene-like materials with a bandgap of \(E_{g}=2\,\)eV, we set \(r_{0}=40\,\)A.
Considering the excitonic wave function to have a well-defined angular momentum \(\ell_{n}\), we write it as \(\psi_{cv\mathbf{k}}^{(n)}=f_{cv\mathbf{k}}^{(n)}e^{i\ell_{n}\theta_{k}}\) and, defining \(\varphi=\theta_{q}-\theta_{k}\), rewrite the Bethe-Salpeter equation by converting the sum into an integral as
\[E_{n}f_{cv\mathbf{k}}^{(n)}=2\varepsilon f_{cv\mathbf{k}}^{(n)}+\frac{1}{4\pi ^{2}}\sum_{\lambda=0}^{2}\int_{0}^{\infty}\int_{0}^{2\pi}\,V\left(\left| \mathbf{k}-\mathbf{q}\right|\right)\mathcal{A}_{\lambda}\left(k,q\right)e^{i \lambda\tau\varphi}f_{cv\mathbf{q}}^{(n)}e^{i\ell_{n}\varphi}d\varphi\,qdq, \tag{17}\]
where \(E_{cv\mathbf{k}}\) became \(2\varepsilon\) as we are neglecting the effects of trigonal warping on the band structure for simplicity. This approximation removes all coupling of states with different angular momentum.
The radial component of the form factor is obtained directly from the expansion of \(\left\langle c_{\mathbf{k}}\right|\!\left|c_{\mathbf{q}}\right\rangle\left\langle v _{\mathbf{q}}\right|\!v_{\mathbf{k}}\rangle\) while again neglecting trigonal warping in the definition of the eigenvectors. Under this approximation, the eigenvectors read
\[\left|v_{\mathbf{k}}\right\rangle=\left[\begin{array}{cc}e^{-i\tau\theta} \sin\frac{x_{k}}{2}\\ \cos\frac{x_{k}}{2}\end{array}\right],\quad\left|c_{\mathbf{k}}\right\rangle= \left[\begin{array}{cc}-\cos\frac{x_{k}}{2}\\ e^{i\tau\theta}\sin\frac{x_{k}}{2}\end{array}\right],\]
where \(x_{k}=\tan^{-1}\left[\frac{\hbar v_{F}k}{\Delta}\right]\). The radial component of the form factor can then be written as
\[\mathcal{A}_{\lambda}\left(k,q\right)=\begin{cases}\frac{1}{4}\left(1+\cos x _{k}\right)\left(1+\cos x_{q}\right),&\lambda=0\\ \frac{1}{2}\sin x_{k}\sin x_{q},&\lambda=1\\ \frac{1}{4}\left(1-\cos x_{k}\right)\left(1-\cos x_{q}\right),&\lambda=2\end{cases},\]
where \(\lambda\) denotes the angular dependence present in the \(e^{i\lambda\tau\varphi}\) factor in Eq. (17).
As is evident from Eq. (17), the degeneracy in angular momentum \(\ell_{n}\leftrightarrow-\ell_{n}\) is immediately lifted within the same valley. However, a degeneracy between \(\left(\ell_{n}+m_{s},\tau\right)\) and \(\left(-\ell_{n}-m_{s},-\tau\right)\) excitons is still present, stemming from time reversal symmetry in the system[76]. Finally, Eq.
(17) is solved numerically via a simple numerical quadrature using a tangent grid \(k=\tan\left(x\frac{\pi}{2}\right)\) with 1000 points \(x\in[0,1]\), following the procedure already outlined several times in the literature, namely in Refs. [77; 78; 79; 61; 79].
When discussing excitonic states, we will use nomenclature similar to the 2D Hydrogen atom to distinguish the different angular momentum states (_i.e._, \(s\), \(p_{\pm}\), \(d_{\pm}\) states). As the pseudo-spin contribution \(m_{s}=0\), \(s\)-states will have \(\ell=0\), \(p_{\pm}\) will have \(\ell=\pm 1\), and analogously to higher angular momentum states.
\[\frac{\sigma_{\alpha\beta\gamma}^{\text{SHG}}\left(\omega\right)}{\sigma_{2} }=\frac{-iE_{g}\hbar^{2}}{2a\pi^{3}m^{2}}\sum_{n,m}\left[\frac{E_{n}X_{0n}^{ \alpha}Q_{nm}^{\beta}X_{m0}^{\gamma}}{\left(E_{n}-2\hbar\omega\right)\left(E_ {m}-\hbar\omega\right)}-\frac{E_{n}X_{n0}^{\alpha}Q_{mn}^{\beta}X_{0m}^{ \gamma}}{\left(E_{n}+2\hbar\omega\right)\left(E_{m}+\hbar\omega\right)}-\frac {\left(E_{n}-E_{m}\right)X_{0n}^{\alpha}Q_{nm}^{\beta}X_{m0}^{\gamma}}{\left(E _{n}+\hbar\omega\right)\left(E_{m}-\hbar\omega\right)}\right]. \tag{19}\]
In these expressions, \(E_{n}\) is the energy of the excitonic state \(n\), and the one- and two-state excitonic matrix elements are defined as [70; 57]
\[X_{0n}^{\alpha}=i\int\,\psi_{cv\mathbf{k}}^{(n)}\frac{p_{cv\mathbf{k}}^{\alpha }}{E_{cv\mathbf{k}}}d^{2}\mathbf{k}, \tag{20}\]
and
\[Q_{nm}^{\alpha}=i\int\,\psi_{cv\mathbf{k}}^{(n)*}\left[\psi_{cv\mathbf{k}}^{( m)}\right]_{;k_{\alpha}}d^{2}\mathbf{k}, \tag{21}\]
where \(\left[\psi_{cv\mathbf{k}}^{(m)}\right]_{;k_{\alpha}}\) is the generalized derivative[66] in the \(\alpha\)-direction of the exciton wave function for the state \(m\) given in terms of the Berry connection \(\Omega_{ij\mathbf{k}}^{\alpha}\), defined as[70]
\[\left[\psi_{cv\mathbf{k}}^{(m)}\right]_{;k_{\alpha}}=\frac{\partial\psi_{cv \mathbf{k}}^{(m)}}{\partial k_{\alpha}}-i\left(\Omega_{cc\mathbf{k}}^{\alpha }-\Omega_{vv\mathbf{k}}^{\alpha}\right)\psi_{cv\mathbf{k}}^{(m)}. \tag{22}\]
Analogously to what was discussed regarding Eq. (14), the excitonic wave function will be independent of \(k_{z}\) and, as such, the \(\frac{\partial}{\partial k_{z}}\psi_{cv\mathbf{k}}^{(m)}\) term is dropped, meaning that \(Q_{nm}^{z}\) reads
\[Q_{nm}^{z}=\int\,\psi_{cv\mathbf{k}}^{(n)*}\left(\Omega_{cc\mathbf{k}}^{z}- \Omega_{vv\mathbf{k}}^{z}\right)\psi_{cv\mathbf{k}}^{(m)}d^{2}\mathbf{k}. \tag{23}\]
Additionally, one can easily convert Eqs. (18-19) into formulas for the associated susceptibility as \(\chi_{\alpha\beta}=\frac{i}{\omega\epsilon_{0}}\sigma_{\alpha\beta}\) and \(\chi_{\alpha\beta\gamma}^{\text{SHG}}=\frac{i}{2\omega\epsilon_{0}}\sigma_{ \alpha\beta\gamma}^{\text{SHG}}\), respectively[70].
## V Optical response of buckled gapped graphene
We will now quickly outline the IP optical selection rules for gapped graphene with trigonal warping, already discussed in the literature[57],
before focusing our attention on the OOP linear and non-linear optical response of buckled gapped graphene.
As discussed in Sec. II, we will be ignoring \(\pi-\sigma\) hybridization by assuming the buckling is much smaller than the lattice constant. Therefore, this model will be identical to the unbuckled monolayer discussed previously, apart from the alternating \(z\)-positions of the individual sublattices. More importantly, the eigenstates will remain those given by Eqs. (7-8), meaning that no changes to either the momentum matrix elements or to the Bethe-Salpeter equation are needed.
Throughout, we will consider a buckling parameter \(h=a/4\), where \(a\) is the lattice parameter. This matches approximately what is present in the literature[82, 83, 84, 6] where, depending on the material in question, the buckling parameter \(h\) takes values between \(a/2.5\) and \(a/8.6\). Additionally, as will be evident, the presence of trigonal warping is not necessary to obtain finite linear and non-linear OOP conductivities.
### In-Plane Optical Selection Rules
To obtain the optical selection rules, we must compute the angular integrals present in the excitonic matrix elements \(X^{x}_{0n}\) and \(Q^{x}_{nm}\). These optical selection rules are relevant not only for the IP linear and non-linear response, but also for the non-diagonal OOP response. For clarity, we separate this subsection into discussion on linear and non-linear response. For the linear optical response, we focus on the angular integral present in the definition of \(X^{x}_{0n}\) following Eq. (20). To zeroth order in \(\zeta_{\rm TW}\), the angular integral in Eq. (20) then reads
\[\int_{0}^{2\pi}\,e^{i\ell_{n}\theta}p^{x}_{v{\bf c}{\bf k}}d \theta\propto\left[\frac{\Delta}{\varepsilon}+\frac{\ell_{n}+\tau}{|\ell_{n}+ \tau|}\tau\right]\delta_{|\ell_{n}+\tau|,1}. \tag{24}\]
The presence of the Kronecker delta in Eq. (24) immediately gives rise to the well-known valley-dependent selection rules in gapped graphene, hexagonal Boron Nitride, and other monolayer materials with an hexagonal lattice [85, 86, 57]when one takes into account the valley-dependent pseudo-spin contribution. Including trigonal warping effects would lead to a quadratic correction allowing for transitions with \(|\ell_{n}+\tau|=2\) or \(|\ell_{n}+\tau|=4\).
For the non-linear optical response, we first focus our attention on the angular integral in the definition of \(Q^{x}_{nm}\) following Eq. (21). Performing the necessary angular integral, we write the matrix elements in a somewhat abusive but concise form as
\[Q^{x}_{nm}=Q^{x}_{|\ell_{m,n}|=1}+\zeta_{\rm TW}\left[Q^{x}_{| \ell_{m,n}|=2}+Q^{x}_{|\ell_{m,n}|=4}\right], \tag{25}\]
where the new indices restrict each term to the Kronecker deltas resulting from the different angular integrals and we defined \(\ell_{m,n}=\ell_{m}-\ell_{n}\) for conciseness. Besides the \(Q^{x}_{nm}\) matrix element, a linear contribution in \(\zeta_{\rm TW}\) is also present in the expansion of \(X^{x}_{0n}\). In the same notation, we write this contribution as
\[X^{x}_{0n}=X^{x}_{|\ell_{n}+\tau|=1}+\zeta_{\rm TW}\left[X^{x}_{| \ell_{n}+\tau|=2}+X^{x}_{|\ell_{n}+\tau|=4}\right]. \tag{26}\]
As we are considering contributions only up to first order in \(\zeta_{\rm TW}\), we must carefully analyze the matrix product \(X^{x}_{0n}Q^{x}_{nm}X^{x}_{m0}\) to understand which states are to be included. Knowing the simplified forms for the matrix elements, we can expand the oscillator strength \(X^{x}_{0n}Q^{x}_{nm}X^{x}_{m0}\) up to linear order in \(\zeta_{\rm TW}\). The non-zero contributions to the non-linear second order conductivity then read
\[\zeta_{\rm TW} \left[X^{x}_{|\ell_{n}+\tau|=2}Q^{x}_{|\ell_{m,n}|=1}X^{x,*}_{| \ell_{m}+\tau|=1}+\right.\] \[+X^{x}_{|\ell_{n}+\tau|=1}Q^{x}_{|\ell_{m,n}|=2}X^{x,*}_{|\ell_{m }+\tau|=1}+\] \[\left.+X^{x}_{|\ell_{n}+\tau|=1}Q^{x}_{|\ell_{m,n}|=1}X^{x,*}_{| \ell_{m}+\tau|=2}\right], \tag{27}\]
where the importance of including trigonal warping in order to obtain a non-zero second-order response is evident. Defining the oscillator strength \(\sigma_{\ell_{n};\ell_{m}}\equiv X^{x}_{\ell_{n}}Q^{x}_{\ell_{n},\ell_{m}}X^{x} _{\ell_{m}}\) from the allowed transitions of Eq. (27), the dominant
matrix elements correspond to the \(\sigma_{p_{+};s}\) and \(\sigma_{s;p_{+}}\), in perfect agreement with Fig. (2-c) of Ref. [57].
### Out-of-Plane Momentum and Berry Connection
The matrix elements of \(z\) are given by \(h\sigma_{z}\), with \(\sigma_{z}\) the diagonal Pauli matrix, and can be easily computed between bands \(n\) and \(m\) as
\[z_{nm\mathbf{k}}=\left\langle n_{\mathbf{k}}\middle|\left[\begin{array}{cc}h& 0\\ 0&-h\end{array}\right]\middle|m_{\mathbf{k}}\right\rangle. \tag{28}\]
Under the same linear approximation in \(\zeta_{\mathrm{TW}}\) for the band structure as discussed in Eq. (5) and considering only terms up to \(\mathcal{O}\left(\zeta_{\mathrm{TW}}^{1}\right)\), Eq. (28) reads
\[z_{vc\mathbf{k}} =-he^{i\tau\theta}\sqrt{1-\frac{\Delta^{2}}{\varepsilon^{2}}} \left[1+\zeta_{\mathrm{TW}}\tau\frac{\Delta^{2}}{\varepsilon^{2}}ak\sin 3 \theta\right], \tag{29}\] \[z_{cc\mathbf{k}} =h\frac{\Delta}{\varepsilon}\left[1-\zeta_{\mathrm{TW}}\tau \left(1-\frac{\Delta^{2}}{\varepsilon^{2}}\right)ak\sin 3\theta\right]\] \[=-z_{vv\mathbf{k}} \tag{30}\]
for the different band pairs.
Knowing the \(z_{ij\mathbf{k}}\) matrix elements, we can finally write the OOP component of the momentum and Berry connections as
\[p_{vc\mathbf{k}}^{z} =\frac{m}{i\hbar}2\varepsilon z_{vc\mathbf{k}}\] \[=2ih\frac{m}{\hbar}e^{i\tau\theta}\sqrt{\varepsilon^{2}-\Delta^{ 2}}\left[1+\zeta_{\mathrm{TW}}\tau\frac{\Delta^{2}}{\varepsilon^{2}}ak\sin 3 \theta\right] \tag{31}\]
and, following from [66],
\[\Omega_{cc\mathbf{k}}^{z}-\Omega_{vv\mathbf{k}}^{z}=z_{cc\mathbf{ k}}-z_{vv\mathbf{k}}\] \[=2h\frac{\Delta}{\varepsilon}\left[1-\zeta_{\mathrm{TW}}ak\tau \left(1-\frac{\Delta^{2}}{\varepsilon^{2}}\right)\sin 3\theta\right]. \tag{32}\]
The jump from \(\frac{\partial}{\partial k_{z}}\) to \(iz\) can be understood by considering the buckled monolayer as a repeated structure in the \(z\)-direction. This means that the wavefunctions carry a \(e^{ik_{z}z}\) factor, while the periodic parts (_i.e._, the eigenvectors in Eqs. (7-8)) are independent of \(k_{z}\). Finally, the period of this repeated structure is taken to infinity.
While the IP momentum in Eq. (24) goes to zero as \(\Delta/\varepsilon\approx k^{-1}\) for large \(k\), the dominant term of \(p_{vc\mathbf{k}}^{z}\) is linear in \(k\). As a consequence, contributions from continuum states (_i.e._, states where \(E_{n}>2\Delta\)) will quickly increase with \(\omega\).
### Out-of-Plane Excitonic Linear Conductivity
We will now analyze the OOP excitonic conductivity. Considering only the zeroth order contribution from \(\zeta_{\mathrm{TW}}\), it is immediately evident from the OOP momentum of Eq. (31) that only transitions to excitonic states with \(\ell_{n}=-\tau\) are allowed, meaning that \(X_{0n}^{z}\) reads
\[X_{0n}^{z}=-\frac{2\pi hm}{\hbar}\delta_{\ell_{n},-\tau}\int_{0} ^{\infty}\ f_{cv\mathbf{k}}^{(n)}\sqrt{1-\frac{\Delta^{2}}{\varepsilon^{2}}} kdk. \tag{33}\]
Including trigonal warping effects would allow for transitions where \(|\ell_{n}+\tau|=3\) and the correction would be quadratic in \(\zeta_{\mathrm{TW}}\).
The real part of the linear excitonic optical conductivity of the buckled monolayer for \(\epsilon=1\) is plotted in Fig. (2), with the top panel the IP response and the bottom panel the OOP response. Right side diagrams represent the transitions allowed for each component in the \(\tau=1\) valley. Considering the \(\tau=-1\) valley would imply a sign flip of the diagrams (_e.g._, exchanging \(p_{-}\) by \(p_{+}\), etc), as evident from the selection rules of Eqs. (26) and (33). As expected from the form of the momentum operator \(p_{vc}^{z}\), we observe an ever-increasing linear optical conductivity when accounting for continuum states (see Appendix A). The optical response present in the bottom panel of Fig. (2) also qualitatively matches the measured optical conductivity of anisotropic materials, such as ZrSiS, ZrGeS, and ZrGeSe, found in the current literature[15; 18].
### Out-of-Plane Excitonic Non-Linear Conductivity
Focusing now on the non-linear regime and considering only zeroth order in \(\zeta_{\rm TW}\) contributions, the \(Q_{nm}^{z}\) matrix element reads
\[Q_{nm}^{z}=4\pi h\delta_{\ell_{m},\ell_{n}}\int_{0}^{\infty}\,f_{cvk}^{(n)\ast} \frac{\Delta}{\varepsilon}f_{cvk}^{(m)}kdk, \tag{34}\]
allowing transitions between states with the same angular momentum. However, as \(X_{0n}^{z}\) only allows \(\ell_{n}=-\tau\) to zeroth order in \(\zeta_{\rm TW}\), we arrive at the fact that only \(\ell_{n}=\ell_{m}=-1\) states contribute in the \(\tau=1\) valley.
Including trigonal warping effects in \(Q_{nm}^{z}\) would allow for transitions where \(\left|\ell_{m}-\ell_{n}\right|=3\). Considering this extra term together with the selection rules present in \(X_{0n}^{z}\) leads to a vanishing first order contribution from \(\zeta_{\rm TW}\) to the SHG conductivity. Additionally, as each momentum matrix element will carry a factor of \(h\), the SHG conductivity will therefore be proportional to \(\left(h/a\right)^{3}\). The SHG conductivity is plotted in the middle panel of Fig. (3) for \(\epsilon=1\). Apart from the much smaller amplitude due to
Figure 2: (Left) Real part of the linear IP (top) and OOP (bottom) optical response for \(\epsilon=1\). Orange curve corresponds to the excitonic bound states, while blue line also includes continuum states. Vertical dashed line represents the bandgap. (Right) Diagram of dominant excitonic selection rules in the \(\tau=1\) valley for linear IP (top) and OOP (bottom) optical response.
Figure 3: (Left) Real part of the SHG optical response with diagonal IP (top), diagonal OOP (middle) and non–diagonal OOP (bottom) conductivity for \(\epsilon=1\), \(h=a/4\). Orange curve corresponds to only excitonic bound states, while blue line also includes continuum states. Vertical (dotted) dashed black lines represent (half) the bandgap of the system. (Right) Diagram of dominant excitonic selection rules in the \(\tau=1\) valley for each component. Dashed line means the transition is allowed by trigonal warping, solid lines are transitions allowed without trigonal warping. Arrow direction and colour represent the specific resonance when multiple contributions are present.
the cubic dependence on \(h/a\), it is also noteworthy that the response above \(\hbar\omega=2\,\)eV remains remarkably close to its maximum value. This is very different from what occurs for the IP response, where the response above \(\hbar\omega=2\,\)eV is much smaller than its maximum value.
### Non-Diagonal Out-of-Plane Response
Finally, we will consider the non-diagonal OOP response in buckled gapped graphene. Considering again only the \(x\)-direction for the IP response, we have three different components which can prove interesting: \(\sigma_{zxx}\), \(\sigma_{xzx}\) and \(\sigma_{zxx}\)
Looking more carefully at the selection rules of the system, we can immediately tell that \(\sigma_{zxx}=0\) when recalling Eqs. (26,33,34): while \(X_{0n}^{z}\) and \(Q_{nm}^{z}\) only allow \(\ell_{n}=\ell_{m}=-1\) states, \(X_{0n}^{x}\) explicitly forbids these states. As such, we focus our attention only on \(\sigma_{zxx}\). Although we will not be discussing \(\sigma_{xzx}\), a quick analysis of the various selection rules discussed previously shows that the dominant response shall be a zeroth order contribution in \(\zeta_{\rm TW}\) of the form \(X_{\ell_{n}=0}^{x}Q_{nm}^{z}X_{\ell_{m}=0}^{x}\). Under Kleinman symmetry[87], \(\sigma_{xzx}\) will be approximately equal to \(\sigma_{zxx}\).
Now explicitly computing the selection rules for \(\sigma_{zxx}\), \(X_{0n}^{z}\) again immediately forces \(\ell_{n}=-1\). Recalling Eqs. (25,26), the dominant transition will be associated with the matrix elements \(Q_{|\ell_{m,n}|=1}^{x}X_{|\ell_{m}+\tau|=1}^{x}\) (_i.e._, zeroth order contribution in \(\zeta_{\rm TW}\)), meaning that \(\ell_{m}\) is restricted to \(|\ell_{m}+\tau|=1\). Excluding all other contributions, we can immediately expect that this off-diagonal term will be significantly larger than \(\sigma_{zzz}^{(2)}\), as the dependence on \(h/a<1\) will be linear instead of cubic. Additionally, \(X_{m0}^{x}\) is much larger than \(X_{m0}^{z}\), which will also contribute to this trend.
This excitonic non-linear conductivity is then plotted in the bottom panel of Fig. (3) for \(\epsilon=1\). As expected from the qualitative analysis of the matrix elements, the relative magnitude of the off-diagonal OOP contribution is much larger than the diagonal OOP response present in the top panel of Fig. (3). As discussed previously, this mainly stems from the lower order dependence in \(h/a\). Additionally, and as expected from the general form \(\sigma_{zxx}\), we can also observe that the bound state peaks corresponding to \(2\hbar\omega=E_{n}\) (_i.e._, states below \(\hbar\omega=\Delta\)) match exactly with the corresponding regime in \(\sigma_{zzz}\), while those corresponding to \(\hbar\omega=E_{m}\) match exactly with the same regime in \(\sigma_{zxx}\).
Figure 4: Non–linear SC IP (top), diagonal OOP (middle) and non–diagonal OOP (bottom) optical for \(\epsilon=1\), \(h=a/4\). Orange curve corresponds to only excitonic bound states, while blue line also includes continuum states. Vertical dashed black lines represents the bandgap of the system.
We also observe that the magnitude of \(\sigma^{\rm SHG}_{zxx}\) is remarkably close to that of \(\sigma^{\rm SHG}_{xxx}\) for the buckling parameter chosen. This will, of course, be dictated by the ratio \(h/a\), meaning that for a larger buckling parameter the non-diagonal OOP SHG response will be larger than the diagonal IP SHG response. Additionally, in Fig. (4), we plot the SC for the three different tensor elements discussed previously. As the selection rules are the same as presented in Fig. (3), they are not included in the panels.
Finally, we present the \(xxx\), \(zzz\) and \(zxx\) components of the absolute value of the SHG non-linear optical susceptibilities. These can be directly computed from the conductivity as
\[\chi^{\rm SHG}_{\alpha\beta\gamma}=\frac{i}{2\omega\epsilon_{0}}\sigma^{\rm SHG }_{\alpha\beta\gamma} \tag{35}\]
and their absolute value is presented in Fig. (5). Due to the inclusion of a finite broadening \(\hbar\Gamma=0.05\,\rm eV\), the three considered tensor elements of the conductivity take a small but non-zero value at \(\hbar\omega=0\), with its magnitude less than one percent of the maximum of each tensor component. Still, the presence of this finite value at \(\hbar\omega=0\) means that the broadening must also be considered in the \(1/\omega\) factor present in Eq. (35).
The relative amplitudes of the different components can be easily compared, with \(\chi_{zxx}\) presenting a very similar amplitude to \(\chi_{xxx}\). Additionally, \(\chi_{zzz}\) is roughly a factor of 1/20 smaller than either \(\chi_{xxx}\) or \(\chi_{zxx}\) within the bandgap of the system, as expected from the cubic dependence on the ratio \(h/a\). The different dependence on \(h/a\) in each component means that, as discussed previously, choosing a larger buckling parameter will lead to a comparatively greater OOP susceptibility. Notably, the left-most peak of \(\chi_{xxx}\) is not present in \(\chi_{zxx}\). This is due to the different selection rules for the two components of the SHG non-linear susceptibility, where certain transitions present in \(\chi_{xxx}\) are no longer allowed for \(\chi_{zxx}\).
## VI Summary
In this paper, we studied the excitonic linear and non-linear optical properties of anisotropic buckled monolayer semiconductors. To this end, we began by considering the gapped Dirac model with trigonal warping. The excitonic states were computed by numerical diagonalization of the Bethe-Salpeter equation, allowing us to explicitly discuss the excitonic selection rules of the system.
Introducing a small buckling in the lattice structure of the monolayer, we then obtained the OOP momentum matrix elements and Berry connections, discussing the resulting OOP excitonic optical selection rules. We then analyzed the \(xxx\), \(zzz\) and \(zxx\) tensor elements of both SHG and SC optical response, discussing the differences and similarities between the three components.
Finally, we computed the absolute value of the non-linear optical susceptibility, directly comparing the amplitudes of the \(\chi_{xxx}\), \(\chi_{zzz}\) and \(\chi_{zxx}\) matrix elements. The OOP magnitudes are, of course, dictated by the ratio between the buckling parameter (\(h\)) and the lattice constant (\(a\)), meaning that a structure with a different buckling parameter will present greatly different relative magnitudes. While the OOP diagonal component had a much smaller maximum am
Figure 5: Magnitude of three different components (\(xxx\), \(zzz\) and \(zxx\)) of SHG non–linear optical susceptibilities for \(\epsilon=1\), \(h=a/4\). Vertical (dotted) dashed black lines represent (half) the bandgap of the system.
plitude, stemming from the cubic dependence on the ratio \(h/a\), the non-diagonal OOP component had a very similar amplitude to that of the diagonal IP component.
## Acknowledgements
M.F.C.M.Q. acknowledges the International Iberian Nanotechnology Laboratory (INL) and the Portuguese Foundation for Science and Technology (FCT) for the Quantum Portugal Initiative (QPI) grant SFRH/BD/151114/2021.
## Appendix A Electronic Linear and Non-Linear Conductivity Expressions
In this appendix, we will present the expressions for the free-carrier conductivity in our monolayer system. These were computed directly from the definitions in Eqs. (12-13) while considering only contributions up to first order in \(\zeta_{\rm TW}\). For this effect, we recall the definitions of the in- and OOP momentum matrix elements and Berry connections from Ref. [57] (with the appropriate gauge transformation) and Eqs. (31,32).
In the following expressions \(E_{cv\mathbf{k}}=E_{cvk}\), as including the contribution from trigonal warping in the bandstructure would introduce contributions one order higher in \(\zeta_{\rm TW}\) which would then vanish upon integration and summation over valley. As such, the band structure now only depends on the radial component \(k\), meaning that \(E_{cvk}=2\varepsilon\), where \(\varepsilon\) is as defined in Eq. (6). As an example, the non-linear IP response would, up to first order, include an extra term originating from \(p_{cv\mathbf{k}}^{\alpha}\left[p_{cv\mathbf{k}}^{\beta}\right]_{;k_{\lambda}}\) expanded to zeroth order, which vanishes upon integration.
Starting with the diagonal linear response described in Eq. (12), it follows that
\[\frac{\sigma_{xx}(\omega)}{\sigma_{0}}=\frac{2i}{\pi}\left[\frac{E_{g}}{\hbar \omega}-\left(1+\frac{E_{g}^{2}}{\hbar^{2}\omega^{2}}\right)\tanh^{-1}\left( \frac{\hbar\omega}{E_{g}}\right)\right], \tag{16}\]
where \(E_{g}=2\Delta\) and \(\tanh^{-1}\) is the inverse hyperbolic tangent.
Under a similar analysis, we compute the angular integral present in Eq. (13) and obtain the generic radial integral form of the diagonal second order response as
\[\frac{\sigma_{xxx}^{\rm(intra)}\left(\omega_{p},\omega_{q}\right)}{\sigma_{2} }=i\frac{4\zeta_{\rm TW}}{\pi}E_{g}^{2}\int_{E_{g}}^{\infty}\frac{\hbar\omega _{p}+\hbar\omega_{pq}}{\left(E_{cvk}^{2}-\hbar^{2}\omega_{q}^{2}\right)\left( E_{cvk}^{2}-\hbar^{2}\omega_{pq}^{2}\right)}dE_{cvk}+(p\leftrightarrow q). \tag{17}\]
Choosing specifically SHG and SC processes, we obtain
\[\frac{\sigma_{xxx}^{\rm SHG}(\omega)}{\sigma_{2}}\equiv\frac{\sigma_{xxx}^{ \rm(intra)}(\omega,\omega)}{\sigma_{2}}=i\frac{8\zeta_{\rm TW}}{\pi}\left( \frac{E_{g}}{\hbar\omega}\right)^{2}\left[\tanh^{-1}\left(\frac{\hbar\omega}{E _{g}}\right)-\frac{1}{2}\tanh^{-1}\left(\frac{2\hbar\omega}{E_{g}}\right)\right] \tag{18}\]
and
\[\frac{\sigma_{xxx}^{\rm SC}(\omega)}{\sigma_{2}}\equiv\frac{\sigma_{xxx}^{ \rm(intra)}\left(\omega,-\omega^{*}\right)}{\sigma_{2}}=\frac{16\zeta_{\rm TW }}{\pi}\Im\left[\left(\frac{E_{g}}{\hbar\omega}\right)^{2}\tanh^{-1}\left( \frac{\hbar\omega}{E_{g}}\right)-\frac{E_{g}}{\hbar\omega}\right], \tag{19}\]
where \(\Im\) denotes the imaginary part.
A similar analysis can be done for the diagonal OOP linear and non-linear response, although one must compute \(\sigma_{zz}\) carefully as the integration leads to a divergent result if the infinite \(k\)-space is considered. This is, however, only true for the imaginary part. Restricting our analysis to the real part, we find a finite result reading
\[\Re\left[\frac{\sigma_{zz}(\omega)}{\sigma_{0}}\right]\approx\frac{8}{3\gamma^ {2}}\left(\frac{h}{a}\right)^{2}\left(\hbar^{2}\omega^{2}-E_{g}^{2}\right)H \left(\hbar\omega-E_{g}\right), \tag{10}\]
where \(H(x)\) represents the Heaviside step function. For the diagonal non-linear response, no convergence issues are present and the integral can be considered over the infinite \(k\)-space
\[\frac{\sigma_{zzz}^{\rm(intra)}\left(\omega_{p},\omega_{q}\right)}{\sigma_{2}} =\frac{32\hbar\left(\omega_{p}+\omega_{pq}\right)E_{g}^{2}}{3i\pi\gamma^{2}} \left(\frac{h}{a}\right)^{3}\int_{E_{g}}^{\infty}\frac{E_{cvk}^{2}-E_{g}^{2}}{ \left(E_{cvk}^{2}-\hbar^{2}\omega_{p}^{2}\right)\left(E_{cvk}^{2}-\hbar^{2} \omega_{pq}^{2}\right)}dE_{cvk}+(p\leftrightarrow q). \tag{11}\]
Again restricting our analysis to SHG and SC, we obtain
\[\frac{\sigma_{zzz}^{\rm SHG}(\omega)}{\sigma_{2}}=\frac{32E_{g}^{2}}{3i\pi \gamma^{2}}\left(\frac{h}{a}\right)^{3}\left[\left(\frac{E_{g}^{2}}{\hbar^{2} \omega^{2}}-1\right)\tanh^{-1}\left(\frac{\hbar\omega}{E_{g}}\right)-\frac{1}{ 2}\left(\frac{E_{g}^{2}}{\hbar^{2}\omega^{2}}-4\right)\tanh^{-1}\left(\frac{2 \hbar\omega}{E_{g}}\right)\right] \tag{12}\]
and
\[\frac{\sigma_{zzz}^{\rm SC}(\omega)}{\sigma_{2}}=\frac{32E_{g}^{2}}{\pi\gamma ^{2}}\left(\frac{h}{a}\right)^{3}\Im\left[\frac{E_{g}}{\hbar\omega}+\left(1- \frac{E_{g}^{2}}{\hbar^{2}\omega^{2}}\right)\tanh^{-1}\left(\frac{\hbar\omega }{E_{g}}\right)\right]. \tag{13}\]
Figure 6: Convergence of the real part of the (left) linear, (middle) SHG and (right) SC IP optical response towards the free carrier limit as the dielectric constant \(\epsilon\) increases. Vertical axis is in units of \(\sigma_{0}\) for the linear response and \(\sigma_{2}\) for the non–linear response.
Finally, we consider the off-diagonal OOP response \(\sigma_{zxx}\), where we can also consider the integral over infinite \(k\)-space, reading
\[\frac{\sigma_{zxx}^{\rm(intra)}\left(\omega_{p},\omega_{q}\right)}{\sigma_{2}}=16 i\frac{\left(\hbar\omega_{p}+\hbar\omega_{pq}\right)}{\pi}\Delta^{2}\frac{h}{a} \int_{E_{g}}^{\infty}\frac{E_{g}^{2}-E_{cvk}^{2}}{E_{cvk}^{2}\left(E_{cvk}^{2} -\hbar^{2}\omega_{q}^{2}\right)\left(E_{cvk}^{2}-\hbar^{2}\omega_{pq}^{2} \right)}dE_{cvk}+(p\leftrightarrow q). \tag{100}\]
Looking again at SHG and SC, we obtain
\[\frac{\sigma_{zxx}^{\rm SHG}(\omega)}{\sigma_{2}}=i\frac{1}{\pi} \frac{h}{a}\frac{E_{g}^{2}}{\hbar^{2}\omega^{2}}\left[3\frac{E_{g}}{\hbar \omega}-2\left(\frac{E_{g}^{2}}{\hbar^{2}\omega^{2}}-1\right)\tanh^{-1}\left( \frac{\hbar\omega}{E_{g}}\right)+\frac{1}{2}\left(\frac{E_{g}^{2}}{\hbar^{2} \omega^{2}}-4\right)\tanh^{-1}\left(\frac{2\hbar\omega}{E_{g}}\right)\right] \tag{101}\]
and
\[\frac{\sigma_{zxx}^{\rm SC}(\omega)}{\sigma_{2}}=\frac{4}{\pi} \frac{h}{a}\Im\left[\frac{E_{g}^{3}}{\hbar^{3}\omega^{3}}-\frac{2E_{g}}{3 \hbar\omega}+\frac{E_{g}^{2}}{\hbar^{2}\omega^{2}}\left(1-\frac{E_{g}^{2}}{ \hbar^{2}\omega^{2}}\right)\tanh^{-1}\left(\frac{\hbar\omega}{E_{g}}\right) \right]. \tag{102}\]
Knowing these expressions, we can study the convergence of the excitonic conductivities towards the free-carrier regime as the dielectric constant increases. This is plotted in Figs. (6,7,8) for dielectric constant \(\epsilon\) between 1 and 20, as well as the free-carrier limit (in black). In these plots, we can see the excitonic conductivity converging towards the free-carrier regime as the dielectric constant of the medium surrounding the monolayer increases, as expected from the fast drop of binding energies and number of bound states.
Figure 7: Same as Fig. (6), except for the diagonal OOP response. |
2308.10213 | The boundary of Rauzy fractal and discrete tilings | The Rauzy fractal is a domain in the two-dimensional plane constructed by the
Rauzy substitution, a substitution rule on three letters. The Rauzy fractal has
a fractal-like boundary, and the currently known its constructions is not only
for its boundary but also for the entire domain. In this paper, we show that
all points in the Rauzy fractal have a layered structure. We propose two
methods of constructing the Rauzy fractal using layered structures. We show how
such layered structures can be used to construct the boundary of the Rauzy
fractal with less computation than conventional methods. There is a
self-replicating pattern in one of the layered structure in the Rauzy fractal.
We introduce a notion of self-replicating word and visualize how some
self-replicating words on three letters creates discrete tiling of the two
dimensional plane. | Woojin Choi, Hyosang Kang, Jeonghoon Rhee, Youchan Oh | 2023-08-20T09:45:22Z | http://arxiv.org/abs/2308.10213v1 | # The boundary of Rauzy fractal and discrete tilings
###### Abstract.
The Rauzy fractal is a domain in the two-dimensional plane constructed by the Rauzy substitution, a substitution rule on three letters. The Rauzy fractal has a fractal-like boundary, and the currently known its constructions is not only for its boundary but also for the entire domain. In this paper, we show that all points in the Rauzy fractal have a layered structure. We propose two methods of constructing the Rauzy fractal using layered structures. We show how such layered structures can be used to construct the boundary of the Rauzy fractal with less computation than conventional methods. There is a self-replicating pattern in one of the layered structure in the Rauzy fractal. We introduce a notion of self-replicating word and visualize how some self-replicating words on three letters creates discrete tiling of the two dimensional plane.
\({}^{1}\)Daequ Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, South Korea. \({}^{2}\)Gyeonggi Science High School (GSHS), Gyeonggl-do 16297, South Korea. \({}^{3}\)Seoul Science High School (SSHS), Seoul 03066, South Korea. \({}^{*}\)Corresponding author _E-mail addresses_: [email protected]
**Key words.** Rauzy fractal, fractal boundary, discrete tiling
## 1. Introduction
In [9], Rauzy proposed a method of constructing a compact region called the **Rauzy fractal** (Figure 1a). There are two characteristics in the Rauzy fractal. One is that the Rauzy fractal has a fractal-like boundary (as its name indicates), and another is that it discretely tiles the two-dimensional plane (Figure 1b). The tiling characteristic of the Rauzy fractal generalizes to the Pisot conjecture, which states that every Pisot substitution (SS2) on \(d\) letters gives a discrete tiling of the \(d\)-dimensional space \(\mathbb{R}^{d}\)[2, 3, 6, 10, 13]. Pisot substitutions are also studied in relationship with Pisot numbers [7, 14]. In symbolic dynamics, Pisot substitution plays one of key role in understanding the substitutive systems [1, 8]. Pisot conjecture further generalizes to the pure discrete spectrum conjecture which states that every dynamical system defined by a Pisot substitution on \(d\) letters has a pure discrete spectrum, which has been proved only for \(d\leq 2\)[4, 5, 11, 12].
There are two ways known to construct Rauzy fractal. One uses a convergent sequence of 3-dimensional points [9], and the other uses the exductive method [2]. Both methods construct the entire Rauzy fractal, and increase the resolution of the Rauzy fractal with larger computations for its boundary and interior at the same time. Since the Rauzy fractal is a simply connected domain, so the higher
Introduction
The **Rauzy fractal** is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_, which is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_. The _Rauzy fractal_ is a generalization of the _Rauzy fractal_.
shows few examples of words \([\mathbf{w}_{n}]\) and their word vectors \(\mathbf{v}_{n}\)
\[[\mathbf{w}_{0}] =[0], \mathbf{v}_{0} =(1,0,0),\] \[=[01], \mathbf{v}_{1} =(1,1,0),\] \[=[0102], \mathbf{v}_{2} =(2,1,1), \tag{2}\] \[=[0102010], \mathbf{v}_{3} =(4,2,1),\] (3) \[=[0102010010201], \mathbf{v}_{4} =(7,4,2). \tag{1}\]
A **subword** of \([w_{0}\cdots w_{l}]\) is a substring of the form \([w_{0}\cdots w_{l^{\prime}}]\), \(0\leq l^{\prime}\leq l\). For example, from Equations (2) - (2), \([\mathbf{w}_{i}]\) is a subword for \([\mathbf{w}_{j}]\) for \(i<j\). We consider the empty word as a subword of any word. We can concatenate words to make another word. For example, from Equations (2) - (2),
\[[\mathbf{w}_{3}]=[\mathbf{w}_{2}][\mathbf{w}_{1}][\mathbf{w}_{0}]=[0102010].\]
A **substitution on \(d\) letters** is a map that transforms single-character words \([0],\cdots,[d-1]\) to words on \(d\) letters. For example, \(\sigma_{i}\), \(i=0,1,2,3\), defined below are substitutions on 3 letters:
\[\sigma_{0}([0]) =[01], \sigma_{0}([1]) =[02], \sigma_{0}([2]) =[0]\] \[\sigma_{1}([0]) =[12], \sigma_{1}([1]) =[2], \sigma_{1}([2]) =[0]\] \[\sigma_{2}([0]) =[0102], \sigma_{1}([1]) =[2], \sigma_{1}([2]) =[0] \tag{5}\] \[\sigma_{3}([0]) =[01], \sigma_{2}([1]) =[2], \sigma_{2}([2]) =[0] \tag{4}\]
A substitution can transforms any words as follows
\[\sigma([w_{0}\cdots w_{l}])=\sigma([w_{0}])\cdots\sigma([w_{l}]).\]
With the initial word \([\mathbf{w}_{0}]=[0]\), any substitution \(\sigma\) generates the sequence of words \([\mathbf{w}_{n}]\) recursively as follows.
\[[\mathbf{w}_{n+1}]=\sigma([\mathbf{w}_{n}])\text{ for }n\geq 0. \tag{6}\]
For example, \([\mathbf{w}_{n}]\), \(n=0,\cdots,4\), in Equations (2) - (2) are the first five sequence generated by the substitution \(\sigma_{0}\) in Equation (2). The substitution \(\sigma_{0}\) in Equation (2) is called the **Rauzy substitution**. The sequence of words \([\mathbf{w}_{n}]\) in Equation (2) obtained by the Rauzy substitution is called the **tribonacci words**. Let us denote the tribonacci words as \([\mathbf{a}_{n}]\) to distinguish them from general notation for words The tribonacci words satisfies the following recursive formula.
\[[\mathbf{a}_{n+3}]=[\mathbf{a}_{n+2}][\mathbf{a}_{n+1}][\mathbf{a}_{n}]\text{ for }n\geq 0. \tag{7}\]
Let \(\mathbf{v}_{n}\) be the word vector for the word \([\mathbf{w}_{n}]\) in Equation (2). We can associate a unique \(d\times d\) matrix \(M\) for each substitution \(\sigma\) that satisfies
\[\mathbf{v}_{n+1}=M\mathbf{v}_{n}\text{ for }n\geq 0 \tag{8}\]
A \(d\times d\) matrix \(M\) is called a **Pisot matrix** if its characteristic polynomial has a unique real root \(\lambda\) greater than 1 and all the other (complex) roots have absolute values less than 1. A substitution \(\sigma\) is called a **Pisot substitution** if the corresponding matrix \(M\) in Equation (2) is a Pisot matrix. The unique real root \(\lambda\) is called a **Pisot number**.
Table 1 shows the Pisot matrix and Pisot number for the substitutions \(\sigma_{i}\) in Equation (2) - (2). The Pisot matrix has a real eigenvector \(\mathbf{v}_{\infty}\) whose eigenvalue is the Pisot number \(\lambda\).
\[M\mathbf{v}_{\infty}=\lambda\mathbf{v}_{\infty}.\]
Indeed, the limit \(\lim\mathbf{v}_{n}/\|\mathbf{v}_{n}\|\) is the eigenvector with the eigenvalue \(\lambda\), where \(\mathbf{v}_{n}\) is the word vector for \([\mathbf{w}_{n}]\) in Equation (2). The hyperplane \(P\) in \(\mathbb{R}^{d}\) orthogonal to \(\mathbf{v}_{\infty}\) is call the **contracting plane**. For each \([\mathbf{w}_{n}]\), let \(\mathbf{v}_{l}\) be the word vector for the subword of \([\mathbf{w}_{n}]\) with the length \(l\). Let \(\pi:\mathbb{R}^{d}\to P\) be the orthogonal projection and define the set \(R_{n}\) as
\[R_{n}=\{\mathbf{0}\}\cup\{\pi(\mathbf{v}_{l})\,|\,0\leq l\leq L\}.\]
We will call the set \(R=\bigcup_{n=0}^{\infty}R_{n}\) the **Pisot domain** for \(\sigma\).
Figure 0(a) is the Pisot domain for the Rauzy substitution \(\sigma_{0}\), and Figure 2 are the Pisot domains for the substitutions \(\sigma_{i}\), \(i=1,2,3\). Here we explain how to obtain such figures. Let
\[\mathbf{e}_{0}=(1,0,0),\mathbf{e}_{1}=(0,1,0),\mathbf{e}_{2}=(0,0,1) \tag{9}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Pisot & Pisot & characteristic & Pisot \\ substitution & matrix & polynomial & number \\ \hline \hline \(\sigma_{0}\) & \(\begin{bmatrix}1&1&1\\ 1&0&0\\ 0&1&0\end{bmatrix}\) & \(1+\lambda+\lambda^{2}=\lambda^{3}\) & 1.8393 \\ \hline \(\sigma_{1}\) & \(\begin{bmatrix}0&1&1\\ 0&0&1\\ 1&0&0\end{bmatrix}\) & \(1+\lambda=\lambda^{3}\) & 1.3247 \\ \hline \(\sigma_{2}\) & \(\begin{bmatrix}2&1&1\\ 0&0&1\\ 1&0&0\end{bmatrix}\) & \(1+\lambda+2\lambda^{2}=\lambda^{3}\) & 2.5468 \\ \hline \(\sigma_{3}\) & \(\begin{bmatrix}1&1&0\\ 0&0&1\\ 1&0&0\end{bmatrix}\) & \(1+\lambda^{2}=\lambda^{3}\) & 1.4656 \\ \hline \end{tabular}
\end{table}
Table 1. Matrices associated with Pisot substitution
Figure 2. Pisot domains for Pisot substitutions in three colors.
be the standard orthogonal basis for \(\mathbb{R}^{3}\). We choose sufficiently long word \([\mathbf{w}_{n}]=[w_{0}\cdots w_{L}]\), and its word vector \(\mathbf{v}_{n}\). We approximate \(\mathbf{v}_{\infty}\approx\mathbf{v}_{n}\), to define the contracting plane \(P\) to be the hyperbolic plane orthogonal to \(\mathbf{v}_{n}\). (The size of \(n\) is determined by how "dense" the figure obtained at the end looks.) We run the Gram-Schmidt process on the set \(\{\mathbf{v}_{n}/\|\mathbf{v}_{n}\|,\mathbf{e}_{0},\mathbf{e}_{1}\}\) to obtain two orthonormal basis \(\mathbf{r}_{0}\), \(\mathbf{r}_{1}\) on \(P\). For each subword \([w_{0}\cdots w_{l}]\), \(0\leq l\leq L\), of \([\mathbf{w}_{n}]\), let \(\mathbf{v}_{l}\) be its word vector, and define the \(2\)-dimensional vector \(\mathbf{x}_{l}\) as follows.
\[\mathbf{x}_{l}=(\pi(\mathbf{v}_{l})\cdot\mathbf{r}_{0},\pi(\mathbf{v}_{l}) \cdot\mathbf{r}_{1}).\]
We then plot the dot at \(\mathbf{x}_{l}\) for all \(0\leq l\leq L\) with the color depending on the letter of the character \(w_{l}\).
## 3. Rauzy fractal construction A
Our prime concern is to understand the Rauzy fractal (Figure 0(a)) and its boundary. We will consider the Rauzy substitution \(\sigma_{0}\) only from now on. We will call a \(2\)-dimensional point \(\mathbf{x}\) to be **Rauzy** if \(\mathbf{x}\in R_{n}\) for some \(n\geq 0\). Equation (2) implies that \(R_{n}\subset R_{n+1}\). Thus we can identify Rauzy points by the lengths of corresponding subwords uniquely.
Let \(\mathbf{e}_{0},\mathbf{e}_{1},\mathbf{e}_{2}\) be the standard basis on \(\mathbb{R}^{3}\) defined in Equation (2). Let \(\mathbf{u}_{0}\), \(\mathbf{u}_{1}\), and \(\mathbf{u}_{2}\) be the \(2\)-dimensional vectors defined by
\[\mathbf{u}_{i}=(\pi(\mathbf{e}_{i})\circ\mathbf{r}_{0},\pi(\mathbf{e}_{i}) \circ\mathbf{r}_{1}),\text{ for }i=0,1,2, \tag{10}\]
where \(\mathbf{r}_{0},\mathbf{r}_{1}\) are orthonormal basis on the contracting plane \(P\) for the Rauzy substitution.
Let \(\mathbf{b}_{n}\) be the Rauzy point for the tribonacci words \([\mathbf{a}_{n}]\). The first four \(\mathbf{b}_{n}\) are
\[\mathbf{b}_{0} =\mathbf{u}_{0},\] \[\mathbf{b}_{1} =\mathbf{u}_{0}+\mathbf{u}_{1},\] \[\mathbf{b}_{2} =2\mathbf{u}_{0}+\mathbf{u}_{1}+\mathbf{u}_{2},\] \[\mathbf{b}_{3} =4\mathbf{u}_{0}+2\mathbf{u}_{1}+\mathbf{u}_{2},\] \[\mathbf{b}_{4} =7\mathbf{u}_{0}+4\mathbf{u}_{1}+2\mathbf{u}_{2}.\]
For a semantic reason, let us denote \(\mathbf{b}_{i}^{(j)}=\mathbf{b}_{3i+j}\). The **A-layer at the level \(i\)** is the set \(V_{i}\) of Rauzy points defined inductively as follows:
\[V_{-1} =\{\mathbf{0}\},\text{ and for }i\geq 0,\] \[V_{i}^{(0)} =\{\mathbf{x}+\mathbf{b}_{i}^{(0)}\,|\,\mathbf{x}\in\bigcup_{j=- 1}^{i-1}V_{j}\},\] \[V_{i}^{(1)} =\{\mathbf{x}+\mathbf{b}_{i}^{(1)},\mathbf{x}+\mathbf{b}_{i}^{(0) }+\mathbf{b}_{i}^{(1)}\,|\,\mathbf{x}\in\bigcup_{j=-1}^{i-1}V_{j}\}, \tag{12}\] \[V_{i}^{(2)} =\{\mathbf{x}+\mathbf{b}_{i}^{(2)},\mathbf{x}+\mathbf{b}_{i}^{(0 )}+\mathbf{b}_{i}^{(2)},\mathbf{x}+\mathbf{b}_{i}^{(1)}+\mathbf{b}_{i}^{(2)}\,| \,\mathbf{x}\in\bigcup_{j=-1}^{i-1}V_{j}\},\] \[V_{i} =\bigcup_{j=0}^{2}V_{i}^{(j)}. \tag{11}\]
We have the following result.
**Theorem 1**.: _The set of all Rauzy points is the union of all A-layers at all levels._
Before we prove the theorem, let us show how we can use this theorem to construct Rauzy fractal and its boundary. For any \(\mathbf{x}\in V_{i-1}\), the following six points are in \(V_{i}\):
\[\mathbf{x}^{(0)} =\mathbf{x}+\mathbf{b}_{i}^{(0)} \in V_{i}^{(0)}, \tag{14}\] \[\mathbf{x}^{(1)} =\mathbf{x}+\mathbf{b}_{i}^{(1)} \in V_{i}^{(1)},\] (15) \[\mathbf{x}^{(2)} =\mathbf{x}+\mathbf{b}_{i}^{(0)}+\mathbf{b}_{i}^{(1)} \in V_{i}^{(1)},\] (16) \[\mathbf{x}^{(3)} =\mathbf{x}+\mathbf{b}_{i}^{(2)} \in V_{i}^{(2)},\] (17) \[\mathbf{x}^{(4)} =\mathbf{x}+\mathbf{b}_{i}^{(0)}+\mathbf{b}_{i}^{(2)} \in V_{i}^{(2)},\] (18) \[\mathbf{x}^{(5)} =\mathbf{x}+\mathbf{b}_{i}^{(1)}+\mathbf{b}_{i}^{(2)} \in V_{i}^{(2)}. \tag{13}\]
These points forms a hexagon around \(\mathbf{x}\) (Figure 2(a)). We will call this hexagon as the **cell** of \(\mathbf{x}\). The point \(\mathbf{x}^{(j)}\), \(j=0,\cdots,5\), is a **child** of its **parent**\(\mathbf{x}\). Theorem 1 implies that Rauzy fractal can be filled recursively by tracing the genealogy of the Rauzy points, starting from the origin \(\mathbf{0}\). To plot the boundary of Rauzy fractal, we trace only the descendents that lie outside of the cells of their ancestors. These points are called the **boundary** points. (To speed up the computation, we can only check whether a points lie inside of the cell of its grandparent.) At each level of layers, we produce the children only for the boundary points, and they are clustered into cells that are uniformly distributed. Figure 2(b) shows all points in A-layers up to the level \(3\). In fact, they are the first \(274\) Rauzy points, including the origin. The cells are only drawn for the points in A-layer at the level \(3\). The interior points in cells are all the points in A-layer up to the level \(2\). We can see that there is no conflict on the geneology of points: no two points in the same level of A-layer never enjoy a child-parent relationship.
Figure 3. Rauzy points in Rauzy fractal. The \(x\), \(y\)-axis are equally scaled.
Figure 3(a) shows the Rauzy points (filled-dots) in A-layers at the level 3. The hexagons are the cells of boundary points in the previous layers. We can plot only the boundary points at each layer to determine the boundary of Rauzy fractal. Figure 2(b) shows all boundary points in the A-layer at the level 4 and the cells of all boundary points in the previous layers. One can see that the boundary point at the top layer forms the boundary of the Rauzy fractal. Figure 3(c) shows the boundary points at the level 5, and Figure 3(d) shows them with Rauzy fractal. One might wonder why there seems to be several _estuaries_ (rivers that meet the ocean), if we put an analogy of Rauzy fractal as the "land" and its complement as an "ocean". This phenomenon does not mean that the Rauzy fractal has a vacuous "valley". It is a consequence of discarding non-boundary points at each level. To reduce the computational complexity, it is inevitable to disregard some cells when checking the interior points. As we observed in Figure 2(b), all Rauzy points are almost-uniformly distributed. The shape of the boundary of the Rauzy fractal in Figure 3(c) is optimal in our setting. To remove all _estuaries_, we should consider all cells at each level of
Figure 4. The boundary Rauzy points in A-layers and the cells.
A-layers. This is essentially the same as the conventional way of drawing the Rauzy fractal, and requires the same amount of computational complexity.
Let us prove Theorem 1. We will prove the following holds for all \(i\geq 0\):
\[V_{i}=R_{3i+3}-\{\mathbf{b}_{3i+3}\}. \tag{19}\]
Since every Rauzy point \(\mathbf{x}_{l}\) is uniquely identified with the length of corresponding substring \([\mathbf{w}_{l}]\) of a tribonacci word, we define the **length** of the Rauzy point \(\mathbf{x}_{l}\) as the length of its subword \([\mathbf{w}_{l}]\):
\[L(\mathbf{x}_{l})=L([\mathbf{w}_{l}]).\]
The following lemma is the key to the proof of Equation (3).
Lemma 1.: _If \(\mathbf{x}\) is a Rauzy point satisfying_
\[L(\mathbf{x})\leq L(\mathbf{b}_{n-1})+L(\mathbf{b}_{n-2}), \tag{20}\]
_then \(\mathbf{x}+\mathbf{b}_{n}\) is also a Rauzy point._
Proof.: Let \([\mathbf{w}]\) be the word corresponding to the Rauzy point \(\mathbf{x}\). From the condition of the lemma, \([\mathbf{w}]\) is a substring of the word \([\mathbf{a}_{n-1}][\mathbf{a}_{n-2}]\). Thus the word \([\mathbf{a}_{n}][\mathbf{w}]\) is a substring of \([\mathbf{a}_{n+1}]\), and it contains \([\mathbf{a}_{n}]\). Therefore \(\mathbf{x}+\mathbf{b}_{n}\) is a Rauzy point.
Let us prove Equation (3) by the induction. Obviously, \(V_{-1}=R_{0}-\{\mathbf{b}_{0}\}=\{\mathbf{0}\}\). We get the first six Rauzy points by letting \(\mathbf{x}=\mathbf{0}\) in Equations (3)-(3). Thus \(V_{0}=R_{3}-\{\mathbf{b}_{3}\}\). This shows that Equation (3) holds for \(i=-1,0\).
Let us assume that Equation (3) holds for \(i-2\) and \(i-1\) for \(i\geq 1\). We will show that each set \(V_{i}^{(j)}\) is a subset of \(R_{3i+3}-\{\mathbf{b}_{3i+3}\}\), and all Rauzy points \(\mathbf{x}\) satisfying \(L(\mathbf{b}_{3i})\leq L(\mathbf{x})<L(\mathbf{b}_{3i+3})\) appears in one of \(V_{i}^{(j)}\).
To help understanding, we will use Figure 5, which shows the first 51 Rauzy points with the origin \(\mathbf{x}\). It shows the general configurations of children and grand
Figure 5. The first 43 Rauzy points in A-layers up to the level 2 are shown with numbered indices. The cells are represented by hexagons.
children of \(\mathbf{x}\). We can see that some points have two parents. For any choice of \(\mathbf{x}\in V_{i-2}\), the ancestries of all points are identical. The inclusions and exclusions of points to the cells of their ancestors may differ by the choice of \(\mathbf{x}\), but our proof does not depend on it. We assume \(\mathbf{x}\in V_{i-2}\) and \(\mathbf{x}_{0},\cdots,\mathbf{x}_{5}\in V_{i-1}\) be the children of \(\mathbf{x}\) obtained by Equations (3)-(3). We also assume that \(\mathbf{x}_{0},\cdots,\mathbf{x}_{5}\) are Rauzy points. We will show that
\[\mathbf{x}_{6},\cdots,\mathbf{x}_{12} \in V_{i}^{(0)}, \tag{22}\] \[\mathbf{x}_{13},\cdots,\mathbf{x}_{25} \in V_{i}^{(1)},\] (23) \[\mathbf{x}_{26},\cdots,\mathbf{x}_{42} \in V_{i}^{(2)}. \tag{21}\]
The points in Equation (3) are obtained by adding \(\mathbf{b}_{i}^{(0)}\):
\[\mathbf{x}_{6}=\mathbf{x}+\mathbf{b}_{i}^{(0)},\quad\mathbf{x}_{j+7}= \mathbf{x}_{j}+\mathbf{b}_{i}^{(0)}\text{ for }0\leq j\leq 5.\]
The assumption \(\mathbf{x}\in V_{i-1}\) implies \(L(\mathbf{x})<L(\mathbf{b}_{i-1}^{(0)})\). Thus we can use Equations (3)-(3) to show that \(\mathbf{x}\) and \(\mathbf{x}_{0},\cdots,\mathbf{x}_{4}\) satisfies the inequality (3):
\[L(\mathbf{x}) <L(\mathbf{b}_{3i-3}) <L(\mathbf{b}_{3i-2}),\] \[L(\mathbf{x}_{0}) <L(\mathbf{b}_{3i-3})+L(\mathbf{b}_{3i-3}) <L(\mathbf{b}_{3i-1}),\] \[L(\mathbf{x}_{1}) <L(\mathbf{b}_{3i-3})+L(\mathbf{b}_{3i-2}) <L(\mathbf{b}_{3i-1}),\] \[L(\mathbf{x}_{2}) <L(\mathbf{b}_{3i-3})+L(\mathbf{b}_{3i-2})+L(\mathbf{b}_{3i-3}) <L(\mathbf{b}_{3i-1})+L(\mathbf{b}_{3i-2}),\] \[L(\mathbf{x}_{3}) <L(\mathbf{b}_{3i-3})+L(\mathbf{b}_{3i-1}) <L(\mathbf{b}_{3i-1})+L(\mathbf{b}_{3i-2}),\] \[L(\mathbf{x}_{4}) <L(\mathbf{b}_{3i-3})+L(\mathbf{b}_{3i-1})+L(\mathbf{b}_{3i-3}) \leq L(\mathbf{b}_{3i-1})+L([\mathbf{a}_{3i-2}]).\]
Therefore, \(\mathbf{x}_{6},\cdots,\mathbf{x}_{11}\) are Rauzy points by the lemma. Equation (3) shows that
\[L(\mathbf{x}_{5})\leq L(\mathbf{b}_{3i-6})+L(\mathbf{b}_{3i-1})+L(\mathbf{b}_{ 3i-2}).\]
This does not immediately imply the inequality (3). Meanwhile, \(\mathbf{x}_{12}\) also satisfies
\[\mathbf{x}_{12}=\mathbf{x}+\mathbf{b}_{i-1}^{(1)}+\mathbf{b}_{i-1}^{(2)}+ \mathbf{b}_{i}^{(0)}=\mathbf{x}+\mathbf{b}_{i}^{(1)}.\]
Therefore \(\mathbf{x}_{12}\) is also a Rauzy point by the lemma. The point \(\mathbf{x}_{12}\) is the longest Rauzy point among its siblings. Since the longest length for \(\mathbf{x}\) is \(L(\mathbf{b}_{i-1}^{(0)})-1\),
\[\max L(\mathbf{x}_{12})=L(\mathbf{b}_{i-1}^{(0)})+L(\mathbf{b}_{i}^{(1)})-1<L( \mathbf{b}_{i}^{(2)}).\]
Thus we proved that \(V_{i}^{(0)}\subset R_{3i+2}\subset R_{3i+3}-\{\mathbf{b}_{3i+3}\}\).
Next, we show that points in Equation (3) are Rauzy points. On the other hands, they are obtained by adding \(\mathbf{b}_{i}^{(1)}\) to the previous points:
\[\mathbf{x}_{j+13}=\mathbf{x}_{j}+\mathbf{b}_{i}^{(1)},\quad 0\leq j\leq 12.\]
From Equations (3) - (3), we can easily check that
\[L(\mathbf{x}_{j})\leq L(\mathbf{b}_{i-1}^{(2)})+L(\mathbf{b}_{i}^{(0)})\text{ for }0\leq j\leq 9.\]
However, this inequality is not immediate obvious for the following three points.
\[L(\mathbf{x}_{10}) =L(\mathbf{x})+L(\mathbf{b}_{i-1}^{(2)})+L(\mathbf{b}_{i}^{(0)}),\] \[L(\mathbf{x}_{11}) =L(\mathbf{x})+L(\mathbf{b}_{i-1}^{(0)})+L(\mathbf{b}_{i-1}^{(2)} )+L(\mathbf{b}_{i}^{(0)}),\] \[L(\mathbf{x}_{12}) =L(\mathbf{x})+L(\mathbf{b}_{i-1}^{(1)})+L(\mathbf{b}_{i-1}^{(2)} )+L(\mathbf{b}_{i}^{(0)}).\]
In fact, these points are obtained by adding \(\mathbf{b}_{i}^{(2)}\) to the previous Rauzy points.
\[\mathbf{x}_{23} =\mathbf{x}+\mathbf{b}_{i}^{(2)},\] \[\mathbf{x}_{24} =\mathbf{x}+\mathbf{b}_{i-1}^{(0)}+\mathbf{b}_{i}^{(2)},\] \[\mathbf{x}_{25} =\mathbf{x}+\mathbf{b}_{i-1}^{(1)}+\mathbf{b}_{i}^{(2)}.\]
Therefore \(\mathbf{x}_{23},\mathbf{x}_{24},\mathbf{x}_{25}\) are Rauzy points by the lemma. The maximum length of \(\mathbf{x}_{25}\) is
\[\max L(\mathbf{x}_{25})=L(\mathbf{b}_{i-1}^{(0)})+L(\mathbf{b}_{i-1}^{(1)})+L (\mathbf{b}_{i}^{(2)})-1,\]
and it is strictly less than \(L(\mathbf{b}_{i+1}^{(0)})\). Thus \(V_{i}^{(1)}\subset R_{3i+3}-\{\mathbf{b}_{3i+3}\}\).
Using Equations (3) - (3), we can apply the lemma to the points in Equation (3) without any exception. The maximum length of the longest Rauzy point is \(\mathbf{x}_{42}\) is
\[\max L(\mathbf{x}_{42}) =L(\mathbf{b}_{i-1}^{(0)})+L(\mathbf{b}_{i-1}^{(1)})+L(\mathbf{b} _{i-1}^{(2)})+L(\mathbf{b}_{i}^{(1)})+L(\mathbf{b}_{i}^{(2)})-1\] \[=L(\mathbf{b}_{i}^{(0)})+L(\mathbf{b}_{i}^{(1)})+L(\mathbf{b}_{i }^{(2)})-1\] \[=L(\mathbf{b}_{i+1}^{(0)})-1.\]
Therefore, \(V_{i}^{(2)}\subset R_{3i+3}-\{\mathbf{b}_{3i+3}\}\).
So far, we showed that
\[V_{i}^{(0)}\cup V_{i}^{(1)}\cup V_{i}^{(2)}\subset(R_{3i+3}-\{ \mathbf{b}_{3i+3}\})-(R_{3i}-\{\mathbf{b}_{3i}\})\]
Let us show that the opposite holds too. First, suppose that \(\mathbf{x}\) is a Rauzy point satisfying
\[L(\mathbf{b}_{i}^{(0)})\leq L(\mathbf{x})<L(\mathbf{b}_{i}^{(1)}).\]
Then the word \([\mathbf{w}]\) corresponding to \(\mathbf{x}\) is of the form
\[[\mathbf{w}]=[\mathbf{a}_{3i}][\mathbf{w}^{\prime}] \tag{24}\]
where \([\mathbf{w}^{\prime}]\) is a substring of the word \([\mathbf{a}_{3i-1}][\mathbf{a}_{3i-2}]\). Therefore the word \([\mathbf{w}^{\prime}]\) is a subword in the tribonacci word \([\mathbf{a}_{3i}]\), and the corresponding Rauzy points \(\mathbf{x}^{\prime}\) lies in \(V_{i-1}\). Since Equation (3) implies
\[\mathbf{x}=\mathbf{x}^{\prime}+\mathbf{b}_{i}^{(0)},\]
we havev \(\mathbf{x}\in V_{i}^{(0)}\). This is the point \(\mathbf{x}^{(0)}\) in Equation (3)
Next, suppose that \(\mathbf{x}\) is a Rauzy point satisfying
\[L(\mathbf{b}_{i}^{(1)})\leq L(\mathbf{x})<L(\mathbf{b}_{i}^{(2)}).\]
Then the word \([\mathbf{w}]\) corresponding \(\mathbf{x}\) is of the form
\[[\mathbf{w}]=[\mathbf{a}_{3i+1}][\mathbf{w}^{\prime}]\]
where \([\mathbf{w}^{\prime}]\) is a substring of \([\mathbf{a}_{3i}][\mathbf{a}_{3i-1}]\). If \(L([\mathbf{w}^{\prime}])<L([\mathbf{a}_{3i}])\), then the Rauzy point \(\mathbf{x}^{\prime}\) corresponding \([\mathbf{w}^{\prime}]\) lies in \(V_{i-1}\), and
\[\mathbf{x}=\mathbf{x}^{\prime}+\mathbf{b}_{i}^{(1)}.\]
This implies that \(\mathbf{x}\in V_{i}^{(1)}\). In fact, this point is \(\mathbf{x}^{(1)}\) in Equation (3). On the other hand, if \(L([\mathbf{a}_{3i}])\leq L([\mathbf{w}^{\prime}])\), then \([\mathbf{w}^{\prime}]\) is of the form
\[\mathbf{w}^{\prime}=[\mathbf{a}_{3i}][\mathbf{w}^{\prime\prime}],\]
where \([\mathbf{w}^{\prime\prime}]\) is a substring of \([\mathbf{a}_{3i-1}]\). In this case, the Rauzy point \(\mathbf{x}^{\prime\prime}\) for \(\mathbf{w}^{\prime\prime}\) lies in \(V_{i-1}^{(2)}\subset V_{i-1}\). Since
\[\mathbf{x}=\mathbf{x}^{\prime\prime}+\mathbf{b}_{i}^{(0)}+\mathbf{b}_{i}^{(1)},\]
we have \(\mathbf{x}\in V_{i}^{(1)}\). This point is \(\mathbf{x}^{(2)}\) in Equation (3).
Finally, suppose that \(\mathbf{x}\) satisfies
\[L(\mathbf{b}_{i}^{(2)})\leq L(\mathbf{x})<L(\mathbf{b}_{i+1}^{(0)}).\]
Then the Rauzy word \(\mathbf{w}\) for \(\mathbf{x}\) is of the form
\[[\mathbf{w}]=[\mathbf{a}_{3i+2}][\mathbf{w}^{\prime}]\]
where \([\mathbf{w}^{\prime}]\) is a substring of \([\mathbf{a}_{3i+1}][\mathbf{a}_{3i}]\). If \([\mathbf{w}^{\prime}]\) is a substring of \([\mathbf{a}_{3i+1}]\), then \(\mathbf{x}\) is the point \(\mathbf{x}^{(4)}\) or \(\mathbf{x}^{(5)}\) in Equations (3) and (3). If \(\mathbf{w}^{\prime}\) is a substring that is longer than \([\mathbf{a}_{3i+1}]\), then \(\mathbf{x}\) is the point \(\mathbf{x}^{(6)}\) in Equation (3). In either case, we have \(\mathbf{x}\in V_{i}^{(2)}\).
## 4. Rauzy fractal construction B
Now we present yet another way to construct the Rauzy fractal. Let \(\mathbf{u}_{i}\) be the 2-dimensional vector defined in Equation (3). We define the vector \(\mathbf{s}_{i}^{(j)}\) for \(i\geq 0\) and \(j=0,\ldots,6\) as follows:
\[\mathbf{s}_{0}^{(0)} =\mathbf{s}_{0}^{(2)}=\mathbf{s}_{0}^{(4)}=\mathbf{s}_{0}^{(6)}= \mathbf{u}_{0}, \tag{26}\] \[\mathbf{s}_{0}^{(1)} =\mathbf{s}_{0}^{(5)}=\mathbf{u}_{1},\quad\mathbf{s}_{0}^{(3)}= \mathbf{u}_{2},\text{ and for }i\geq 0,\] (27) \[\mathbf{s}_{i+1}^{(0)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)}+\mathbf{s}_{i} ^{(6)},\] \[\mathbf{s}_{i+1}^{(1)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)},\] \[\mathbf{s}_{i+1}^{(2)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)}+\mathbf{s}_{i} ^{(6)},\] \[\mathbf{s}_{i+1}^{(3)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}\] \[\mathbf{s}_{i+1}^{(4)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)}+\mathbf{s}_{i} ^{(6)},\] \[\mathbf{s}_{i+1}^{(5)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)},\] (28) \[\mathbf{s}_{i+1}^{(6)} =\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)}+\mathbf{s}_{i} ^{(6)}. \tag{25}\]
Let \(W_{-1}=\{\mathbf{0}\}\) and for each \(i\geq 0\), define the set \(W_{i}\) as follows:
\[W_{i}=\{\mathbf{x}+\sum_{k=0}^{j}\mathbf{s}_{i}^{(k)}\,|\,\mathbf{x}\in\bigcup_ {k=-1}^{i-1}W_{k},0\leq j\leq 6\}. \tag{29}\]
Then we have the following result.
**Theorem 2**.: _The set of all Rauzy points is the union all \(W_{i}\)._
Before we procede to the proof, let us visualize how Theorem 2 contruct Rauzy fractal and its boundary. To help understanding, we will use the terminologies that we defined in previous sections here in similar ways. The set \(W_{i}\) is the \(i\)**-layer of type B**, the point \(\mathbf{x}+\sum\mathbf{s}_{i}^{(k)}\) is the **child** of \(\mathbf{x}\), and the hexagon bounded by the outer six children is the **cell** of \(\mathbf{x}\). Figure 5(a) and 5(b) shows the points in the 1-layer and 2-layer respectively. Unlike to the layers of type A, each point in the \((i-1)\)-layer of type A is the parent to the _seven_ children points in the \(i\)-layer of type A.
The morphic pattern in Equations (4) - (4) is obtained by the following observation. The tribonacci word \([\mathbf{a}_{6}]\) is obtained from concatenating copies of \([\mathbf{a}_{3}]=[0102010]\) with trimmings at the right end:
\[[\mathbf{a}_{6}]=[\underbrace{0102010}_{0\text{ trim}}\Big{|}\underbrace{[010201 ]}_{1\text{ trim}}\Big{|}\underbrace{[0102010]}_{0\text{ trim}}\Big{|}\big{[} \underbrace{0102}_{3\text{ trim}}\big{]}\Big{|}\big{[}\underbrace{0102010}_{0 \text{ trim}}\Big{]}\Big{|}\big{[}\underbrace{0102010}_{1\text{ trim}}\Big{]} \big{|}\big{[}\underbrace{0102010}_{0\text{ trim}}\big{]} \tag{30}\]
The numbers of trims follows the pattern of letters in \([\mathbf{a}_{3}]\), except for the case of letter 2: we trim 3 letters instead of 2. That means, we follow the sequence 0102010 from left to right and trim the last 0, 1, or 3 characters from \([\mathbf{a}_{3}]\) to obtain substrings and concatenating them. The tribonacci \([\mathbf{a}_{9}]\) satisfies the same pattern. Each substring in \([\mathbf{a}_{6}]\) separated by \(|\) in Equation (4) become a new unit for the next trimming. For example, the first substring of \([\mathbf{a}_{9}]\) is obtained by trimming the last \([0102010]\) from \([\mathbf{a}_{6}]\) (not \([0]\)).
The exception at the letter 2 is essential to obtain the Rauzy fractal. Because of the exception at 2, we will call this pattern as _pseudo self-replicating_. Interestingly, removing this exception seems to give discrete tiling of \(\mathbb{R}^{2}\) with a compact domain. Tilings obtained by fully self-replicating pattern will be explored in the next section.
Theorem 2 implies that we can produce Rauzy points by the simple pattern from the sequence 0102010 using the three vectors \(\mathbf{u}_{0}\), \(\mathbf{u}_{1}\), \(\mathbf{u}_{2}\). The construction B also gives the boundary of Rauzy fractal. In fact, it gives the same figures as in Figure 4. We take only the boundary points at each B-layers and produce children points at the next layer. The only difference between the A-layers and the B-layers is that the B-layers contains children that are lie interior to the cells of their parent. For example the point \(\mathbf{x}_{6}\) in Figure 5(a) lies at the level 1 for the A-layer, but it appears at the level 0 for the B-layer.
The upshot for using the construction B is that the ingredients to produce Rauzy fractal and its boundary are essentially the three vectors \(\mathbf{u}_{0},\mathbf{u}_{1},\mathbf{u}_{2}\) and the self-replicating sequence 0102010. Although we need an exception at the letter 2 to draw the Rauzy fractal, it is much simpler than having six rules in Equations (3) -
Figure 6. Rauzy points generated by sets \(W_{i}\).
(3). Moreover, the construction A requires to prepare the Rauzy points \(\mathbf{b}_{i}^{(j)}\), where as the construction B requires none.
Now let us prove Theorem 2. We will show that the following holds for all \(i\geq 0\):
\[V_{i}\subset W_{i}\subset V_{i+1}^{(0)}. \tag{31}\]
In particular, we will show that the following equalities hold for all \(i\geq 0\):
\[\mathbf{s}_{i}^{(0)}=\mathbf{s}_{i}^{(2)}=\mathbf{s}_{i}^{(4)} =\mathbf{s}_{i}^{(6)} =\mathbf{b}_{i}^{(0)}, \tag{33}\] \[\mathbf{s}_{i}^{(1)}=\mathbf{s}_{i}^{(5)} =\mathbf{b}_{i}^{(1)}-\mathbf{b}_{i}^{(0)},\] \[\mathbf{s}_{i}^{(3)} =\mathbf{b}_{i}^{(2)}-\mathbf{b}_{i}^{(0)}-\mathbf{b}_{i}^{(1)}. \tag{32}\]
From Equations (4) - (4), we have
\[\mathbf{s}_{0}^{(0)}=\mathbf{s}_{0}^{(2)}=\mathbf{s}_{0}^{(4)} =\mathbf{s}_{0}^{(6)} =\mathbf{u}_{0}=\mathbf{b}_{0}^{(0)},\] \[\mathbf{s}_{0}^{(1)}=\mathbf{s}_{0}^{(5)} =\mathbf{u}_{1}=\mathbf{b}_{0}^{(1)}-\mathbf{b}_{0}^{(0)},\] \[\mathbf{s}_{0}^{(3)} =\mathbf{u}_{2}=\mathbf{b}_{0}^{(2)}-\mathbf{b}_{0}^{(0)}- \mathbf{b}_{0}^{(1)}.\]
Suppose that Equations (4) - (4) hold for all \(i=k\). Then from Equations (4) - (4), we have
\[\mathbf{s}_{k+1}^{(0)}=\mathbf{s}_{k+1}^{(2)}=\mathbf{s}_{k+1}^{ (4)}+\mathbf{s}_{k+1}^{(6)} =\mathbf{b}_{k}^{(0)}+\mathbf{b}_{k}^{(1)}+\mathbf{b}_{k}^{(2)} =\mathbf{b}_{k+1}^{(0)},\] \[\mathbf{s}_{k+1}^{(1)}=\mathbf{s}_{k+1}^{(5)} =\mathbf{b}_{k}^{(1)}+\mathbf{b}_{k}^{(2)} =\mathbf{b}_{k+1}^{(0)}-\mathbf{b}_{k+1}^{(0)},\] \[\mathbf{s}_{k+1}^{(3)} =\mathbf{b}_{k}^{(2)} =\mathbf{b}_{k+1}^{(2)}-\mathbf{b}_{k+1}^{(0)}-\mathbf{b}_{k+1}^{ (1)}\]
Therefore, we prove Equations (4) - (4). As a consequence, we have
\[\mathbf{s}_{i}^{(0)} =\mathbf{b}_{i}^{(0)},\] \[\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)} =\mathbf{b}_{i}^{(1)},\] \[\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)} =\mathbf{b}_{i}^{(0)}+\mathbf{b}_{i}^{(1)},\] \[\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)} =\mathbf{b}_{i}^{(2)},\] \[\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)} =\mathbf{b}_{i}^{(0)}+\mathbf{b}_{i}^{(2)},\] \[\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)} +\mathbf{s}_{i}^{(5)} =\mathbf{b}_{i}^{(1)}+\mathbf{b}_{i}^{(2)},\] \[\mathbf{s}_{i}^{(0)}+\mathbf{s}_{i}^{(1)}+\mathbf{s}_{i}^{(2)}+ \mathbf{s}_{i}^{(3)}+\mathbf{s}_{i}^{(4)}+\mathbf{s}_{i}^{(5)} +\mathbf{s}_{i}^{(6)} =\mathbf{b}_{i}^{(0)}+\mathbf{b}_{i}^{(1)}+\mathbf{b}_{i}^{(2)}= \mathbf{b}_{i+1}^{(0)}.\]
This proves Equation (4) and thus Theorem 2.
## 5. Self-replicating words and discrete tilings
Let \([\mathbf{w}]=[w_{0}\cdots w_{l}]\) be a word on \(d\)-letters. Let us denote \([\mathbf{w}_{0}]=[\mathbf{w}]\) and for \(0\leq i\leq l\),
\[[\mathbf{w}_{0}^{(i)}]=[w_{i}].\]
For \(n\geq 0\), the \(n\)**-th replicate** of \([\mathbf{w}]\), denoted by \([\mathbf{w}_{n}]\), is the word defined by
\[[\mathbf{w}_{n}]=[\mathbf{w}_{n}^{(0)}]\cdots[\mathbf{w}_{n}^{(l)}],\]
where for \(0\leq i\leq l\),
\[[\mathbf{w}_{n}^{(i)}]=[\mathbf{w}_{n-1}^{(0)}]\cdots[\mathbf{w}_{n-1}^{(l-w_{ i})}].\]
**Definition 3**.: Let \(\mathbf{v}_{n}\) be the word vector for \(\mathbf{w}_{n}\). A word \(\mathbf{w}\) is called **self-replicating** if the limit \(\mathbf{v}_{\infty}=\lim\mathbf{v}_{n}/\|\mathbf{v}_{n}\|\) exists.
We will define the domains for self-replicating words as follows, in a similar way to Pisot domains, but using the ideas from the construction B. Let \(P\) the hyperplane orthogonal to \(\mathbf{v}_{\infty}\). Let \(\mathbf{r}_{0},\cdots,\mathbf{r}_{d-2}\) be the orthonormal basis on \(P\) obtained by Gram-Schmidt process on \(\mathbf{v}_{\infty},\mathbf{e}_{0},\cdots,\mathbf{e}_{d-2}\). Let \(\pi:\mathbb{R}^{d}\to P\) be the orthogonal projection, and \(\mathbf{u}_{i}\), \(i=0,\cdots,n-1\), be the \((d-1)\)-dimensional vectors as follows.
\[\mathbf{u}_{i}=(\pi(\mathbf{e}_{i})\circ\mathbf{r}_{0},\cdots,\pi(\mathbf{e}_ {i})\circ\mathbf{r}_{n-2}). \tag{34}\]
We define the vectors \(\mathbf{s}_{i}^{(j)}\) similar to Equations (4) - (4): for \(j=0,\cdots,l\),
\[\mathbf{s}_{0}^{(j)}=\mathbf{u}_{j},\quad\mathbf{s}_{i+1}^{(j)}=\sum_{k=0}^{ l-a_{j}}\mathbf{s}_{i}^{(k)}\text{ for }i\geq 0. \tag{35}\]
Finally, using the same definition of the set \(W_{i}\) in Equation (4), define the \((d-1)\)-dimensional domain \(W\) as
\[W=\bigcup_{i=-1}^{\infty}W_{i}. \tag{36}\]
Table 2 shows some examples of self-replicating words on 3 letters and their limits \(\mathbf{v}_{\infty}\). Figure 7 shows the domain \(W\) for each elf-replicating words. One can see that all domains are compact, and tiles \(\mathbb{R}^{2}\) discretely. Once we have \((n-1)\)-dimensional orthonormal basis \(\mathbf{u}_{i}\), \(i=0,\cdots,n-2\), all points in such domain can be obtained by arithmetic summations of vectors as in Equation (5).
The domains in Figure 7 tiles \(\mathbb{R}^{2}\). From \(\mathbf{u}_{0},\mathbf{u}_{1},\mathbf{u}_{2}\) in Equation (5), the three vectors
\[\mathbf{u}_{01}=\mathbf{u}_{0}-\mathbf{u}_{1},\quad\mathbf{u}_{12}=\mathbf{u }_{1}-\mathbf{u}_{2},\quad\mathbf{u}_{02}=\mathbf{u}_{0}-\mathbf{u}_{2} \tag{37}\]
are the vectors that translates the domains into hexagonal tilings. For example, we can translate "The Fish" in Figure 7a by linear combinations of these three vectors \((1.27,-0.48),(-0.34,1.36),(0.93,0.88)\). Figure 8a shows the discrete tilings of Figures 7a, 7e. At this point, we can conjecture that the following statement is true.
**Conjecture**.: _Let \([\mathbf{w}]\) be a self-replicating word on three letters. Then the domain \(W\) defined in Equation (5) tiles \(\mathbb{R}^{2}\) discretely in the following sense: for the vectors \(\mathbf{u}_{01}\), \(\mathbf{u}_{12}\), and \(\mathbf{u}_{02}\) in Equation (5) and for \(3\)-dimensional integral vector
\begin{table}
\begin{tabular}{|c|c|} \hline
**w** & \(\mathbf{v}_{\infty}\) \\ \hline \hline
0120 & (0.756, 0.521, 0.397) \\ \hline
0102 & (0.850, 0.462, 0.251) \\ \hline
0201 & (0.831, 0.259, 0.492) \\ \hline
0102010 & (0.861, 0.447, 0.242) \\ \hline
1201 & (0.381, 0.717, 0.584) \\ \hline
2010 & (0.771, 0.359, 0.526) \\ \hline \end{tabular}
\end{table}
Table 2. Self replicating words and their limits
\(\mathbf{c}=(c_{01},c_{12},c_{02})\), define \(W_{\mathbf{c}}=W+\sum_{i<j}c_{ij}\mathbf{u}_{ij}\). Then \(W_{\mathbf{c}_{1}}\cap W_{\mathbf{c}_{2}}=\emptyset\) unless \(\mathbf{c}_{1}=\mathbf{c}_{2}\) and \(\mathbb{R}^{2}=\bigcup_{\mathbf{c}\in\mathbb{Z}^{3}}W_{\mathbf{c}}\)._
Figure 8. The discrete tiling of self-replicating domains,
Figure 7. The domains for self-replicating words
## 6. Conclusion
The Rauzy fractal attracted mathematical interests for many years, and its characteristics have been generalized into many areas of advanced research. We studied yet another characteristics of the Rauzy fractal with an elementary view point, and showed that this characteristic can be generalized for creating another tiling scheme for the two dimensional Euclidean space. We expect that our final conjecture can be further generalized to higher dimensions.
|
2306.10967 | Simulation of the dynamics of gas mixtures during plasma processing in
the C75 Cavity | Plasma processing using a mixture of noble gas and oxygen is a technique that
is currently being used to reduce field emission and multipacting in
accelerating cavities. Plasma is created inside the cavity when the gas mixture
is exposed to an electromagnetic field that is generated by applying RF power
through the fundamental power or higher-order mode couplers. Oxygen ions and
atomic oxygen are created in the plasma which breaks down the hydrocarbons on
the surface of the cavity and the residuals from this process are removed as
part of the process gas flow. Removal of hydrocarbons from the surface
increases the work function and reduces the secondary emission coefficient.
This work describes the initial results of plasma simulation, which provides
insight into the ignition process, distribution of different species, and
interactions of free oxygen and oxygen ions with the cavity surfaces. The
simulations have been done with an Ar/O2 plasma using COMSOL multiphysics.
These simulations help in understanding the dynamics and control of plasma
inside the cavity and the exploration of different gas mixtures. | N. K. Raut, T. Ganey, P. Dhakal, T. Powers | 2023-06-19T14:30:43Z | http://arxiv.org/abs/2306.10967v1 | # Simulation of the Dynamics of Gas Mixtures During Plasma Processing in the C75 Cavity +
###### Abstract
Plasma processing using a mixture of noble gas and oxygen is a technique that is currently being used to reduce field emission and multipacting in accelerating cavities. Plasma is created inside the cavity when the gas mixture is exposed to an electromagnetic field that is generated by applying RF power through the fundamental power or higher-order mode couplers. Oxygen ions and atomic oxygen are created in the plasma which breaks down the hydrocarbons on the surface of the cavity and the residuals from this process are removed as part of the process gas flow. Removal of hydrocarbons from the surface increases the work function and reduces the secondary emission coefficient [1]. This work describes the initial results of plasma simulation, which provides insight into the ignition process, distribution of different species, and interactions of free oxygen and oxygen ions with the cavity surfaces. The simulations have been done with an Ar/\(O_{2}\) plasma using COMSOL(r) multiphysics. These simulations help in understanding the dynamics and control of plasma inside the cavity and the exploration of different gas mixtures.
## 1 Introduction
Field emission in superconducting radio-frequency (SRF) cavities leads to thermal instability and is one of the prime factors in limiting the performance of accelerating cavities [2]. Hydrocarbons (C\({}_{x}\)H\({}_{y}\)) build-up on the surface of the cavity enhance multimetors and field emission [3]. Particulate contamination is the major cause of field emission. Plasma helps to break down organic bonds (C=C, C-C, C-O, C-H) from the contamination [4] and increases the work function (\(\phi\)) and secondary emission yield (\(<SEY>\)) of the niobium [5]. Recently, promising improvement on the onset of field emission and increase in usable accelerating gradient on SRF cavities [3, 6, 7].
In the plasma processing of the cavity, the reactive ions and species such as O\({}^{-}\), O\({}^{+}\), O, O\({}_{2}^{+}\) play an essential role to crack the hydrocarbons from the surface of the cavity forming residual byproducts such as CO, H\({}_{2}\)O, CO\({}_{2}\), and etc. In the experimental settings, it is challenging to get the information about plasma and other species' growth in a fraction of a second as well as the interaction between the species with the cavity's surface. Furthermore, the generation of plasma with the optimum proportion of the gases mixture (n% of inert gas & (100-n)% of oxygen) requires careful control of the gas mixture and plasma dynamics. Experimentally, optical camera attached on cavity opening is used to observe the plasma ignition and its evolution. Simulation of plasma ignition, dynamics with respect to the partial pressure of gas and rf power could be a useful tool to design the experimental setup in complex cryomodules where visual observations are not available.
## 2 Computational Model
In this study, we have chosen two quadrupole mode resonating at 2656 MHz and 2724 MHz of the C75 cavity [8] to ignite plasma on the center and end cells of the cavity. COMSOL Multiphysics has been implemented to study the interaction between the cavity's modes and the gaseous mix
Figure 1: Electric field profile of two TE211 modes of interest on the axis of the C75 cavity, (a) 2656 MHz and (b) 2724 MHz.
ture. Here, we report simulation results on electron number density (N\({}_{e}\)), temperature (T\({}_{e}\)), S-parameters of the cavity (\(S11\&S21\)), and dynamics of the oxygen species on the axis & surface of the cavity.
Figure 1 shows the electric field distribution of two TE211 modes on the axis of the C75 cavity used for the plasma simulation. A detailed discussion about the C75 cavity is done in Ref. [8]. Figure 1 (a) shows the 2656 MHz mode profile with most of its field around the center cells of the cavity. The second mode is the 2724 MHz mode (see Fig. 1 (b)), which has the highest field on the end cell. In the simulation, the electric field of the cavity is excited via a coaxial input port. For the sake of simulation time, 2D axis-symmetric part of the cavity has been used.
Oxygen plasma is highly reactive due to high concentrations of active particles and electronically excited metastable states. In this simulation, a gas mixture of 94 % Ar and 6 % of O\({}_{2}\) is set within the cavity domain. When there is an interaction between the electromagnetic field and gas molecules, electrons absorb energy from the electric field and lose it to the gas molecules. During this repetitive collision process between the excited electrons and the gas molecules, highly reactive ions and metastable species along with the electrons and neutral atoms or molecules are produced. This phase of gas inside the cavity is called plasma ignition. The E-field profiles that are shown in Fig. 1 (a) & (b) ignite the plasma, respectively, at the center and end cell of the cavity.
The COMSOL Multiphysics for plasma simulation solves drift-diffusion equations to calculate the transport properties of the electron and non-electronic species. The plasma chemistry is described and represented by the different reactions and divided into Tables 1 and 2, representing the electron impact reactions of Ar and O\({}_{2}\). We have used 5 such reactions of Ar and 35 of O\({}_{2}\). These reactions are taken from the database system LxCat [10]. The reactions are mainly elastic, attachment, excitation, and ionization. The attachment, excitation, and ionization reactions are used in plasma processing to react with the hydrocarbons on the cavity surface.
In the elastic reactions, there is an energy exchange between the electron and Ar or O\({}_{2}\) molecules. However, no new species are created in this process. In the attachment reactions, electrons can be taken by the species to form such reactive species. In addition, reactions like ionization can also produce reactive species. In these reactions, an interacting electron knocks out an electron from participating species by releasing energy. Ionic species like Ar\({}^{\star}\), O\({}_{2}^{+}\), and O\({}^{+}\) are created in the processes. Moreover, the electrons provide their energy to the ground state Ar and O\({}_{2}\) resulting in new excited species such as Ars, O\({}_{2}a1d\), O\({}_{2}b1s\), O\({}_{2}\)(45), O1d, O1s, and O.
## 3 Results and Discussion
### Electron Number Density and Temperature
Two of the important parameters of the plasma simulation are growth in the electron number density (N\({}_{e}\) ) and heating of the gas molecules (T\({}_{e}\)). An increase in the interaction between the gas molecules and the electric field of the cavity results in a growth of the number of free electrons and a rise in temperature. Figure 2 shows the change in the N\({}_{e}\) and T\({}_{e}\)
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**\#** & **Formula** & **Type** & \(\Delta\epsilon\) (eV) \\ \hline
1 & \(e+Ar=>e+Ar\) & Elas.\({}^{1}\) & 0 \\
2 & \(e+Ar=>e+Ar\) & Ext.\({}^{2}\) & 11.5 \\
3 & \(e+Ars=>e+Ar\) & Ext.\({}^{2}\) & -11.5 \\
4 & \(e+Ar=>2e+Ar^{+}\) & Ion.\({}^{3}\) & 15.8 \\
5 & \(e+Ars=>2e+Ar^{+}\) & Ion.\({}^{3}\) & 4.427 \\ \hline \end{tabular}
\({}^{1}\)Elastic, \({}^{2}\)Excitation, \({}^{3}\)Ionization
\end{table}
Table 1: Argon Reactions [10]
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**\#** & **Formula** & **Types** & \(\Delta\epsilon\) (eV) \\ \hline
1 & \(e+O_{2}=>e+O_{2}\) & Elas.\({}^{1}\) & 0 \\
2 & \(e+O_{2}=>O+O^{-}\) & Att.\({}^{4}\) & – \\
3 & \(e+O_{2}=>e+O_{2}\) & Ext.\({}^{2}\) & 0.02 \\
4 & \(e+O_{2}=>e+O_{2}\) & Ext.\({}^{2}\) & 0.19 \\
5 & \(e+O_{2}=>e+O_{2}\) & Ext.\({}^{2}\) & 0.38 \\
6 & \(e+O_{2}=>e+O_{2}\) & Ext.\({}^{2}\) & 0.57 \\
7 & \(e+O_{2}=>e+O_{2}\) & Ext.\({}^{2}\) & 0.75 \\
8 & \(e+O_{2}=>e+O_{2}a1d\) & Ext.\({}^{2}\) & 0.977 \\
9 & \(e+O_{2}a1d=>e+O_{2}\) & Ext.\({}^{2}\) & -0.977 \\
10 & \(e+O_{2}=>e+O_{2}b1s\) & Ext.\({}^{2}\) & 1.627 \\
11 & \(e+O_{2}b1s\) & \(>e+O_{2}\) & Ext.\({}^{2}\) & -1.627 \\
12 & \(e+O_{2}=>e+O_{2}(45)\) & Ext.\({}^{2}\) & 4.5 \\
13 & \(e+O_{2}(45)=>e+O_{2}\) & Ext.\({}^{2}\) & -4.5 \\
14 & \(e+O_{2}=>e+O+O\) & Ext.\({}^{2}\) & 6.0 \\
15 & \(e+O_{2}=>e+O+O1d\) & Ext.\({}^{2}\) & 8.4 \\
16 & \(e+O_{2}=>e+O+O1s\) & Ext.\({}^{2}\) & 9.97 \\
17 & \(e+O_{2}=>2e+O_{2}^{+}\) & Ion.\({}^{3}\) & 12.06 \\
18 & \(e+O_{2}a1d\) & \(>e+O_{2}a1d\) & Elas.\({}^{1}\) & 0 \\
19 & \(e+O_{2}a1d=>e+O+O\) & Ext.\({}^{2}\) & 5.02 \\
20 & \(e+O_{2}a1d=>2e+O_{2}^{+}\) & Ion.\({}^{3}\) & 11.09 \\
21 & \(e+O_{2}b1s=>e+O+O\) & Ext.\({}^{2}\) & 4.38 \\
23 & \(e+O_{2}b1s=>2e+O_{2}^{+}\) & Ion.\({}^{3}\) & 10.39 \\
24 & \(e+O_{2}(45)=>e+O+O\) & Ext.\({}^{2}\) & 1.5 \\
25 & \(e+O_{2}(45)=>2e+O_{2}^{+}\) & Ion.\({}^{3}\) & 7.58 \\
26 & \(e+O=>e+O\) & Elas.\({}^{1}\) & 0 \\
27 & \(e+O=>e+O1d\) & Ext.\({}^{2}\) & 1.968 \\
28 & \(e+O1d=>e+O\) & Ext.\({}^{2}\) & -1.968 \\
29 & \(e+O=>e+O1s\) & Ext.\({}^{2}\) & 4.192 \\
30 & \(e+O1s=>e+O\) & Ext.\({}^{2}\) & -4.192 \\
31 & \(e+O=>2e+O^{+}\) & Ion.\({}^{3}\) & 13.192 \\
32 & \(e+O1d=>e+O1s\) & Ext.\({}^{2}\) & 2.224 \\
33 & \(e+O1d=>2e+O^{+}\) & Ion.\({}^{3}\) & 11.224 \\
34 & \(e+O1s=>2e+O^{+}\) & Ion.\({}^{3}\) & 9 \\
35 & \(e+O_{2}+O_{2}=>O_{2}+O_{2}^{-}\) & Att.\({}^{4}\) & 0 \\ \hline \end{tabular}
\({}^{1}\)Elastic, \({}^{2}\)Excitation, \({}^{3}\)Ionization, \({}^{4}\)Attachment
\end{table}
Table 2: Oxygen Electron Impact Reactions [10]
as functions of time. Here, the data are extracted on the axis of the cavity. To increase the resolution of the results only cell-to-cell calculations are included in the plots.
Figure 2 (a) and (b) displays (N\({}_{e}\), T\({}_{e}\)), for 2656 MHz and 2724 MHz mode of the cavity, respectively. In both cases, a significant increase in N\({}_{e}\) and T\({}_{e}\) is observed. In case of 2656 MHz, N\({}_{e}\) and T\({}_{e}\) increased from 10\({}^{14}\) m\({}^{-3}\) and 2.7 V to the maximum value of 1.4\(\times\)10\({}^{15}\) m\({}^{-3}\) and 6.6 V at the center cell of the cavity. However, at 2724 MHz, there is a rise in (N\({}_{e}\), T\({}_{e}\)) not only at the end cell but also in the first cell of the cavity (see Figure 2 (b)). In this mode, for a time greater than 1 microsecond, the growth of N\({}_{e}\) in the first cell is higher than that of the last cell. Similar built-up behavior in T\({}_{e}\) is also observed.
To understand the shifting behavior of T\({}_{e}\) from the end cell to the first cell at 2724 MHz, we have done simulations at different input powers from 2 - 10 W as shown in Fig. 3. The heating of the gas molecules on the end and the first cell of the cavity and their neighboring cells was observed. However, there is negligible change in T\({}_{e}\) on the center cell.
### S-parameters
For plasma to propagate inside the cavity, the plasma density (\(n_{p}\)) should be less than the critical density (\(n_{c}\)) due to the Debye shielding [11]. The value of the critical density is determined by the angular frequency of the cavity (\(\omega\)), electronic charge (\(e\)), and mass (\(m_{e}\)) and is calculated as;
\[n_{c}=\frac{\epsilon_{0}m_{e}^{2}\omega^{2}}{e^{2}} \tag{1}\]
In the experimental setting, one of the important parameters to track during plasma ignition is the change in the S-parameters (S11 and S21) of the cavity [12]. Plasma development changes the dielectric constant of the medium as [11]:
\[\epsilon_{r}=1-\frac{\omega_{p}^{2}}{\omega^{2}(1-i\frac{r}{\omega})} \tag{2}\]
where \(\omega_{p}\) and \(\omega\) are the frequency of the plasma and the electromagnetic field, respectively and \(\nu\) is the wavenumber. An increase in the electron number density (N\({}_{e}\)) was observed as the plasma starts to form, which increases plasma conductivity and hence the frequency. The increase in the
Figure 3: Electron temperature at center points of five cells of the cavity for 2724 MHz for powers 2 - 10 W. The red dots show the locations of the data point inside the cavity.
Figure 2: Electron number density (\(N_{e}\)) as a function of time for (a) 2656 MHz, and (b) 2724 MHz mode of the cavity. For visualization purpose the 2D-axis symmetric cavity is inserted in both plots.
plasma frequency in turn changes the dielectric constant of the cavity resulting in the increase of the reflection of the electromagnetic waves inside the cavity.
Figure 4 shows the cavity's S-parameters (S11 and S21) as a function of time for both modes used in this simulation. In both modes decrease in S21 by 5 dB during the plasma ignition is observed. During the ignition, as expected, there was an increase in the S11 parameters of the cavity. A similar trend of change in S-parameters of the C100 cavity is reported by [6]. The C100 cavity is 7 cells cavity and the change in S21 was 10-20 dB.
### Species Dynamics
In plasma processing, free oxygen radicals and ions play a crucial role in the breakdown of hydrocarbons from the surface of the cavity to molecules such as CO, CO\({}_{2}\), and H\({}_{2}\)O because of their highly reactive nature. An oxygen molecule reacts with a hydrocarbon as O\({}_{2}\) + C\({}_{x}\)H\({}_{y}\)\(\to CO_{2}\) + \(CO\)+\(H_{2}O\).
Figure 5 shows a change in O molecule, O\({}^{-}\), O\({}^{+}\), & O\({}_{2}^{+}\) ions and (O1d, O2a1Dg) metastable states of the oxygen at 10 ms of time for the two modes of interest. In both modes, we have observed an increase in all of the species mentioned above. Among all, there is a substantial increase in O\({}^{-}\) ions. Interestingly, in 2724 MHz mode, we have seen simultaneous growth of the species at the end and first cell of the cavity.
Calculations of species dynamics along the inner surface of the cavity are done and shown in Fig. 6. As expected, for the center cell plasma ignition, there was the growth of the (O, O1d, O\({}^{+}\), O\({}_{2}a1Dg\), and O\({}_{2}^{+}\)) along the circumference of the center cell of the cavity (see Fig. 6 (a)). Here, two peaks of species at two ends of the same cell of the cavity is due to the feature of the quadrupole mode. Similar to the shifting feature of species seen on the axis of the cavity, for 2724 MHz mode, the species was created on the circumferences of both the end and first cell of the cavity. This feature suggests that plasma processing at the end cell is more effective to clean cavities than that of center cell.
## 5 Conclusion
COMSOL multiphysics has been used to study plasma ignition and species growth in the C75 cavity. A mixture of 94:6 Ar and \(O_{2}\) is set inside the cavity domain and two TE211 modes are used to ignite plasma on the center and end cell of the cavity, respectively. A significant increase in the free electron number density and their heating shows the creation of the plasma. Moreover, we find a 5 dB decrease in the S21 parameter of the cavity during the plasma ignition. The study of oxygen molecules, ions, and their metastable
Figure 4: Change in the S-parameters of the cavity as a function of time for (a) 2656 MHz and (b) 2724 MHz.
Figure 5: Different species of oxygen (molecules, ions, & metastable states) on the axis of the cavity for (a) 2556 MHz and (b) 2772 MHz.
states on the axis and inner surface of the cavity suggests the end cell plasma ignition could be more effective than the center cell plasma ignition because of the simultaneous growth of species in both the first and end cell of the cavity. Our next step is to study plasma ignition on the second and fourth cell of the cavity and remodeling the EM fields due to the change in dielectric constant then repeat the plasma simulation with an initial condition of the final condition of the previous simulation.
This work could guide to study plasma ignition and control in any shape and size of accelerating cavities. It could also be a platform to understand species growth and their dynamics during plasma ignition.
## Acknowledgements
We would like to acknowledge Jefferson Lab SRF S&T department for support.
|
2310.03743 | The Un-Kidnappable Robot: Acoustic Localization of Sneaking People | How easy is it to sneak up on a robot? We examine whether we can detect
people using only the incidental sounds they produce as they move, even when
they try to be quiet. We collect a robotic dataset of high-quality 4-channel
audio paired with 360 degree RGB data of people moving in different indoor
settings. We train models that predict if there is a moving person nearby and
their location using only audio. We implement our method on a robot, allowing
it to track a single person moving quietly with only passive audio sensing. For
demonstration videos, see our project page:
https://sites.google.com/view/unkidnappable-robot | Mengyu Yang, Patrick Grady, Samarth Brahmbhatt, Arun Balajee Vasudevan, Charles C. Kemp, James Hays | 2023-10-05T17:59:55Z | http://arxiv.org/abs/2310.03743v2 | # The Un-Kidnappable Robot: Acoustic Localization of Sneaking People
###### Abstract
How easy is it to sneak up on a robot? We examine whether we can detect people using only the incidental sounds they produce as they, even when they try to be quiet. We collect a robotic dataset of high-quality 4-channel audio paired with 360\({}^{\circ}\) RGB data of people moving in different indoor settings. We train models that predict if there is a moving person nearby and their location using only audio. We implement our method on a robot, allowing it to track a single person moving quietly with only passive audio sensing. For demonstration videos, see our project page.
## I Introduction
Advances in mobile robots have led to such platforms becoming increasingly common in everyday settings. With this popularity comes a rise in the coexistence of robots with people. Nowadays, it is not uncommon to see a last-mile delivery robot roaming a city sidewalk, an industrial robot navigating a warehouse floor, or a cleaning robot vacuuming in a home. As demand for robotic applications grows, being able to recognize people in the robots' proximity is a vital task to ensure safety. Object recognition in general has been well examined for image data [1, 2, 3], where humans are one of the object categories. Previous works have also specifically investigated person detection [4, 5, 6, 7, 8] by detecting the presence of people as well as localizing them. The large amount of existing research in this topic highlights the universality of person detection across different robotic applications.
Most person detection methods use vision- and spatial-based sensors. These include RGB [11], depth [12], 2D lasers [13, 14], 3D LIDAR [7], or combinations for multi-modal methods [15, 16]. While multi-modal models are beneficial for domain adaptation and improved performance over uni-modals, these methods learn a joint representation across all sensors that do not allow flexibility for variable sensor inputs. These models are therefore not robust against failures in real-world applications where sensors can fail. Aside from these common sensors, fewer works have examined the use of audio for person detection. Existing works only use audio to estimate the direction of arrival (DOA) [17, 18, 19, 20] which does not satisfy our definition of person detection. Others that do perform full localization of sounds in general rely on an _active_ sound source like talking or a loudspeaker [21, 22, 23, 24].
We argue that the acoustics incidentally produced by people as they move around are an under-leveraged source of information that can be used for person detection. Unlike other sound localization methods, ours relies on _passive observation_ only. The person is not required to actively produce extraneous sounds like speaking or clapping and we do not employ sensors that create additional sounds like ultrasonic sensors [25] or use echolocation [26]. The incidental sounds we focus on are by nature noisy and are weak signals, which we demonstrate by showing how other sound localization methods [27, 28] often fail to detect the sound source on this type of audio.
Having an audio-only based method for person detection is an important step in the development of multi-modal person detection systems that are robust to failures. Should the sensors that many frameworks rely on fail or become unavailable (low-lit environments, occlusion handling, etc.), our method allows robots to fall back solely onto audio, a readily obtainable signal which is usually already onboard
Fig. 1: Can we detect where people are based only on the subtle sounds they incidentally produce when they move, even when they try to be quiet? We collect a dataset of high-quality audio paired with 360\({}^{\circ}\) RGB data with different participants in multiple indoor scenes. We train models to localize a moving person based on audio only and implement it on a robot.
most hardware setups. And when interacting with robots, people should not be expected to intentionally create extra sounds to ensure nearby robots are aware of their location.
To evaluate our claims, we first collected a real-world dataset of different people moving around a robot in various indoor settings. Onboard the robot, we record 4-channel, high-quality audio along with paired 360\({}^{\circ}\) RGB data, which we process to obtain pseudo-labels for the person's location relative to the robot. We name this the Robot Kidnapper dataset (Fig. 2, 3a) and provide more details in Section III.
We then use this dataset to learn person detection based on the incidental and often subtle sounds created by people as they move around. We show that our models are able to localize people both when the robot is stationary and when the robot is moving, a more difficult task due to the additional self-noise of the robot. We also implement our model on a real robot to demonstrate robotic human awareness using only audio. Overall, we present the following contributions:
1. A public dataset of synchronized, high-quality, 4-channel audio and 360\({}^{\circ}\) RGB data of different participants in multiple indoor scenes
2. Experimental evaluations of person detection using the subtle, incidental sounds of people moving around
3. Allowing real robots to track people using only the sounds of them moving
## II Related Work
### _Human Detection with Visual Perception_
Since perceiving humans enables a large number of downstream tasks, 'person' or 'human' is included as a category in most image-based object detection [29, 30] and segmentation [31] models. Autonomous cars use LIDAR sensors optionally combined with RGB cameras to detect pedestrians [32, 33] and forecast their future behaviour [34]. Additionally, surveillance systems use RGB or infrared images to detect [35, 36, 37], identify [38], and localize humans [39] and objects [40]. In settings where it can be assumed that any disturbances are caused by humans, human detection is performed through anomaly detection algorithms. For example, laser/ultraviolet beam breakers or proximity sensors are used on factory floor automated assembly lines [41], while some intrusion detection security systems [42] use audio. In contrast to these works, our paper focuses solely on passive audio signals to not only detect, but also localize humans.
### _Audio-Based Perception for Robots_
Audio has been used for robotic tasks involving both non-human and human interactions. Robots have used audio for various tasks like self-localization [43, 44], robotic pouring [45], and navigation using ambient sounds [46]. Sound source localization systems like [47, 48, 49] usually assume that the source emits loud, obvious sounds (_e.g._ beeps, music, speech). A common human-based task involves human detection, but these works tend to only use audio to estimate the direction of arrival of humans [17, 18, 19, 20]. In contrast, we focus on detection _and_ localization of a person using much quieter sounds, like those incidentally made by a person trying to walk quietly. And while Sasaki et al. [49] performs 3D localization, albeit of distinctive sounds played from a speaker, they assume a static sound source with a moving robot. In our work, we assume both the robot and sound source can be moving at the same time.
## III Dataset
To train our models to detect people based on the incidental sounds that they produce, we collected the Robot Kidnapper dataset. This dataset contains high-quality 4-channel audio recordings paired with 360\({}^{\circ}\) RGB video from the robot's egocentric point of view (Fig. 2). The person's position was annotated in coordinates relative to the robot. We collected data in 8 rooms across 4 buildings. To account for the potential impacts that physical properties of a room may have, the selected rooms vary in terms of size (small study room, large lecture hall, etc.) and material (concrete floors, carpeted floors, glass walls, etc.).
### _Human Presence Recordings_
The Robot Kidnapper dataset captures 12 participants in a range of environments performing a variety of actions. This _stress-tests_ the performance of our algorithm across diverse behaviors. Participants were prompted to perform 4 different actions during data capture to capture a wide range of sounds:
* _Stand still:_ Participants were asked to stand in place for 5 seconds before taking 1-2 steps to a different spot and repeating the procedure.
* _Walk quietly:_ Participants were prompted to move during the entire recording but to focus on minimizing any sounds that they produced.
* _Walk normally:_ Participants were prompted to walk at their normal speed and volume
Fig. 2: Frames from the Robot Kidnapper dataset (static robot). The participant wears a hat with ArUco markers [9] used to calculate ground truth radial distance. The RGB frames are used to calculate the ground truth centroid of the person using DeepLabv3+ [10]. Only the audio is used during training. The vertical red lines are the angles predicted by our model in an unseen room. The participant is walking normally in these frames.
* _Walk loudly:_ Here, participants were prompted to walk more loudly, which they accomplished by dragging their feet or stomping.
During data collection, the hardware was mounted to a Stretch RE-1 mobile manipulator robot. However, the robot produces sounds during movement, such as humming and clicking from wheel motors or rustling as the robot traverses bumps on the ground. We examine whether our methods can learn under the more difficult setting of detecting a moving person with these additional noises. For all 4 actions, we collected data under a _static robot_ and _dynamic robot_ condition. During the _static_ recordings, the robot was turned on but remained stationary. In the _dynamic_ recordings, the robot was teleoperated from outside the room and driven (translation and rotation) around the room with the participant. Participants were alone in the room. As our intention is to examine how well our models can detect people from only incidental sounds, non-incidental sounds such as talking were cropped out during post-processing. All recordings contain a single participant only. The data collection was approved by an Institutional Review Board (IRB) and participants gave informed consent and were compensated for their time. All combined, the human presence recordings total to approximately 8 hours, evenly split between all actions, robot conditions, and rooms.
### _Empty Room Recordings_
In addition to recording the sounds of human presence, we also recorded audio of the 8 rooms used in the dataset when it was empty. This empty room data helps our model be able to distinguish whether or not there is a moving person in the robot's vicinity. The empty room recordings are collected on the same robot setup as the human recordings. It is also split between _static_ and _dynamic_. This empty room dataset is approximately 5 hours in length.
We then collected a secondary empty room dataset without the Stretch RE-1 consisting only of short recordings. The _Empty Augmentation_ dataset was collected in 26 rooms across 6 buildings on Georgia Tech's campus. In each room, audio of the empty room was recorded from 2 different positions for 2 minutes each, resulting in around 1.5 total hours of audio. The audio from this dataset is used for data augmentation, which is described in Section IV-B.
### _Person Location Labels_
Training a person detection model requires ground truth labels for the location of the person relative to the robot. In our dataset, we annotate the person's position on the ground plane in polar coordinates. Specifically, we annotate the azimuthal angle \(\theta\) of the person relative to the robot's forward vector, and radial distance \(r\) of the person relative to the robot's origin. To label \(\theta\), we developed an approach based on a semantic segmentation model. Specifically, we use a pre-trained DeepLabv3+ [10] model to generate a mask of the person from an RGB frame and calculate the centroid of the person \((x,y)\). We then encode \(x\) using cyclical features, where \(W\) is the width of the frame.
\[\theta_{sin} =sin(2\pi x/W) \tag{1}\] \[\theta_{cos} =cos(2\pi x/W) \tag{2}\]
To label \(r\), participants wore a hat with ArUco markers [9]. Segments of the 360\({}^{\circ}\) RGB frame were re-projected using a pinhole camera model, then an ArUco pose estimator was run on these frames. Due to factors such as motion blur, lighting, and low resolution at further distances, ArUco markers were detected in only 74% of all the frames in the dataset. The distribution of \(r\) is shown in Fig. 2(b). Each dataset sample consists of a 1s clip of audio and corresponding video. We use the first frame of the video clip to extract person location labels. We sample overlapping clips at 4Hz. We note that while robust RGB-D cameras are readily available, they are limited to a narrow field of view and not suitable for our 360\({}^{\circ}\) data.
### _Hardware_
All audio recordings were done at 44.1 kHz using 4 RODE NT2-A microphones connected to a MOTU M6 audio interface. The polar pattern for all 4 microphones was set to cardioid with the front side facing outwards. To ensure that we capture the maximum amount of information from the audio, the microphone gain was adjusted to the highest setting, the PAD (passive attenuation device) was set to 0 dB and the high-pass filter was set to flat. An Insta360 ONE X2 was used for the 360\({}^{\circ}\) recordings. The hardware was mounted on the Stretch RE-1 robot (Fig. 2(a)). For the Empty Augmentation dataset, a standard tripod was used.
## IV Methodology
We examine acoustic localization of people using only the incidental sounds produced by their moving presence. We design and train models on our dataset of high-quality multi-channel audio paired with 360\({}^{\circ}\) RGB data from which we extract location labels. While we collected data of people standing still as well, our paper only analyzes the moving actions (quiet, normal, loud). We train our models using leave-one-out cross validation across all 8 rooms in the dataset. All results are from averaging test performance across the 8 unseen rooms.
Fig. 3: **(a) Dataset capture setup. (b) Distribution of radial distances between the robot and person in the dataset.**
### _Background Subtraction_
Given the weak-signal nature of the data, our models must be robust to the ambient noises in different rooms. As seen in Fig. 5, aside from loud walking, the other actions are difficult to distinguish from that of an empty room. This shows that most of the audio signals consist of background noise that our model must learn to ignore. To do so, we use a simple form of spectral subtraction. Right before recording in each room, we first collected 20s of empty room audio with either a static or dynamic robot. We split that audio into non-overlapping 1s clips, compute their spectrograms, and then take the average spectrogram. This average spectrogram, \(S_{empty}\), is the empty room profile. Now given an input spectrogram of audio from the same room, \(S_{in}\), we perform _background subtraction_ by computing \(S_{final}=S_{in}-w_{backsub}*S_{empty}\). \(w_{backsub}\) is a scalar weight that the empty spectrogram encoder learns (Fig. 4). We clamp the values to [0, 1].
### _Empty Room Augmentation_
Training our models to adapt to different background noises requires a wide variety of rooms to be seen during training. To supplement the 8 rooms we collected data in, we synthetically create additional rooms during training. Since audio is additive, we can inject additional noise into an audio recording and place the sounds of a moving person in a new room, one which contains a combination of ambient noises from two different rooms. This is the purpose of the Empty Augmentation dataset from Section III-B. Given an audio clip from the Robot Kidnapper dataset, \(x_{r}(t)\), and an audio clip of equal length of an empty room from the Empty Augmentation dataset, \(x_{aug}(t)\), we first normalize both clips to an RMS of \(0.02\). Then, we calculate a linear combination of the two waveforms: \(x_{syn}(t)=(1-w_{aug})*x_{r}(t)+w_{aug}*x_{aug}(t)\). \(w_{aug}\) is a scalar which we tune as a hyperparameter. We normalize \(x_{syn}(t)\) again before processing its spectrogram as usual.
For background subtraction, the same procedure described in Section IV-A is used to calculate empty room profiles for both the natural and synthetic empty room, resulting in \(S_{empty}^{nat}\) and \(S_{empty}^{syn}\). To compute the final empty room profile, the same weighting factor \(w_{aug}\) is used: \(S_{empty}=(1-w_{aug})*S_{empty}^{nat}+w_{aug}*S_{empty}^{syn}\). \(S_{empty}\) is then used for background subtraction.
### _Models_
**Architecture:** We adapt the audio encoder architecture from Vasudevan et al. [50] for our person detection task. The network takes a 1s clip of audio in the form of a spectrogram. To generate the spectrogram, we first normalize the raw waveform to a constant RMS value of \(0.02\). The waveform is then fed through a Short-Time Fourier Transform (STFT) with a window size of 512 and a hop length of 128 and then converted to the log scale. This results in a \([2,257,345]\) spectrogram for each microphone, where the 2 channels correspond to the real and complex components.
As seen in the architecture diagram in Fig. 4, we first subtract the empty room profile spectrogram from the input spectrogram (Section IV-A) for each microphone before passing them individually through a spectrogram encoder with shared weights, consisting of 4 strided convolutional layers. The output is a \([256,60,120]\) feature map for each spectrogram. These features are then concatenated in the channel dimension to form a \([256n,60,120]\) feature map, where \(n\) is the number of microphones being used, before being passed through the feature encoder. The feature encoder is an Atrous Spatial Pyramid Pooling (ASPP) module [10], which [50] found to be a powerful audio encoder for spatial audio tasks. We refer readers to [10, 50] for a more detailed description of the architecture. The output of the feature encoder is a \([1,240,480]\) feature map. The flattened feature map is passed through 4 task-specific decoders, each being a linear layer. Each decoder predicts one of the following:
**Azimuthal angle prediction**: \(\hat{\theta}_{sin}\) and \(\hat{\theta}_{cos}\) are each predicted by a decoder. We clamp the predictions to \([-1,1]\) and then decode back to the pixel coordinate \(\hat{x}=tan^{-1}(\hat{\theta}_{sin}/\hat{\theta}_{cos})\). We then apply a L1 loss between \(\hat{x}\) and \(x\). The loss values for empty room training samples are ignored so the model does not try to learn to predict the location of a non-existent person.
**Radial distance prediction**: We frame radial distance \(r\) estimation as a binary classification task by predicting if a person is within 1.7m of the robot, which is the median of the distribution in Fig. 2(b). We train using a binary cross-entropy loss and losses for empty room samples are ignored.
Fig. 4: Diagram of our model architecture. We perform background subtraction (Sec. IV-A) on input spectrograms before passing them through a spectrogram encoder with shared weights. The resulting features are concatenated and passed through the feature encoder based on the ASPP module [10]. The output is fed to 4 linear layer heads for the prediction tasks.
**Motion presence prediction**: The model learns a binary classification task of whether or not a person is moving in the room. A binary-cross entropy loss is used.
**Training:** We use the Adam [51] optimizer with a learning rate of \(10^{-4}\), momentum of \(0.9\), and weight decay of \(10^{-3}\). We train the entire model using a multi-task framework on a single NVIDIA A40 GPU. The model has 8.37M parameters.
## V Experiments
We present the person detection performance of our model trained on the Robot Kidnapper dataset. Performance is broken down into the 3 tasks that constitute person detection: \(\theta\), \(r\), and moving presence prediction. All results are the average performance across the 8 test room folds. We then compare against other methods and perform ablation studies to validate our model design. Finally, we demonstrate our model in the real world by implementing it on the Stretch RE-1 robot.
### _Model Comparisons_
For the angle prediction, we compare our model with GCC-PHAT [27], a commonly used handcrafted feature, and StereoCRW [28], an unsupervised method that learns spectrogram representations. Both methods estimate the time delay between two stereo channels from which the DOA can be calculated. For StereoCRW, we run inference with the provided pre-trained model weights which had been trained on significantly more data and demonstrates generalization capabilities. Both comparison models are designed for 2 microphones which can only predict the direction of sound within the range [-90\({}^{\circ}\), 90\({}^{\circ}\)]. To compare with our 360\({}^{\circ}\) method, we use an oracle which always selects the pair of microphones (front 2 or back 2 microphones) facing the person. We also compare against a naive oracle method, constant front, which always predicts 0\({}^{\circ}\) (straight ahead) relative to the microphone pair selected by the oracle.
Both radial distance and moving presence prediction are binary classification tasks, which we compare against chance. While Chen et al. [46] examines a related task of estimating distance to nearby walls based on ambient sounds of the room, they focus on non-sound producing objects. Meanwhile, we treat ambient sounds as noise and instead focus on the subtle sounds that are present within.
### _Azimuthal Angle Prediction_
We evaluate 3 variations of our model, each trained on a different number of microphones: all 4 microphones, the front 2 microphones, and a single front microphone. We calculate the mean absolute error (MAE) in degrees to measure angle prediction performance. We show separate metrics for inference on static and dynamic robot recordings.
Looking at Table I, we notice that both GCC-PHAT and StereoCRW have similar performance as the naive constant front method. This supports our claim that the subtle sounds we focus on are difficult for previous sound localization methods. An exception is StereoCRW on the loud category, which performs noticeably better than the other methods. Looking at Fig. 5, the waveform of the loud category has similar characteristics as the talking waveform. Since other sound localization methods tend to focus on relatively prominent sources of sounds like talking, it makes sense that a similar category like loud walking is detectable as well. For the more subtle actions, other methods are unable to pick out the useful sounds from the background noise.
Moving on to our models' performance, our 4-microphone model significantly outperforms all other methods, with both GCC-PHAT and StereoCRW having approximately twice the MAE across all categories. This shows that the incidental sounds created by a moving person provides a rich source of cues for person detection. Performance on static robot recordings are generally better than dynamic robot, suggesting that the added self-noise of a moving robot complicates the already difficult task of detecting these subtle sounds.
Looking next at our 2-microphone model, it outperforms the other methods for all static recordings. Recall that our 2-microphone model is at a disadvantage compared to the non-random methods, since those models have access to all 4 microphones and always picks the ideal pair facing the person. They only need to predict angles within the range [-90\({}^{\circ}\), 90\({}^{\circ}\)] while our model, with access to only a fixed pair of microphones, has to predict the entire 360\({}^{\circ}\) range. This again highlights our model's ability to detect and localize the subtle, incidental sounds produced by people when they move, even under constrained input settings.
While methods using time difference estimation can only unambiguously predict the direction of sound within 180
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{_Owiet_} & \multicolumn{2}{c}{_Normal_} & \multicolumn{2}{c}{_Load_} \\ Category & \multirow{2}{*}{Model} & Sta. & Dyn. & Sta. & Dyn. & Sta. & Dyn. \\ \hline \hline Random & Uniform 360\({}^{\circ}\) & 90 & 90 & 90 & 90 & 90 & 90 & 90 \\ \cline{2-7} & Constant Front & 50 & 43 & 50 & 46 & 50 & 43 \\ Oracle Mix Pair & GCC-PHAT & 44 & 47 & 45 & 43 & 46 & 47 \\ & StereoCRW & 52 & 46 & 51 & 48 & 37 & 34 \\ & 1 Mic & 67 & 75 & 64 & 71 & 64 & 74 \\ Ours & 2 Mics & 37 & 54 & 37 & 48 & 36 & 47 \\ & Base 4 Mics & 47 & 55 & 50 & 48 & 49 & 47 \\ & 4 Mics & **21** & **26** & **22** & **24** & **19** & **22** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Mean absolute error (MAE) in degrees for azimuthal angle prediction of our model and comparison methods across the 3 actions divided by static (Sta.) and dynamic (Dyn.) robot. Our Base 4 Mics model is trained without background subtraction (Sec. IV-A) and empty room augmentation (Sec. IV-B).
Fig. 5: Comparison of waveforms from all dataset actions and regular talking. No talking is used in our work, we show the waveform as a reference for a type of common sound source. All recordings were taken in the same room during the same recording session.
due to symmetry, the configuration of our microphones allows us to predict a wider angle range. We postulate that the cardioid polar pattern on our microphones breaks this symmetry by being more sensitive to the sounds coming from the front of the microphone versus the back. The model is able to learn from this subtle difference and determine which side the sound is coming from, even if the time difference between the two microphones is the same.
Finally, we also train our model on audio from only 1 fixed microphone. Understandably, this variation performs worse than the non-random methods. But the 1-microphone model still performs better than chance, suggesting that it is able to pick up some cues from the mono audio.
### _Radial Distance Prediction_
We also estimate the radial distance \(r\) which, combined with \(\theta\), gives us the location of the person. We frame this as a binary classification problem by setting the threshold to the median (1.7m) of the distribution (Fig. 2(b)) and predicting if the person is above or below that threshold. The model is trained on 4 microphones and performs better than chance (50%), shown in Table IIa. Quiet walking is the most difficult action for this task since it has the least amount of signal. However, performance seems to saturate at normal and loud walking, with similar performance between both actions.
### _Moving Presence Prediction_
We also want to detect the presence of a moving person in addition to localizing them. We evaluate our models on the binary classification task of differentiating between the audio of a person present and moving in the room (positive) and that of an empty room (negative) in both static and dynamic robot recordings. The model is trained on 4 microphones and performs significantly better than chance (Table IIb). As expected, the louder actions have better performance, since there are more obvious signals for the model to detect.
### _Ablation Studies_
We validate our model design by training a base, 4-microphone model with neither background subtraction nor empty room augmentation, shown in Table I. The base model has twice the MAE, demonstrating the necessity of these two features and the difficulty of the sounds we are learning from. Without additional methods to remove background noise, deep learning models have trouble detecting these subtle sounds. We do not decouple background subtraction and empty room augmentation because they complement each other, with the former allowing the model to adapt to different types of background noises while the latter provides more diverse training data for the model to actually learn the conditioning. Also, the 2- and 1-microphone models discussed in Section V-B can be seen as an ablation on the number of microphones.
### _Robotic Human Awareness_
We implement our trained model on the Stretch RE-1 to demonstrate robotic human awareness. Using the same hardware setup and in an unseen room, we input the most recent 1s clip of audio into the model and use the predicted angle to pan the RealSense camera on the Stretch to face the person. During the pan, the model does not perform inference to avoid the sound of the motor interfering with angle estimation. On a RTX 2080, the model runs at 142Hz. Fig. 6 provides a demonstration of the robot pointing at the person. We do not use the RealSense data in any way to estimate the person's direction. Given the narrow field of view of the RealSense (58\({}^{\circ}\)), we evaluate our algorithm's performance by determining the success rate of the person being present in the frame right after panning. We asked a participant to move at each of the 3 speeds for 4 minutes and obtained the following success rate: Quiet 80%, Normal 79%, Loud 82%.
## VI Conclusion
We demonstrate the ability to localize people using only the sounds they incidentally produce as they move. We present the Robot Kidnapper dataset and the resulting person detection models that can be implemented on robots to track a person as they move quietly. Our work opens up an avenue of exploration for how robots can learn human awareness with only passive audio sensing and without nearby humans needing to intentionally produce additional sounds.
**Limitations:** Our dataset only contains instances of a single person in a room up to a maximum distance of 6m. We also have not tested our method on different types of microphones. Additionally, we are unable to localize a person if they are standing completely still.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \multicolumn{3}{c}{**Overall Accuracy: 67\%**} \\ \hline \multicolumn{2}{c}{_Quiet_} & \multicolumn{2}{c}{_Normal_} & \multicolumn{2}{c}{_Loud_} \\ Sta. & Dyn. & Sta. & Dyn. & Sta. & Dyn. & Sta. & Dyn. \\ \hline \hline
67 & 61 & 71 & 66 & 71 & 66 \\ \hline \multicolumn{8}{c}{**(a)**} \\ \multicolumn{8}{c}{**Overall Accuracy: 87\%**} \\ \hline \multicolumn{8}{c}{**Negative**} \\ \multicolumn{8}{c}{_Empty_} & \multicolumn{2}{c}{_Quiet_} & \multicolumn{2}{c}{_Normal_} & \multicolumn{2}{c}{_Loud_} \\ Sta. & Dyn. & Sta. & Dyn. & Sta. & Dyn. & Sta. & Dyn. \\ \hline \hline
81 & 85 & 89 & 80 & 95 & 92 & 96 & 94 \\ \hline \end{tabular}
\end{table} TABLE II: **(a) Binary classification accuracy (%) of predicting if a moving person’s radial distance is above or below 1.7m. Results are separated by action and static (Sta.) and dynamic (Dyn.) robot recordings. Chance is 50%. (b) Binary (negative vs positive) classification accuracy (%) of predicting if there is a moving person in the room. We also separate results by action, static (Sta.), and dynamic (Dyn.) robot recordings within each class. Chance is 50%.**
Fig. 6: We implement our trained model on the Stretch RE-1 robot to track a person using only the incidental sounds created as they move quietly. The robot pans the RealSense camera, with green arrow attached, to face where the model estimates the person to be (zoom in for best results). |
2305.07793 | Control of ternary alloy composition during remote epitaxy on graphene | Understanding the sticking coefficient $\sigma$, i.e., the probability of an
adatom sticking to a surface, is essential for controlling the stoichiometry
during epitaxial film growth. However, $\sigma$ on monolayer graphene-covered
surfaces and its impact on remote epitaxy are not understood. Here, using
molecular-beam epitaxial (MBE) growth of the magnetic shape memory alloy
Ni$_2$MnGa, we show that the sticking coefficients for metals on
graphene-covered MgO (001) are less than one and are temperature and element
dependent, as revealed by ion backscattering spectrometry (IBS) and energy
dispersive x-ray spectroscopy (EDS). This lies in stark contrast with most
transition metals sticking on semiconductor and oxide substrates, for which
$\sigma$ is near unity at typical growth temperatures ($T<800\degree$C). By
initiating growth below $400 \degree$ C, where the sticking coefficients are
closer to unity and wetting on the graphene surface is improved, we demonstrate
epitaxy of Ni$_2$MnGa films with controlled stoichiometry that can be
exfoliated to produce freestanding membranes. Straining these membranes tunes
the magnetic coercive field. Our results provide a route to synthesize
membranes with complex stoichiometries whose properties can be manipulated via
strain. | Zach LaDuca, Katherine Su, Sebastian Manzo, Michael S. Arnold, Jason K. Kawasaki | 2023-05-12T23:00:37Z | http://arxiv.org/abs/2305.07793v1 | # Control of ternary alloy composition during remote epitaxy on graphene
###### Abstract
Understanding the sticking coefficient \(\sigma\), i.e., the probability of an adatom sticking to a surface, is essential for controlling the stoichiometry during epitaxial film growth. However, \(\sigma\) on monolayer graphene-covered surfaces and its impact on remote epitaxy are not understood. Here, using molecular-beam epitaxial (MBE) growth of the magnetic shape memory alloy Ni\({}_{2}\)MnGa, we show that the sticking coefficients for metals on graphene-covered MgO (001) are less than one and are temperature and element dependent, as revealed by ion backscattering spectrometry (IBS) and energy dispersive x-ray spectroscopy (EDS). This lies in stark contrast with most transition metals sticking on semiconductor and oxide substrates, for which \(\sigma\) is near unity at typical growth temperatures (\(T<800^{\circ}\)C). By initiating growth below \(400^{\circ}\) C, where the sticking coefficients are closer to unity and wetting on the graphene surface is improved, we demonstrate epitaxy of Ni\({}_{2}\)MnGa films with controlled stoichiometry that can be exfoliated to produce freestanding membranes. Straining these membranes tunes the magnetic coercive field. Our results provide a route to synthesize membranes with complex stoichiometries whose properties can be manipulated via strain.
pacs: Remote [1; 2] and van der Waals [3; 4; 5; 6] epitaxy on monolayer graphene-covered substrates are promising strategies for synthesizing single crystalline films that are mechanically decoupled from the substrate. In remote epitaxy, films are thought to grow on graphene-covered substrates with epitaxial registry to the substrate, due to the "remote" lattice potential of the substrate potential that permeates through graphene [1; 7]. Applications include lattice mismatched epitaxy with reduced dislocation densities [8; 9], etch-free exfoliation of membranes for flexible electronics and re-use of substrates [1], and discovery of new properties induced by extreme strain and strain gradients in membranes [10; 11; 12].
A fundamental challenge, however, is controlling the film stoichiometry during growth on graphene. Due to the weak van der Waals interactions, the sticking coefficients \(\sigma\) for metals on multilayer graphite are typically \(\sigma<0.1\) at room temperature as measured by desorption spectroscopy [13], x-ray photoemission spectroscopy (XPS) [14; 15], and scanning tunneling microscopy (STM) [14; 15]. This lies in stark contrast with the typical \(\sigma\sim 1\) for metals on semiconductor, oxide, and metal surfaces [16], which enables a simple one-to-one correspondence between film stoichiometry and incident flux ratios. Although \(\sigma\) on monolayer graphene-covered surfaces is anticipated to be closer to unity [16], due to the "remote" substrate interactions that permeate through graphene [1; 7; 17], sticking on graphene is less understood and unlikely to be exactly 1. Moreover, the "remote" argument suggests that \(\sigma\) on graphene-covered substrates should depend on the identity of the substrate. The impact of element-dependent, non unity sticking coefficients during growth on monolayer graphene is generally overlooked, in part, because remote epitaxy has focused on compound semiconductors like GaAs for which the stoichiometry is self-limited by growth within an adsorption-controlled window [18; 19]. But for more complex materials like ternary transition metal oxides or intermetallic Heusler compounds adsorption-controlled growth windows are only accessible in select cases [2; 20; 21; 22; 23]. Controlling the stoichiometry of these materials during remote epitaxy or van der Waals epitaxy on graphene [24; 25; 26; 2; 2; 10; 12] in the ultrathin limit will require understanding the sticking coefficients on graphene.
Here, using MBE growth of the magnetic shape memory alloy Ni\({}_{2}\)MnGa, we show that the sticking coefficients for transition metals on graphene-covered substrates are non unity and both element and temperature dependent. Our measurements of the stoichiometry by ion backscattering spectrometry (IBS) and energy dispersive spectroscopy (EDS) for films with thickness 20-80 nm provide upper bounds for the sticking coefficients of Ni and Mn on graphene/MgO, which are less than 0.6 at \(600^{\circ}\)C. Controlling the stoichiometry requires compensating for the nonunity sticking coefficient on graphene, or initiating growth at low temperatures where \(\sigma\) on graphene is near unity. We demonstrate epitaxial Ni\({}_{2}\)MnGa films that can be mechanically exfoliated, and show how externally applied strain in Ni\({}_{2}\)MnGa membranes tunes the magnetic coercive field.
Ni\({}_{2}\)MnGa films were grown by molecular beam epitaxy (MBE) on graphene-covered MgO (001) substrates. The graphene was grown by chemical vapor deposition on polycrystalline Cu foils and wet transferred to the MgO (001) substrate using a poly(methyl methacrylate) (PMMA) handle, Cu etch, and scoop from deionized water, as described in Refs. [5; 10]. Films with nominal composition Ni\({}_{2}\)MnGa and nominal thickness 20-80 nm were grown by MBE using elemental effusion cell sources, with typically fluxes of \(2.2\times 10^{13}\) atoms / (cm\({}^{2}\)-s) for Ni and \(1.1\times 10^{13}\) atoms / (cm\({}^{2}\)-s) each for Mn and Ga. We use the term "nominal" to indicate the composition and thickness if all of the incident Ni, Mn, and Ga atomic fluxes stuck to the surface (\(\sigma=1\)). All samples were capped with \(\sim 20\) nm of Au at room temperature before removal from the MBE system, to avoid oxidation. Fluxes were measured in situ using a quartz crystal microbalance and calibrated to absolute fluxes via ex situ
Ion Backscattering Spectrometry (IBS) measurements on calibration samples, grown at room temperature on Si. Energy dispersive x-ray spectroscopy (EDS, beam energy 10-20 keV, interaction depth of a few microns) was used to measure relative differences in Ni\({}_{2}\)MnGa film composition. The increased depth sampling of IBS and EDS, compared to more surface sensitive XPS, allows us to sum over all species that stick, including those that may intercalate [27, 28, 29] or diffuse [30, 31] beneath graphene, rather than primarily detecting species that reside at the surface.
Fig. 1(a) compares IBS measurements (He\({}^{+}\), 4.9 MeV, \(\theta=8^{\circ}\)) of a nominally 20 nm thick Ni\({}_{2}\)MnGa film grown on graphene/MgO with a film grown directly on MgO. Growth was performed at 600\({}^{\circ}\) C on an MgO substrate that is half covered with graphene, such that both sides of the sample are exposed to the same incident atomic fluxes of Ni, Mn, and Ga. We find that the areal density of Ga on graphene/MgO and on MgO are nearly equal. In contrast, the areal densities of Ni and Mn on graphene/MgO are only 50 to 60% of the Ni and Mn on the MgO surface. Similar results are found for EDS measurements of thicker films. Fig. 1(b) compares EDS measurements for nominally 80 nm thick Ni\({}_{2}\)MnGa films grown on graphene/MgO and MgO at 625\({}^{\circ}\)C, where again we observe similar sticking for Ga on the graphene/MgO and MgO sides, and a \(\sim 50\%\) reduction of the sticking for Ni and Mn on graphene/MgO compared to MgO.
The large stoichiometry differences in films with nominal thickness 20-80 nm are at first surprising, since differences in sticking coefficient are expected to be limited to within the first few atomic layers of growth. For planar film growth, after a layer of Ni\({}_{2}\)MnGa covers the graphene or MgO surface, subsequent Ni\({}_{2}\)MnGa film growth in both cases should be Ni\({}_{2}\)MnGa on Ni\({}_{2}\)MnGa, and thus the stoichiometries of thick films should converge. We attribute the large observed stoichiometry differences to a combination island morphology and reduced sticking coefficients on graphene. Scanning electron micrographs (SEM) reveal that the nominally 80 nm thick film grown on graphene/MgO at 625\({}^{\circ}\)C has a disconnected island morphology (Fig. 1(f)), which we attribute to poor wetting on low surface energy graphene. Similar poor wetting has been observed for other films on monolayer graphene-covered surfaces [30, 32]. In contrast, films grown directly on MgO at the same temperature have a smoother and more connected morphology (Fig.1(g)). This morphology on graphene suggests that even after tens of nanometers of nominal growth, some exposed regions of the graphene remain. Thus our IBS and EDS measurements result from combined sticking on exposed graphene regions (where \(\sigma<1\)) and on Ni\({}_{2}\)MnGa islands (where \(\sigma\sim 1\)). This _cumulative_ sticking coefficient \(\sigma^{\prime}\) is therefor an upper bound for the true sticking coefficient \(\sigma\) on graphene in the atomic layer limit.
Importantly, we find the cumulative sticking on graphene/MgO is highly temperature and element dependent. Fig. 1(c) plots the EDS intensity ratio on the graphene/MgO versus on MgO (\(I_{graphene}/I_{MgO}\)), for a series of nominally 80 nm thick Ni\({}_{2}\)MnGa films as a function of growth temperature. We normalize to the intensity on the MgO side, since the sticking coefficients for metals directly on MgO are nominally 1. Thus the ratio \(I_{graphene}/I_{MgO}\) is approximately equal to the cumula
Figure 1: (a) Ion Beam Scattering (IBS) for nominally 20 nm thick Ni\({}_{2}\)MnGa films grown on graphene/MgO and MgO at 600\({}^{\circ}\) C, showing reduced sticking for Ni and Mn on graphene. (b) Energy dispersive X-ray spectroscopy (EDS) measurements for Ni\({}_{2}\)MnGa films with nominal thickness 80 nm on gr/MgO and on MgO. Both IBS and EDS sample was were capped with a protective layer of Au. (c) EDS intensity ratios \(I_{graphene}/I_{MgO}\), tracing temperature and element dependent changes in the cumulative sticking coefficient for Ni\({}_{2}\)MnGa on graphene-covered MgO. Error bars are standard deviations on multiple regions of a given sample. (d,e) SEM images of the nominally 80 nm thick films grown at room temperature on graphene/MgO and MgO and capped with Au. (f,g) SEM images of the nominally 80 nm thick films grown at 625\({}^{\circ}\)C on graphene/MgO and MgO and capped with Au.
tive sticking coefficient on graphene/MgO. Cumulative sticking for Ni and Mn on graphene/MgO are lowest at high substrate temperature and approach 1 for all three elements below 400\({}^{\circ}\) C. We attribute this temperature dependence on graphene to the combined decreased desorption rate and smoother morphology with less exposed graphene at lower temperature (Fig. 1(d)). We attribute the reduced sticking on graphene, compared to MgO, to relatively weak van der Waals interactions between metal adsorbates and graphene. Indeed, density functional theory calculations suggest that adsorption energies for metals on graphene are of order \(E_{a}\sim 0.5\) eV, compared to \(E_{a}\sim 3\) eV for Au on the Au (111) surface [16]. Surprisingly, the sticking coefficient for Ga on graphene is near unity and independent of temperature, despite the fact that Ga has a higher vapor pressure than Ni or Mn. We speculate the increased sticking of Ga on graphene may arise from reactions or from intercalation beneath graphene. Ga, In, and Sn are known to intercalate at graphene/SiC interfaces [27], and Au is known to intercalate between sheets of graphite [28].
Our findings suggest that control of the Ni\({}_{2}\)MnGa film stoichiometry on graphene requires compensating for the non-unity sticking coefficients at high growth temperature or initiating the growth at lower temperatures where the cumulative sticking coefficients are closer to 1. Lower temperature growth is also beneficial for promoting a smoother morphology on graphene (Fig. 1(d)). Once the interface has formed and the graphene layer is buried, growth can resume under more normal temperatures and fluxes.
For simplicity adopt the strategy of growth at a fixed lower temperature. Fig. 2 compares x-ray diffraction patterns for Ni\({}_{2}\)MnGa films grown at 370\({}^{\circ}\) C, 400\({}^{\circ}\) C, and 600\({}^{\circ}\) C on graphene/MgO, with a film grown by directly on MgO. We find that the sample grown at 600\({}^{\circ}\) C on graphene/MgO displays several impurity reflections, marked by "x," consistent with large deviations from stoichiometry observed by IBS and EDS (Fig. 1). Films grown at 400\({}^{\circ}\)C and below on graphene display only the Heusler Ni\({}_{2}\)MnGa reflections and no impurity reflections. Interestingly, the films on graphene/MgO display both 00\(L\) and \(HH0\) reflections, indicating both (001) and (110) oriented growth, whereas epitaxy directly on MgO produces only (001) oriented growth (black curve).
Azimuthal \(\phi\) scans reveal that both (110) and (001) oriented Ni\({}_{2}\)MnGa domains on graphene/MgO (001) have well defined in-plane orientations with respect to the underlying MgO substrate, despite the presence of the polycrystalline graphene interlayer. In Fig. 3(a), the four-fold pattern of Ni\({}_{2}\)MnGa 101 reflections is rotated by 45 degrees with respect to the MgO 101. This in
Figure 3: (a) Azimuthal \(\phi\) scans for a Ni\({}_{2}\)MnGa film grown on graphene/MgO (001). The off axis 010 reflections track the in-plane orientation of Ni\({}_{2}\)MnGa domains with (110) out of plane orientation. The 101 reflections track the in-plane orientation of the (001) domain. (b) Domain orientations of Ni\({}_{2}\)MnGa (blue) with respect to MgO (001) (black) determined from (a).
Figure 2: Out of plane X-ray diffraction scans (Cu \(K\alpha\)) of Ni\({}_{2}\)MnGa films grown on MgO and on graphene/MgO at 600\({}^{\circ}\)C, 400\({}^{\circ}\)C, and 370\({}^{\circ}\)C, compared to a film grown directly on MgO. Asterisks * denote MgO substrate reflections and “x” denotes secondary phase reflections.
dicates that the (001) Ni\({}_{2}\)MnGa domain has a 45 degree rotated cube on cube epitaxial relationship to MgO, i.e. Ni\({}_{2}\)MnGa (001) [110] \(\parallel\) MgO (001) [100], and a 2% tensile lattice mismatch (Fig. 3(b)). For the (110) domain, we observe a four-fold pattern of 010 Ni\({}_{2}\)MnGa reflections aligned with the MgO 101. This indicates two rectangular domains, labelled A and B, with orientations Ni\({}_{2}\)MnGa (110) [001] \(\parallel\) MgO (001) [010] and Ni\({}_{2}\)MnGa (110) [001] \(\parallel\) MgO (001) [100] (Fig. 3(b)). For these (110)-orientated domains, the mismatch between Ni\({}_{2}\)MnGa \(d_{110}\) and the MgO \(a\) lattice spacings is 2%, while the mismatch in the orthogonal in-plane direction (\(a_{Ni2MnGa}\) vs \(a_{MgO}\)) is much larger and would require a larger supercell to produce a commensurate structure. We speculate that the presence of the graphene interlayer may relax the constraints of direct epitaxy in which there are direct bonds formed between film and substrate, and allow for alternative film orientations that lower the total energy. Similar new epitaxial structures have been observed in the form of rotated superstructures for GdPtSb films on graphene/sapphire [5]. Further studies are required to understand why the (110) domain appears on graphene/MgO and not directly on MgO.
Finally, we show that applying external strains to exfoliated membranes tunes magnetic properties. We exfoliate membranes by adhering the film to a glass slide using crystal bond, then peeling the film from the graphene/MgO. After exfoliation we observe only the 110 and 001-type film reflections and no substrate reflections, as shown in Fig. 4(a). We then apply ripples to the membrane to induce strain. The rippling was performed by adhering a tensile strained polyurethane film to the exfoliated Ni\({}_{2}\)MnGa membrane, heating to approximately 150\({}^{\circ}\)C to release the Ni\({}_{2}\)MnGa/polyurethane bilayer from the crystalbond, and relaxing to impart ripples upon contraction of the polyurethane. Further details of the rippling procedure are described in Ref. [10].
Fig. 4(b) shows SQUID magnetometry measurements for a Ni\({}_{2}\)MnGa film on graphene/MgO, and the same film after exfoliation and subjected to strain in the form of rippling, measured at 100 K with field oriented in plane. We find that strain and/or strain gradients enhance the coercive field, from 400 to 650 Oe. The membrane has thickness 80 nm, ripple period of 8 microns, and peak to peak height of 3 microns. Assuming a sinusoidal shape we we estimate the peak magnitudes of strain to be \(|\epsilon|<3.6\%\), if no plastic deformation [10]. The strain tunable coercive field may be useful for strain-assisted reading and writing of magnetic memory.
In summary, we showed that sticking coefficients for metals on graphene-covered substrates are non unity and highly dependent on element and temperature. IBS and EDS measurements of films with tens of nanometers thickness provide upper bounds for the sticking coefficients on graphene/MgO: \(\sigma<0.6\) for Ni and Mn and \(\sigma<1\) for Ga at 600\({}^{\circ}\)C. Surface sensitive measurements in the monolayer limit are required to fully quantify the atomic sticking coefficients on graphene, and understand the effects of changing the underlying substrate and effects of defects and contaminants at the graphene/substrate interface. In particular, the lattice potential permeation argument of remote epitaxy[6; 7] suggests that the sticking coefficients should also depend on the identity of the substrate. We show that synthesis at lower temperature \(\leq\) 400\({}^{\circ}\)C enables phase pure epitaxy of Ni\({}_{2}\)MnGa films on graphene/MgO. Similar strategies may apply to remote and van der Waals epitaxy of other materials with complex stoichiometries, for which adsorption-controlled growth windows are not accessible.
## Acknowledgment
We thank Greg Haugsted for IBS/RBS measurements. This work was primarily supported by the Air Force Office of Scientific Research grant FA9550-21-0127 (Z.L. and J.K.K.). Preliminary Heusler synthesis was supported by the National Science Foundation DMR-1752797 (Z.L., S.M., and J.K.K.). Graphene synthe
Figure 4: (a) X-ray diffraction before and after membrane exfoliation. (b) SQUID magnetometry of a relaxed Ni\({}_{2}\)MnGa film on graphene/MgO (dark blue), and on the same sample after exfoliation and rippling to create a strained Ni\({}_{2}\)MnGa membrane (light blue). The measurement was performed at 100 K with field oriented within the film plane.
sis via CVD supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Grant DE-SC0016007 (K.S. and M.S.A.).
We gratefully acknowledge the use of x-ray diffraction facilities supported by the NSF through the University of Wisconsin Materials Research Science and Engineering Center under Grant No. DMR-1720415.
|
2310.12896 | Properties of Ajima Circles | We study properties of certain circles associated with a triangle. Each
circle is inside the triangle, tangent to two sides of the triangle, and
externally tangent to the arc of a circle erected internally on the third side. | Stanley Rabinowitz, Ercole Suppa | 2023-10-19T16:47:30Z | http://arxiv.org/abs/2310.12896v1 | # Properties of Ajima Circles
###### Abstract.
We study properties of certain circles associated with a triangle. Each circle is inside the triangle, tangent to two sides of the triangle, and externally tangent to the arc of a circle erected internally on the third side.
Apollonius circle, Gergonne point, tangent circles.
**Mathematics Subject Classification (2020).** 51-02, 51M04.
Sangaku Journal of Mathematics (SJM) (c)SJM
ISSN 2534-9562
Volume 7 (2023), pp. xx-yy
Received XX September 2023 Published on-line XX XXX 2023
web: [http://www.sangaku-journal.eu/](http://www.sangaku-journal.eu/)
(c)The Author(s) This article is published with open access.1
Footnote 1: This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
## 1. Introduction
The following figure appears in a Sangaku described in [14] and reprinted in [21].
In this figure, the semicircle erected inwardly on side \(BC\) is named \(\omega_{a}\). Semicircles \(\omega_{b}\) and \(\omega_{c}\) are defined similarly. The circle inside \(\triangle ABC\), tangent to sides \(AB\) and \(AC\), and externally tangent to semicircle \(\omega_{a}\) is named \(\gamma_{a}\). Circles \(\gamma_{b}\) and \(\gamma_{c}\)
Figure 1. Sangaku configuration
are defined similarly. The sangaku gave a relationship involving the radii of the three circles.
Additional properties of this configuration were given in [29] and [30]. For example, in Figure 2 (left), the three blue common tangents are all congruent. Their common length is \(2r\), twice the inradius of \(\triangle ABC\). In Figure 2 (right), the six touch points lie on a circle with center \(I\), the incenter of \(\triangle ABC\).
It is the purpose of this paper to present properties of circles such as \(\gamma_{a}\) and also to generalize these results by replacing the semicircles with arcs having the same angular measure.
## 2. Properties of \(\omega_{a}\) and \(\gamma_{a}\)
In this section we will discuss properties of the configuration shown in Figure 3 in which \(\omega_{a}\) is any circle passing through vertices \(B\) and \(C\) of \(\triangle ABC\). The circle \(\gamma_{a}\) is inside \(\triangle ABC\), tangent to sides \(AB\) and \(AC\) and tangent to \(\omega_{a}\) at \(T\). This circle is sometimes known in the literature as an Ajima circle [10].
An _Ajima circle_ of a triangle is a circle (\(\gamma\)) that is tangent to two sides of the triangle and also tangent to a circle (\(\omega\)) passing through the endpoints of the third side. In this paper, we are primarily interested in Ajima circles that lie inside the triangle and for which \(\gamma\) and \(\omega\) are _externally_ t
Figure 3. The configuration we are studying
Figure 2. properties
Occasionally, we will generalize a result and present a theorem in which \(\gamma_{a}\) is any circle tangent to \(AB\) and \(AC\) (not necessarily tangent to \(\omega_{a}\)). To help the reader recognize when a result applies to an Ajima circle, we will color all Ajima circles yellow.
The standard notation related to our configuration that we use throughout this paper is shown in the following table.
\begin{tabular}{|c|l|} \hline \multicolumn{2}{|c|}{**Standard Notation**} \\ \hline
**Symbol** & **Description** \\ \hline \(a,b,c\) & lengths of sides of \(\triangle ABC\) \\ \hline \(\omega_{a}\) & circle through points \(B\) and \(C\) \\ \hline \(\gamma_{a}\) & Ajima circle inscribed in \(\angle BAC\) tangent to \(\omega_{a}\) \\ \hline \(D\) & center of \(\gamma_{a}\) \\ \hline \(T\) & Unless specified otherwise, \(T\) is the point where \(\gamma_{a}\) touches \(\omega_{a}\). \\ \hline \(O_{a}\) & center of \(\omega_{a}\) \\ \hline \(I\) & incenter of \(\triangle ABC\) \\ \hline \(r\) & inradius of \(\triangle ABC\) \\ \hline \(R\) & circumradius of \(\triangle ABC\) \\ \hline \(p\) & semiperimeter of \(\triangle ABC=(a+b+c)/2\) \\ \hline \(\Delta\) & area of \(\triangle ABC\) \\ \hline \(S\) & twice the area of \(\triangle ABC\) (i.e. \(2\Delta\)) \\ \hline \(G_{e}\) & Gergonne point of \(\triangle ABC\) \\ \hline \(\theta\) & angular measure of arc \(\widehat{BTC}\) \\ \hline \end{tabular}
Without loss of generality, we will assume \(AB<AC\).
We will survey some known results and give some new properties of this configuration.
The following result is due to Protasov [23]. Proofs can be found in [3] and [1, pp. 90-94].
**Theorem 2.1** (Protasov's Theorem).: _The segment \(TI\) bisects \(\angle BTC\) (Figure 4)._
The following result comes from [34].
**Lemma 2.2**.: _Let \(\Gamma\) and \(\Omega\) be two circles that are externally tangent at \(T\). Let \(B\) and \(C\) be points on \(\Omega\) and let \(BU\) and \(CV\) be tangents to \(\Gamma\) as shown in Figure 5. Then_
\[\frac{BU}{CV}=\frac{BT}{CT}.\]
Proof.: Let \(BT\) meet \(\Gamma\) at \(Q\) and let \(CT\) meet \(\Gamma\) at \(P\). Let \(XY\) be the tangent to both circles at \(T\) (Figure 6).
We have
\[\angle TPQ=\frac{1}{2}\widehat{QT}=\angle YTQ=\angle XTB=\frac{1}{2}\widehat{ BT}=\angle TCB.\]
Since \(\angle QTP=\angle BTC\), we find that \(\triangle PQT\sim\triangle CBT\). Thus,
\[\frac{BT}{QT}=\frac{CT}{PT}\]
Figure 5. \(BU/CV=BT/CT\)
Figure 6.
which implies
\[\frac{BT}{BQ}=\frac{CT}{CP}\quad\text{or}\quad\frac{BQ}{CP}=\frac{BT}{CT}.\]
Since \(BU\) and \(CV\) are tangents, we have \((BU)^{2}=BT\cdot BQ\) and \((CV)^{2}=CT\cdot CP\). Combining, we get
\[\frac{(BU)^{2}}{(CV)^{2}}=\frac{BT\cdot BQ}{CT\cdot CP}=\left(\frac{BT}{CT} \right)\left(\frac{BQ}{CP}\right)=\left(\frac{BT}{CT}\right)\left(\frac{BT}{ CT}\right)=\frac{(BT)^{2}}{(CT)^{2}}\]
which implies \(BU/CV=BT/CT\).
**Theorem 2.3**.: _Let \(\gamma_{a}\) touch \(AB\) and \(AC\) at \(F\) and \(E\), respectively. Let \(TI\) meet \(BC\) at \(Z\) (Figure 7). Then_
\[\frac{BF}{CE}=\frac{BZ}{CZ}.\]
Proof.: By Lemma 2.2, \(BF/CE=BT/CT\). By Protasov's Theorem, \(TI\) bisects \(\angle BTC\). Since \(TZ\) is an angle bisector of \(\triangle BTC\), we have \(BT/TC=BZ/CZ\). Hence \(BF/CE=BT/CT=BZ/CZ\).
The following result comes from [39]. A nice geometric proof can be found in [3]. See also [22].
**Theorem 2.4** (The Catalytic Lemma).: _Let \(E\) be the point where \(\gamma_{a}\) touches \(AC\). Then \(E\), \(T\), \(I\), and \(C\) are concyclic (Figure 8)._
Figure 8. \(E\), \(T\), \(I\), \(C\) are concyclic
Figure 7. \(BF/CE=BZ/CZ\)
**Theorem 2.5**.: _Let \(E\) be the point where \(\gamma_{a}\) touches \(AC\). Then \(\angle BTC=2\angle IEC\) (Figure 9)._
Proof.: By the Catalytic Lemma, \(E\), \(T\), \(I\), and \(C\) are concyclic (Figure 10). By Protasov's Theorem, \(TI\) bisects \(\angle BTC\), so \(\angle BTC=2\angle ITC\). But \(\angle ITC=\angle IEC\) because both angles subtend the same arc \(\widetilde{IC}\).
The following result comes from [3].
**Theorem 2.6**.: _Let the touch points of circle \(\gamma_{a}\) with \(AC\) and \(AB\) be \(E\) and \(F\), respectively. Suppose \(\omega_{a}\) meets \(AC\) at \(J\) between \(A\) and \(C\). Let \(X\) be the center of the excircle of \(\triangle BJC\) opposite \(C\). Then \(X\), \(F\), and \(E\) are collinear (Figure 11)._
The following result comes from [32].
Figure 11. \(X\), \(F\), and \(E\) are collinear
Figure 9. blue angle is twice green angle
**Theorem 2.7**.: _The perpendicular bisector of \(BC\) meets \(\omega_{a}\) on the opposite side from \(T\) at \(N\) as shown in Figure 12. Then \(T\), \(I\), and \(N\) are collinear._
Proof.: From Protasov's Theorem, \(TI\) bisects \(\angle BTC\). Thus, \(TI\) intersects the arc \(\widehat{BC}\) (not containing \(T\)) at its midpoint. This midpoint lies on the perpendicular bisector of \(BC\) and we are done.
**Note.** This theorem provides a nice method for constructing \(\gamma_{a}\). First construct \(N\) as the intersection of the perpendicular bisector of \(BC\) with \(\omega_{a}\). Then construct \(T\) as the intersection of \(NI\) with \(\omega_{a}\). Finally, the center of \(\gamma_{a}\) is found as the intersection of the line joining the center of \(\omega_{a}\) and \(T\) with \(AI\).
**Corollary 2.8**.: _We have \(\angle NBC=\angle BCN\)._
The following result is suggested by [17].
**Theorem 2.9**.: _Let \(N\) be the midpoint of arc \(\widehat{BC}\) opposite \(T\). Let \(E\) be the point where \(\gamma_{a}\) touches side \(AC\) (Figure 13). Let \(J\) be the point where \(\omega_{a}\) meets \(AC\). Then \(IE\parallel NJ\)._
Proof.: By Theorem 2.5, \(\angle IEC\) is half of \(\angle BTC\). But half of \(\angle BTC\) is equal to \(\angle NTC\) and \(\angle NTC=\angle NJJC\) because both angles are inscribed in arc \(\widehat{NC}\). Thus, \(\angle IEC=\angle NJJC\) which makes \(IE\parallel NJ\).
Figure 12.
Figure 13. blue lines are parallel
The following result comes from [31].
**Theorem 2.10**.: _Let \(\omega_{a}\) meet \(AC\) at \(J\) and let the line through \(I\) parallel to \(BJ\) meet \(AC\) at \(F\). Let \(E\) be the point where \(\gamma_{a}\) touches \(AC\). Then \(IF=FE\) (Figure 14)._
Proof.: Let \(TI\) meet \(\omega_{a}\) again at \(N\). By Theorem 2.9, \(NJ\parallel IE\) (Figure 15).
\(\angle 3=\angle 1\). But \(\angle 1=\angle 2\) since both subtend arc \(\widehat{NC}\) in circle \(\omega_{a}\). Hence,
\[\angle 3=\angle 2. \tag{1}\]
By Corollary 2.8, we have \(\angle 2=\angle 6\). But \(\angle 6=\angle 4\) since both subtend arc \(\widehat{BN}\). Thus,
\[\angle 2=\angle 4. \tag{2}\]
Figure 14. blue segments are congruent
Figure 15.
Since \(IE\parallel NJ\) and \(IF\parallel BJ\), we can conclude that
\[\angle 4=\angle 5. \tag{3}\]
Combining equations (1), (2), and (3), we find that
\[\angle 3=\angle 2=\angle 4=\angle 5,\]
so \(\angle 3=\angle 5\). Thus, \(\triangle FIE\) is isosceles with \(IF=FE\).
For other proofs, see [4] and [5].
This theorem provides another simple way to construct circle \(\gamma_{a}\). Draw the line through \(I\) parallel to \(BJ\) to get point \(F\) where this line meets \(AC\). With center \(F\), draw a circle with radius \(FI\). Let this circle meet \(AC\) (nearer \(A\)) at point \(E\). This is the touch point for circle \(\gamma_{a}\). Erect a perpendicular at \(E\) to \(AC\). This perpendicular meets \(AI\) at the center of \(\gamma_{a}\).
**Theorem 2.11**.: _Let \(T\) be any point on arc \(\widehat{BC}\). Let \(F\) be the foot of the perpendicular from \(T\) to \(BC\) (Figure 16). Then \(\angle BTF=\angle O_{a}TC\)._
Proof.: Let \(G\) be the foot of the perpendicular from \(O_{a}\) to \(TC\). Since \(\angle CBT\) is measured by half the measure of \(\widehat{TC}\) and \(\angle TO_{a}C\) equals the measure of \(\widehat{TC}\), we have
\[\angle FBT=\frac{1}{2}\angle CO_{a}T=\angle GO_{a}T.\]
Complements of equal angles are equal, so \(\angle BTF=\angle O_{a}TC\).
Figure 16. green angles are equal
**Theorem 2.12**.: _Let \(M\) be the midpoint of \(BC\) (Figure 17). Then \(\angle MO_{a}T=2\angle ITO_{a}\)._
Proof.: Let \(F\) be the foot of the perpendicular from \(T\) to \(BC\). By Theorem 2.11, \(\angle 1=\angle 2\) in the figure to the right.
By Protasov's Theorem, \(\angle BTI=\angle ITC\). Therefore \(\angle FTI=\angle ITO_{a}\) or
\[\angle ITO_{a}=\frac{1}{2}\angle FTO_{a}=\frac{1}{2}\angle MO_{a}T\]
since \(TF\parallel MO_{a}\).
Hence, \(\angle MO_{a}T=2\angle ITO_{a}\).
**Theorem 2.13**.: _Let \(IT\) meet \(\gamma_{a}\) again at \(T^{\prime}\) (Figure 18). Then \(T^{\prime}D\perp BC\)._
The following proof is due to Biro Istvan.
Figure 17. blue angle = twice green angle
Figure 18. \(T^{\prime}D\perp BC\)
Proof.: Since \(\gamma_{a}\) and \(\omega_{a}\) are tangent at \(T\), this means \(D\), \(T\), and \(O_{a}\) are collinear. Since \(I\), \(T\), and \(T^{\prime}\) are also collinear, we find \(\angle T^{\prime}TD=\angle ITO_{a}\). Extend \(TI\) until it meets \(\omega_{a}\) again at \(N\) (Figure 19).
By Theorem 2.7, \(NO_{a}\perp BC\). Base angles of an isosceles triangle are equal and vertical angles are equal, so \(\angle DT^{\prime}T=\angle T^{\prime}TD=\angle ITO_{a}=\angle O_{a}NT\). So \(T^{\prime}D\parallel O_{a}N\) because \(\angle DT^{\prime}N=\angle O_{a}NT\). Thus, \(T^{\prime}D\perp BC\).
**Theorem 2.14**.: _Let \(IT\) meet \(\gamma_{a}\) again at \(T^{\prime}\). Let \(E\) be the point where \(\gamma_{a}\) touches side \(AC\) (Figure 20). Then \(\angle EDT^{\prime}=\angle ACB\)._
Proof.: Let \(DT^{\prime}\) meet \(AC\) at \(T_{1}\) and let \(DT^{\prime}\) meet \(BC\) at \(T_{2}\). By Theorem 2.13, \(T_{1}T_{2}\perp BC\). From right triangles \(T_{1}ED\) and \(CT_{2}T_{1}\), we see that \(\angle EDT^{\prime}=\angle ACB\) since they are both complementary to \(\angle T_{2}T_{1}C\).
Figure 20. green angles are equal
## 3. Properties Related to the Incircle
In this section, we will discuss properties of Ajima circles that are related to the incircle. As before, \(I\) will denote the incenter of \(\triangle ABC\). Obviously, \(IL\perp BC\).
Throughout this section, points will be labeled as shown in Figure 21 and described in the following table.
**Theorem 3.1**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.2**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.3**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.4**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.5**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.6**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.7**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.8**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.9**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.10**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.11**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.12**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.13**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.14**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.15**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.16**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.17**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.18**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.19**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.20**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.21**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.22**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.30**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.4**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.5**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.6**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(D\) be the center of \(\gamma_{a}\). Then \(DL^{\prime}\perp BC\) (Figure 22)._
**Theorem 3.7**.:
Proof.: The incircle and circle \(\gamma_{a}\) are homothetic with \(A\) being the center of the homothety. This homothety maps \(D\) to \(I\) and maps \(L^{\prime}\) to \(L\). Since a homothety maps lines into parallel lines, we can conclude that \(DL^{\prime}\parallel IL\). Since \(IL\perp BC\), we therefore have \(DL^{\prime}\perp BC\).
**Theorem 3.2**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Then the tangent to \(\gamma_{a}\) at \(L^{\prime}\) is parallel to \(BC\) (Figure 23)._
Proof.: The tangent at \(L^{\prime}\) is perpendicular to \(DL^{\prime}\) (Figure 22) which is also perpendicular to \(BC\) by Theorem 3.1.
**Theorem 3.3**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(D\) be the center of \(\gamma_{a}\). Let \(T\) be any point on \(\gamma_{a}\). Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\) (closer to \(L\)). Let \(AT\) meet \(\gamma_{a}\) at \(Y\) and \(Y^{\prime}\) with \(Y\) closer to \(A\). Let \(AT\) extended meet the incircle again at \(T^{\prime}\) (Figure 24). Then \(YL^{\prime}\parallel Y^{\prime}L\), \(L^{\prime}T\parallel LT^{\prime}\), and \(YD\parallel Y^{\prime}I\)._
Proof.: The incircle and circle \(\gamma_{a}\) are homothetic with \(A\) being the center of the homothety. This homothety maps \(D\) to \(I\), \(L^{\prime}\) to \(L\), \(Y\) to \(Y^{\prime}\), and \(T\) to \(T^{\prime}\). These results then follow because a homothety maps a line into a parallel line.
Figure 23. blue tangent is parallel to \(BC\)
**Theorem 3.4**.: _We have \(\angle XTL^{\prime}=\angle XLB\) (Figure 25)._
Proof.: This is a special case of the following more general theorem.
**Theorem 3.5**.: _Let \(T\) be any point on \(\gamma_{a}\), on the opposite side of \(AL\) from \(B\). Let \(AL\) meet \(\gamma_{a}\) at \(X\) and \(L^{\prime}\) (with \(X\) nearer \(A\)). Then \(\angle XTL^{\prime}=\angle XLB\)._
Proof.: Let \(L^{\prime}Z\) be the tangent to \(\gamma_{a}\) at \(L^{\prime}\) as shown in Figure 26.
From Theorem 3.2, \(L^{\prime}Z\parallel LB\), so \(\angle AL^{\prime}Z=\angle ALB\). But \(\angle XTL^{\prime}=\angle XL^{\prime}Z\) since both are measured by half of arc \(\widehat{XL^{\prime}}\). Thus \(\angle XTL^{\prime}=\angle ALB=\angle XLB\).
Figure 26. green angles are equal
Figure 25. green angles are equal
**Theorem 3.6**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(T\) be any point on \(\gamma_{a}\), on the opposite side of \(AL\) from \(B\). Let \(AL\) meet \(\gamma_{a}\) at \(X\) and \(L^{\prime}\) (with \(X\) nearer \(A\)) as shown in Figure 27. Let \(TL^{\prime}\) meet \(CB\) at \(K\). Then \(X\), \(T\), \(L\), and \(K\) are concyclic._
Proof.: From Theorem 3.5, \(\angle XTL^{\prime}=\angle XLB\), or equivalently, \(\angle XTK=\angle XLK\). Thus, \(X\), \(T\), \(L\), and \(K\) are concyclic.
The next five results have been suggested by Navid Safaei.
**Lemma 3.7**.: _Let \(N\) be the midpoint of arc \(\widehat{BC}\) of a circle. Let \(T\) be a point on arc \(\widehat{BN}\). Then \(TN\) is the external angle bisector of \(\angle BTC\) (Figure 28)._
Proof.: Using properties of angles inscribed in a circle, we have
\[\angle NTU=\frac{1}{2}(\widehat{BT}+\widehat{TN})=\frac{1}{2}\widehat{BN}= \frac{1}{2}\widehat{CN}=\angle CTN,\]
so \(TN\) bisects \(\angle CTU\).
Figure 27. \(X,T,L,K\) lie on a circle.
**Lemma 3.8**.: _Let \(N\) be the midpoint of arc \(\widehat{BC}\) of \(\omega_{a}\). Then \(L^{\prime}\), \(T\), and \(N\) are collinear (Figure 29)._
Proof.: Note that \(T\) is the center of a homothety between \(\gamma_{a}\) and \(\omega_{a}\). Since the tangents at \(L^{\prime}\) and \(N\) are parallel to \(BC\) (by Theorem 3.2), this means that they are corresponding points of the homothety and hence \(L^{\prime}N\) passes through the center of the homothety, \(T\).
**Theorem 3.9**.: _The line \(TL^{\prime}\) is the exterior angle bisector of \(\angle BTC\) in \(\triangle BTC\). (Figure 30)._
Proof.: This follows immediately from Lemmas 3.7 and 3.8.
Figure 29.
**Theorem 3.10**.: _Let \(E\) and \(F\) be the points where \(\gamma_{a}\) touches \(AC\) and \(AB\), respectively. Then \(EF\), \(TL^{\prime}\), and \(CB\) are concurrent._
Proof.: Let \(EF\) meet \(CB\) at \(K\). From Theorem 2.3, we have
\[\frac{BT}{CT}=\frac{BF}{CE}. \tag{4}\]
By Menelaus' Theorem applied to \(\triangle ABC\)
and the transversal \(FE\), we have
\[\frac{BF}{FA}\cdot\frac{AE}{EC}\cdot\frac{CK}{KB}=-1\]
from which we get
\[\frac{BF}{CE}=\frac{KB}{CK} \tag{5}\]
because \(FA=AE\). From equations (4) and (5) we get
\[\frac{KB}{CK}=\frac{BF}{CE}=\frac{BT}{CT}. \tag{6}\]
Hence (by a property of external angle bisectors), \(TK\) is the external angle bisector of \(\angle BTC\) in \(\triangle BTC\). Let \(N\) be the midpoint of arc \(\widehat{BC}\) as shown in the figure above. By Lemma 3.7, \(TN\) is also the external angle bisector of \(\angle BTC\). It follows that \(N\), \(T\), and \(K\) are collinear. Since \(TK\) passes through both \(L^{\prime}\) and \(N\) (by Lemma 3.8), the four points, \(K\), \(L^{\prime}\), \(T\), and \(N\) lie on a line, so \(EF\), \(TL^{\prime}\), and \(CB\) all pass through \(K\)
Figure 31. Blue lines are concurrent.
**Theorem 3.11**.: _The lines \(YX\), \(TL^{\prime}\), and \(CB\) are concurrent (Figure 32)._
Proof.: Let \(E\) and \(F\) be the points where \(\gamma_{a}\) touches \(AC\) and \(AB\), respectively. Let \(EF\) meet \(CB\) at \(K\). Then \(EF\), \(TL^{\prime}\), and \(CB\) are concurrent at \(K\) by Theorem 3.10 (Figure 33).
Since \(AF\) and \(AE\) are tangents to circle \(\gamma_{a}\), this means that \(EF\) is the polar of \(A\) with respect to \(\gamma_{a}\). Since \(AXL^{\prime}\) and \(AYT\) are two secants from \(A\), this means that \(YX\) meets \(TL^{\prime}\) on the polar of \(A\) (line \(EF\)). But \(K\) is the only point on \(TL^{\prime}\) that lies on the polar of \(A\). Thus, \(YX\) also passes through \(K\).
Figure 32. Blue lines are concurrent.
The following result comes from [28].
**Theorem 3.12**.: _The line \(TL^{\prime}\) bisects \(\angle ATL\) (Figure 34)._
Proof.: Let \(TL^{\prime}\) meet \(CB\) at \(K\) (Figure 35).
By Theorem 3.11, \(YX\) passes through \(K\). By Theorem 3.6, \(XTLK\) is a cyclic quadrilateral, so \(\angle KTL=\angle KXL\). But since \(XYTL^{\prime}\) is also a cyclic quadrilateral, \(\angle KXL=\angle YTL^{\prime}\). Thus, \(\angle KTL=\angle YTL^{\prime}\) so \(TL^{\prime}\) bisects \(\angle ATL\).
Figure 34. \(TL^{\prime}\) bisects \(\angle ATL\)
Figure 35.
**Theorem 3.13**.: _Extend \(AT\) until it meets the incircle at \(T^{\prime}\) as shown in Figure 36. Then \(TL=TT^{\prime}\)._
Proof.: Let \(AL\) meet \(\gamma_{a}\) at \(L^{\prime}\), closer to \(L\), as shown in the figure to the right. The incircle and \(\gamma_{a}\) are homothetic with \(A\) being the center of the homothety. Since a homothety maps a line into a parallel line, \(L^{\prime}T\parallel LT^{\prime}\). Thus \(\angle 2=\angle 3\) and \(\angle 1=\angle 4\). By Theorem 3.12, \(\angle 1=\angle 2\). Hence \(\angle 3=\angle 4\) making \(\angle TLT^{\prime}\) an isosceles triangle.
The following result comes from [33].
**Theorem 3.14**.: _Let \(AT\) meet \(BC\) at \(M\). Then \(TI\) bisects \(\angle LTM\) (Figure 37)._
Figure 36. blue lines are congruent
Figure 37. \(TI\) bisects \(\angle LTM\)
Proof.: Let \(AM\) meet the incircle at \(T^{\prime}\) (closer to \(M\)) as shown in the figure to the right. By Theorem 3.13, \(TL=TT^{\prime}\). Since both \(L\) and \(T^{\prime}\) lie on the incircle, we must have \(IL=IT^{\prime}\). Thus \(\triangle LTI\cong\triangle T^{\prime}TI\) by SSS. Hence \(\angle LTI=\angle ITT^{\prime}\).
**Theorem 3.15**.: _We have \(\angle ATD=\angle ILT\) (Figure 38)._
Proof.: Let \(F\) be the foot of the perpendicular from \(T\) to \(BC\). Let \(AT\) meet \(BC\) at \(M\). Since \(\gamma_{a}\) is tangent to \(\omega_{a}\) at \(T\), this means that \(DTO_{a}\) is a straight line.
Number the resulting angles as shown in the figure to the right. Lines \(AM\) and \(DO_{a}\) meet at \(T\) forming equal vertical angles. These are labeled \(x\) in the figure. From Theorem 2.11, \(\angle BTF=\angle O_{a}TC\). These are labeled \(1\) in the figure. Since \(TF\parallel IL\), \(\angle FTL=\angle ILT\). These are labeled \(y\) in the figure. From Theorem 3.14, \(\angle LTI=\angle ITM\). These are labeled \(2\) in the figure.
By Protasov's Theorem, \(1+y+2=2+x+1\). Thus \(x=y\) and \(\angle ATD=\angle ILT\).
Figure 38. green angles are equal
**Lemma 3.16**.: _Two circles, \(C_{1}\) and \(C_{2}\), are internally tangent at \(P\). A chord \(AB\) of \(C_{1}\) meets \(C_{2}\) at points \(C\) and \(D\) as shown in Figure 39. Then \(\angle APC=\angle DPB\)._
Proof.: Let \(t\) be the common tangent at \(P\). Let \(PA\) meet \(C_{2}\) at \(E\) and let \(PB\) meet \(C_{2}\) at \(F\). Label the angles as shown in the figure to the right.
In the blue circle, \(\angle 1=\angle 2\) since both are measured by half of arc \(\widehat{PF}\). In the red circle, \(\angle 1=\angle 3\) since both are measured by half of arc \(\widehat{PB}\).
Thus \(\angle 2=\angle 3\) which makes \(EF\parallel AB\). Parallel chords intercept equal arcs, so \(\widehat{CE}=\widehat{FD}\) which implies \(\angle x=\angle y\).
The following result comes from [9].
**Theorem 3.17**.: _Let \(AT\) meet \(BC\) at \(M\). The \(\odot TLM\) is tangent to \(\gamma_{a}\) (Figure 40)._
Figure 40. three circles touch at \(T\)
Figure 39. green angles are equal
Proof.: Let \(\Gamma\) be the circle tangent to \(\omega_{a}\) at \(T\) and passing through \(L\). Let \(LC\) meet \(\Gamma\) again at \(M^{\prime}\) as shown in Figure 41. By Lemma 3.16,
\[\angle BTL=\angle M^{\prime}TC.\]
These are labeled "\(1\)" in the figure. By Protasov's Theorem,
\[\angle BTI=\angle ITC.\]
Subtracting shows that
\[\angle 2=\angle 3.\]
So \(TI\) bisects \(\angle LTM^{\prime}\). But by Theorem 3.14, \(TI\) bisects \(\angle LTM\). This implies that \(M^{\prime}=M\), so \(\Gamma=\odot(TLM)\) and we are done.
**Theorem 3.18**.: _We have \(\angle DTI=\angle TIL\) (Figure 42)._
Figure 41.
Figure 42. green angles are equal
Proof.: Let \(AT\) meet \(BC\) at \(M\). The sum of the angles of \(\triangle TIL\) is \(180^{\circ}\), so
\[180^{\circ}-\angle LTI=\angle TIL+\angle TLI. \tag{7}\]
Since \(ATM\) is a straight line, we have
\[\angle DTI=180^{\circ}-\angle ITM-\angle ATD.\]
By Theorem 3.14, \(\angle ITM=\angle LTI\), so
\[\angle DTI=180^{\circ}-\angle LTI-\angle ATD.\]
From equation (7), we get
\[\angle DTI=\angle TIL+\angle TLI-\angle ATD.\]
From Theorem 3.15, \(\angle TLI=\angle ATD\). Hence \(\angle DTI=\angle TIL\).
**Lemma 3.19**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(T\) be any point on \(\gamma_{a}\) on the other side of \(AL\) from \(B\). Let \(AL\) meet \(\gamma_{a}\) at \(X\) (nearer \(A\)). Let \(AT\) meet the incircle at \(Y^{\prime}\). Then \(X\), \(Y^{\prime}\), \(T\), and \(L\) are concyclic (Figure 43)._
Proof.: Let \(L^{\prime}\) be the point (nearer \(L\)) where \(AL\) meets \(\gamma_{a}\) as shown in the figure to the right. Lines \(AXL^{\prime}\) and \(AYT\) are both secants to \(\gamma_{a}\), so
\[AX\cdot AL^{\prime}=AY\cdot AT.\]
Note that the incircle and circle \(\gamma_{a}\) are homothetic with \(A\) as the center of the homothety. This homothety maps \(L^{\prime}\) to \(L\) and maps \(Y\) to \(Y^{\prime}\). Therefore,
\[\frac{AL^{\prime}}{AL}=\frac{AY}{AY^{\prime}}.\]
Hence
\[\frac{AX}{AT}=\frac{AY}{AL^{\prime}}=\frac{AY^{\prime}}{AL}.\]
Thus \(AX\cdot AL=AY^{\prime}\cdot AT\) which implies that \(X\), \(Y^{\prime}\), \(T\), and \(L\) lie on a circle.
Figure 43. four points lie on a circle
The following result comes from [35].
**Theorem 3.20**.: _We have \(IT\perp TL^{\prime}\) (Figure 44)._
Proof.: Line \(TL^{\prime}\) is the external angle bisector of \(\angle BTC\) by Theorem 3.9. Line \(TI\) is the internal angle bisector of \(\angle BTC\) by Protasov's Theorem. Thus, \(IT\perp TL^{\prime}\).
**Theorem 3.21**.: _Let \(\gamma_{a}\) be any circle inscribed in \(\angle BAC\). Let \(AL\) meet \(\gamma_{a}\) at \(X\) (closer to \(A\)). Let \(\gamma_{a}\) touch \(AC\) and \(AB\) at \(E\) and \(F\), respectively. Let \(AI\) meet \(EF\) at \(G\). Then \(X\), \(G\), \(I\), and \(L\) lie on a circle (Figure 45)._
Proof.: Let \(XL\) meet \(\gamma_{a}\) again at \(L^{\prime}\). Line \(AXL^{\prime}\) is a secant to circle \(\gamma_{a}\), and \(AE\) a tangent. So \(AX\cdot AL^{\prime}=(AE)^{2}\).
Figure 44. blue lines are perpendicular
Figure 45. four points lie on a circle
Triangles \(AGE\) and \(AED\) are similar right triangles, so
\[\frac{AE}{AG}=\frac{AD}{AE}\quad\text{or}\quad(AE)^{2}=AD\cdot AG.\]
Thus,
\[AX\cdot AL^{\prime}=AD\cdot AG\quad\text{or}\quad\frac{AX}{AG}=\frac{AD}{AL^{ \prime}}.\]
From Theorem 3.1, \(DL^{\prime}\parallel IL\), so
\[\frac{AD}{AL^{\prime}}=\frac{AI}{AL}.\]
Therefore,
\[\frac{AX}{AG}=\frac{AI}{AL}\quad\text{or}\quad AX\cdot AL=AG\cdot AI.\]
This implies that \(X\), \(G\), \(I\), and \(L\) lie on a circle.
**Theorem 3.22**.: _Let \(\gamma_{a}\) touch \(AC\) and \(AB\) at \(E\) and \(F\), respectively. Let \(AI\) meet \(EF\) at \(G\). Let \(EF\) meet \(CB\) at \(K\). Then \(X\), \(G\), \(Y^{\prime}\), \(T\), \(I\), \(L\), and \(K\) lie on a circle with diameter \(KI\) (Figure 46)._
Proof.: From Theorem 3.10, \(TK\) passes through \(L^{\prime}\). But from Theorem 3.20, \(TL^{\prime}\perp TI\), so \(\angle KTI\) is a right angle. This means that \(T\) lies on the circle with diameter \(KI\). Since \(\angle ILK\) is also a right angle, this means that \(L\) is also on this circle.
From Theorem 3.6, the circle through \(T\), \(L\), and \(K\) also passes through \(X\).
From Lemma 3.19, the circle through \(X\), \(T\), and \(L\) passes through \(Y^{\prime}\).
Since \(AI\) bisects \(\angle BAC\), \(G\) is the midpoint of \(EF\) and \(AG\perp EF\). Hence \(\angle KGI\) is a right angle. Thus, \(G\) lies on the circle with diameter \(KI\).
Therefore, all seven points lie on the blue circle shown in Figure 46.
See also [8] for a proof that \(X\), \(G\), \(T\), \(I\), \(L\), and \(K\) lie on a circle.
Figure 46. seven points lie on a circle
**Theorem 3.23**.: _We have \(\angle ATD=\angle IXT\) (Figure 47)._
Proof.: By Theorem 3.22, \(X\), \(T\), \(I\), and \(L\) lie on a circle as shown in the figure to the right. Then
\[\angle IXT=\angle ILT\]
because both subtend arc \(\widehat{TI}\). By Theorem 3.15,
\[\angle ILT=\angle ATD\]
where \(D\) is the center of \(\gamma_{a}\) as seen in Figure 47. Thus, \(\angle ATD=\angle IXT\).
## 4. Arcs with a Given Angular Measure
We can generalize many of the results in [30] by replacing the semicircles with arcs having the same angular measure. Let \(\omega_{a}\), \(\omega_{b}\), and \(\omega_{c}\) be arcs with the same angular measure \(\theta\) erected internally on the sides of \(\triangle ABC\) as shown in Figure 48.
Figure 47. green angles are equal
Figure 48. arcs have same angular measure
Throughout this section, we will use the symbols shown in the following table.
**Theorem 4.1**.: _Let the line through \(I\) parallel to \(BE\) meet \(AC\) at \(F\) (Figure 49). Then_
\[\angle IFA=\frac{\theta}{2}.\]
Proof.: Since arc \(\widehat{BC}\) has measure \(\theta\), \(\angle CO_{a}B=\theta\) and the remaining arc on the circle (\(O_{a}\)) outside the triangle must have measure \(360^{\circ}-\theta\). An inscribed angle is measured by half its intercepted arc, so \(\angle BEC=180^{\circ}-\theta/2\). Consequently, \(\angle AEB=\theta/2\) and since \(BE\parallel IF\), we have \(\angle AFI=\theta/2\).
Figure 49.
**Note 1.** If \(\theta<2C\), the figure looks different (Figure 50). In this case, the arc (extended) meets \(AC\) at a point \(E\) such that \(C\) lies between \(A\) and \(E\). In this case, \(\angle CEB\) is measured by half arc \(\widetilde{BC}\) and \(BE\parallel IF\) implies that \(\angle AFI=\theta/2\).
**Note 2.** If \(F\) lies between \(A\) and \(H\) or if \(\theta>2(180^{\circ}-A)\) which causes \(A\) to lie between \(E\) and \(F\), the figure also looks different (Figure 51). In this case, the arc meets \(AC\) (possibly extended) at a point \(E\) such that \(F\) lies between \(E\) and \(H\). In this case, the red arc has measure \(\theta\) and the remaining arc (below \(BC\)) has measure \(360^{\circ}-\theta\). Then \(\angle BEC\) is measured by half that arc and so \(\angle BEC=180^{\circ}-\theta/2\). So \(BE\parallel IF\) implies that \(\angle AFI=\theta/2\).
**Corollary 4.2**.: _Let the line through \(I\) parallel to \(BE\) meet \(AC\) at \(F\). Then_
\[IF=r\csc\frac{\theta}{2}.\]
Figure 50. Case \(\theta<2C\)
Proof.: From right triangle \(IHF\), we have
\[\sin\frac{\theta}{2}=\frac{IH}{IF}=\frac{r}{IF},\]
so \(IF=r\csc\frac{\theta}{2}\).
Let \(\gamma_{a}\) be the circle inside \(\triangle ABC\) tangent to sides \(AB\) and \(AC\) and also tangent to \(\omega_{a}\). The radii of circles \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) are denoted by \(\rho_{a}\), \(\rho_{b}\), and \(\rho_{c}\), respectively.
**Theorem 4.3**.: _We have (Figure 52)_
\[\angle HIK=\frac{\theta}{4}.\]
Proof.: A line through \(I\) parallel to \(BE\) meets \(AC\) at \(F\) (Figure 53).
From Theorem 4.1, \(\angle AFI=\theta/2\). From Theorem 2.10, \(IF=FK\), so \(\triangle KFI\) is isosceles and \(\angle FIK=\angle IKF=(180^{\circ}-\theta/2)/2=90^{\circ}-\theta/4\). From right triangle \(FHI\), we see that \(\angle FIH=90^{\circ}-\theta/2\).
Thus, \(\angle HIK=\angle FIK-\angle FIH=(90^{\circ}-\theta/4)-(90^{\circ}-\theta/2)= \theta/4\).
Figure 52. green angle = \(\theta/4\)
**Theorem 4.4**.: _We have \(\angle DKI=\theta/4\) (Figure 54)._
Proof.: Since \(DK\) and \(IH\) are both perpendicular to \(AC\), we have \(DK\parallel IH\). Thus \(\angle DKI=\angle HIK=\theta/4\).
**Corollary 4.5**.: _The length of the common external tangent between \(\rho_{a}\) and the incircle is \(r\tan\frac{\theta}{4}\)._
Proof.: In Figure 52, we see that \(HK\) is the common external tangent between \(\rho_{a}\) and the incircle. Since \(I\) is the incenter, \(IH=r\). From right triangle \(IHK\), we see that
\[\tan\angle HIK=\tan\frac{\theta}{4}=\frac{HK}{IH}=\frac{HK}{r}\]
and the result follows.
**Note.** If the arc \(\omega_{a}\) gets large enough, point \(A\) will lie inside \(\omega_{a}\) and circle \(\gamma_{a}\) will not exist. However, if we expand the definition of \(\gamma_{a}\) in that case so that it refers to the circle outside \(\triangle ABC\), tangent to sides \(AB\) and \(AC\) extended, and tangent _internally_ to \(\omega_{a}\) as shown in Figure 55, then Theorem 4.3 still holds.
Figure 55. green angle = \(\theta/4\)
**Theorem 4.6** (Ajima's Theorem).: _We have_
\[\rho_{a}=r\left(1-\tan\frac{A}{2}\tan\frac{\theta}{4}\right). \tag{8}\]
Proof.: See Figure 56. By Corollary 4.5,
\[HK=r\tan\frac{\theta}{4}.\]
In right triangle \(AIH\), we have \(\angle IAH=A/2\), so
\[AH=r\cot\frac{A}{2}.\]
Therefore,
\[AK=AH-HK=r\cot\frac{A}{2}-r\tan\frac{\theta}{4}.\]
From right triangle \(AKD\) with \(DK=\rho_{a}\), we have
\[\rho_{a}=AK\tan\frac{A}{2}=\left(r\cot\frac{A}{2}-r\tan\frac{\theta}{4}\right) \tan\frac{A}{2}\]
which is the desired result by the identity \(\tan x\cot x=1\).
The wasan geometer Naonobu Ajima found this result in 1781 (see [13, p. 32]). More info about Ajima's Theorem can be found in [13, pp. 96-97]. It has been said [12, p. 103] that this result is of great importance because it is used in the solution of many Japanese temple geometry problems.
**Corollary 4.7**.: _We have_
\[\rho_{a}=r\left(1-\frac{rt}{p-a}\right).\]
Proof.: This follows immediately from the well-known fact [37] that in Figure 52, \(AH=p-a\), so \(\tan(A/2)=r/(p-a)\).
**Corollary 4.8**.: _We have_
\[\rho_{a}=\frac{\Delta-(p-b)(p-c)t}{p}=r-\frac{(p-b)(p-c)t}{p}.\]
Figure 56.
Proof.: We use the well-known formulas \(r=\frac{\Delta}{p}\) and \(\Delta=\sqrt{(p(p-a)(p-b)(p-c)}\). From Corollary 4.7, we have
\[\rho_{a} =r-\frac{r^{2}t}{p-a}\] \[=r-\left(\frac{\Delta^{2}}{p^{2}}\right)\frac{t}{p-a}\] \[=r-\left(\frac{p(p-a)(p-b)(p-c)}{p^{2}}\right)\frac{t}{p-a}\] \[=r-\frac{(p-b)(p-c)t}{p}\] \[=\frac{\Delta-(p-b)(p-c)t}{p}.\qed\]
Theorem 4.6 remains true, with a sign change, if we allow the extended position for \(\gamma_{a}\).
**Theorem 4.9**.: _Let \(\omega_{a}\) be an arc of a circle with angular measure \(\theta\) that passes through points \(B\) and \(C\) of \(\triangle ABC\). Suppose \(\theta>2(180^{\circ}-A)\) so that \(A\) lies inside \(\omega_{a}\). Let \(\gamma_{a}\) be the circle outside the triangle tangent to sides \(AB\) and \(AC\) extended and also internally tangent to \(\omega_{a}\) as shown in Figure 57. Let \(\rho_{a}\) be the radius of \(\gamma_{a}\). Then_
\[\rho_{a}=-r\left(1-\tan\frac{A}{2}\tan\frac{\theta}{4}\right).\]
In all cases, we could say that
\[\rho_{a}=r\left|1-\tan\frac{A}{2}\tan\frac{\theta}{4}\right|.\]
A unifying discussion about circles tangent to arcs with a given angular measure can be found in [24].
Figure 57.
**Lemma 4.10**.: _For any \(x\),_
\[\sin 2x=\frac{2\tan x}{\tan^{2}x+1}.\]
Proof.: We have
\[\frac{2\tan x}{\tan^{2}x+1}=\frac{2\tan x}{\sec^{2}x}=\frac{2(\sin x)/(\cos x)}{ 1/\cos^{2}x}=2\sin x\cos x=\sin 2x.\qed\]
**Theorem 4.11** (Radius of \(\omega_{a}\)).: _We have_
\[R_{a}=\frac{a}{2}\csc\frac{\theta}{2}.\]
Proof.: Let \(M\) be the foot of the perpendicular from \(O_{a}\) to \(BC\) (Figure 58).
Then \(O_{a}C=R_{a}\) and \(MC=a/2\). We have \(\angle CO_{a}B=\theta\) since the angular measure of the arc is \(\theta\). Thus \(\angle CO_{a}M=\theta/2\) and hence \(\sin(\theta/2)=(a/2)/R_{a}\) and the result follows.
**Corollary 4.12**.: _We have_
\[R_{a}=\frac{a(t^{2}+1)}{4t}. \tag{9}\]
Proof.: This follows from Lemma 4.10.
**Corollary 4.13**.: _We have_
\[R_{a}=\frac{R(t^{2}+1)\sin A}{2t}. \tag{10}\]
Proof.: From the Extended Law of Sines, we have \(a/\sin A=2R\). Substituting \(a=2R\sin A\) into equation (9) gives the desired result.
Figure 58.
**Theorem 4.14**.: _We have (Figure 59)_
\[\frac{AL^{\prime}}{AL}=\frac{\rho_{a}}{r}.\]
Proof.: This follows from the fact that \(L\) and \(L^{\prime}\) are corresponding points in the homothety with center \(A\) that maps \(\gamma_{a}\) into the incircle.
**Theorem 4.15** (Length of \(AL^{\prime}\)).: _We have (Figure 60)_
\[AL^{\prime}=\left(1-\frac{rt}{p-a}\right)\sqrt{\frac{(p-a)[ap-(b-c)^{2}]}{a}}.\]
Proof.: Since \(L\) is the point where the incircle touches \(BC\), \(AL\) is a Gergonne cevian of \(\triangle ABC\). The length of a Gergonne cevian is known. From Property 3.1.3 in [26], we have
\[AL=\sqrt{\frac{(p-a)[ap-(b-c)^{2}]}{a}}. \tag{11}\]
Figure 59.
Figure 60.
Circle \(\gamma_{a}\) and the incircle are homothetic, with \(A\) being the center of the homothety. Since \(L^{\prime}\) and \(L\) are corresponding points of the homothety, we have
\[\frac{AL^{\prime}}{AL}=\frac{\rho_{a}}{r}.\]
Thus, \(AL^{\prime}=(\rho_{a}/r)\cdot AL\). Combining this with the value of \(\rho_{a}/r\) from Corollary 4.7 gives us our result.
**Lemma 4.16**.: _We have \(AK=p-a-rt\)._
Proof.: It is well known that \(AH=p-a\) (Figure 61). From Corollary 4.5, we have \(HK=rt\). Thus, \(AK=AH-HK=p-a-rt\).
**Theorem 4.17** (Length of \(Ax\)).: _We have (Figure 62)._
\[AX=\frac{(p-a-rt)\sqrt{a(p-a)}}{\sqrt{ap-(b-c)^{2}}}.\]
Figure 61.
Figure 62.
Proof.: Since \(AXL^{\prime}\) is a secant to \(\gamma_{a}\) and \(AK\) is a tangent, we have \(AX\cdot AL^{\prime}=(AK)^{2}\). From Lemma 4.16,
\[AK=p-a-rt.\]
From Theorem 4.15, we have
\[AL^{\prime}=\left(\frac{p-a-rt}{p-a}\right)\sqrt{\frac{(p-a)[ap-(b-c)^{2}]}{a}}.\]
So
\[AX =\frac{(AK)^{2}}{AL^{\prime}}\] \[=\frac{(p-a-rt)(p-a)}{\sqrt{\frac{(p-a)[ap-(b-c)^{2}]}{a}}}\] \[=\frac{(p-a-rt)\sqrt{a(p-a)}}{\sqrt{ap-(b-c)^{2}}}.\qed\]
**Corollary 4.18**.: _We have_
\[\frac{AX}{AL^{\prime}}=\frac{a(p-a)}{ap-(b-c)^{2}}.\]
## 5. Barycentric Coordinates
In this section, we will find the barycentric coordinates for various points associated with our configuration.
**Theorem 5.1** (Coordinates for \(D\)).: _The barycentric coordinates for \(D\) are_
\[D=\Big{(}ap(p-a)+(b+c)t\Delta):bp(p-a)-bt\Delta:cp(p-a)-ct\Delta\Big{)}\]
_where \(\Delta\) denotes the area of \(\triangle ABC\), \(p\) denotes the semiperimeter, and \(t=\tan\frac{\theta}{4}\)._
Proof.: Let \(y\) be the distance between \(D\) and \(BC\). Summing the areas of triangles \(DBC\), \(DCA\) and \(DAB\) we obtain
\[ay+b\rho_{a}+c\rho_{a}=2\Delta.\]
Thus,
\[ay=2\Delta-(b+c)\rho_{a}.\]
Letting \([XYZ]\) denote the area of \(\triangle XYZ\), we find that the barycentric coordinates for \(D\) are therefore
\[D =\Big{(}[DBC]:[DCA]:[DAB]\Big{)}=(ay:b\rho_{a}:c\rho_{a})\] \[=(2\Delta-(b+c)\rho_{a}:b\rho_{a}:c\rho_{a})\] \[=\left(\frac{2\Delta}{\rho_{a}}-(b+c):b:c\right).\]
Replacing \(\rho_{a}\) by its value given by Corollary 4.7, we get
\[D =\left(\frac{2\Delta}{r\left(1-\frac{rt}{p-a}\right)}-(b+c):b:c\right)\] \[=\Big{(}\frac{2(p-a)\Delta}{r\left(p-a-rt\right)}-(b+c):b:c\Big{)}\] \[=\Big{(}(b+c-a)\Delta-(b+c)r(p-a-rt):br(p-a-rt):cr(p-a-rt)\Big{)}.\]
Replacing \(r\) by \(\Delta/p\), then multiplying all coordinates by \(p^{2}/\Delta\) gives
\[D=\Big{(}ap(b+c-p)+(b+c)t\Delta:bp(p-a)-bt\Delta:cp(p-a)-ct\Delta\Big{)}.\]
Finally, noting that \(b+c-p=p-a\), gives the desired result.
**Theorem 5.2** (Coordinates for \(O_{a}\)).: _The barycentric coordinates for \(O_{a}\) are_
\[O_{a}=\left(-a^{2}:S_{c}+S\cot\phi:S_{b}+S\cot\phi\right).\]
_where \(\phi=90^{\circ}-\theta/2\), \(S=2\Delta\), \(S_{b}=(c^{2}+a^{2}-b^{2})/2\), and \(S_{c}=(a^{2}+b^{2}-c^{2})/2\)._
Proof.: The result follows from Conway's Formula [40, p. 34].
**Theorem 5.3** (Coordinates for \(T\)).: _The barycentric coordinates for \(T\) are \((T_{x}:T_{y}:T_{z})\) where_
\[T_{x} =2a\sin\frac{\theta}{4}\left(au\cos\frac{\theta}{2}+(b+c)u+2aS\sin \frac{\theta}{2}\right),\] \[T_{y} =-u\left(2\left(a^{2}-bc-c^{2}\right)\cos\frac{\theta}{2}+a^{2}+2 ab-(b+c)^{2}\right)\sin\frac{\theta}{4}\] \[+2S\left(a^{2}+bc-c^{2}\right)\cos\frac{3\theta}{4}+2bS(2a+b-c) \cos\frac{\theta}{4},\] \[T_{z} =-u\left(2\left(a^{2}-bc-b^{2}\right)\cos\frac{\theta}{2}+a^{2}+2 ac-(b+c)^{2}\right)\sin\frac{\theta}{4}\] \[+2S\left(a^{2}+bc-b^{2}\right)\cos\frac{3\theta}{4}+2cS(2a-b+c) \cos\frac{\theta}{4},\]
_where \(S=2\Delta\) and \(u=a^{2}-(b-c)^{2}\)._
Proof.: The barycentric coordinates for \(D\) were found in Theorem 5.1. This can be simplified to \(D=(D_{x}:D_{y}:D_{z})\) where
\[D_{x} =a^{3}-a(b+c)^{2}-2S(b+c)t,\] \[D_{y} =-b\left(-a^{2}+b^{2}+2bc+c^{2}-2St\right),\] \[D_{z} =-c\left(-a^{2}+b^{2}+2bc+c^{2}-2St\right)\]
by using the substitutions \(r=S/(a+b+c)\), \(p=(a+b+c)/2\), and \(\Delta=S/2\).
The barycentric coordinates for \(O_{a}\) were found in Theorem 5.2, namely
\[O_{a}=\left(-a^{2}:S_{c}+S\cot\phi:S_{b}+S\cot\phi\right)\]
where \(\phi=90^{\circ}-\theta/2\), \(S_{b}=(c^{2}+a^{2}-b^{2})/2\), and \(S_{c}=(a^{2}+b^{2}-c^{2})/2\).
From Corollary 4.8, we have
\[\rho_{a}=\frac{S-2(p-b)(p-c)t}{2p}.\]
The radius of \(\omega_{a}\) was found in Theorem 4.11, namely
\[R_{a}=\frac{a}{2}\csc\frac{\theta}{2}.\]
The touch point \(T\) divides the segment \(DO_{a}\) in the ratio \(\rho_{a}:R_{a}\). This fact allows us to use Mathematica to find the barycentric coordinates for \(T\) from the known barycentric coordinates for \(D\) and \(O_{a}\)
## 6. Properties of Three Ajima Circles
Let \(\omega_{a}\), \(\omega_{b}\), and \(\omega_{c}\), be arcs of angular measure \(\theta\) erected internally on the sides of \(\triangle ABC\). Let \(\gamma_{a}\) be the circle inscribed in \(\angle BAC\) and tangent externally to \(\omega_{a}\). Define \(\gamma_{b}\) and \(\gamma_{c}\) similarly. The three circles, \(\gamma_{a}\), \(\gamma_{b}\) and \(\gamma_{c}\) will be called a _general triad of circles_ associated with \(\triangle ABC\) (Figure 63).
For the remainder of this paper, we will assume that the three circles \(\gamma_{a}\), \(\gamma_{b}\) and \(\gamma_{c}\) all lie inside \(\triangle ABC\). An equivalent condition is that all angles of \(\triangle ABC\) have measure less than \(180^{\circ}-\frac{\theta}{2}\).
**Theorem 6.1**.: _The common external tangents to any pair of circles in a general triad are congruent (Figure 64). The common length is \(2r\tan\frac{\theta}{4}\)._
Proof.: The common length is twice \(KH\) (Figure 52) whose value is given by Corollary 4.5.
Figure 63. general triad of circles
**Note.** The theorem remains true if some or all of the yellow circles are outside of \(\triangle ABC\) as shown in Figure 65.
**Theorem 6.2**.: _Let \(M_{a}\), \(M_{b}\), and \(M_{c}\) be the midpoints of the common tangents (lying along the sides of \(\triangle ABC\)) to a general triad of circles associated with that triangle. Then \(M_{a}\), \(M_{b}\), and \(M_{c}\) are the touch points of the incircle of \(\triangle ABC\) with the sides of the triangle (Figure 66)._
Proof.: This follows from Corollary 4.5.
Figure 65. blue lines are congruent
**Theorem 6.3**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with triangle \(\triangle ABC\). Let \(M_{a}\), \(M_{b}\), and \(M_{c}\) be the points where the incircle of \(\triangle ABC\) touches the sides. Then the radical axis of \(\gamma_{b}\), and \(\gamma_{c}\) is \(AM_{a}\) (Figure 67)._
Proof.: From Theorem 6.2, \(M_{a}E_{a}=M_{a}F_{a}\). Thus, the tangents from \(M_{a}\) to \(\gamma_{b}\) and \(\gamma_{c}\) are equal. Since \(AD_{c}=AD_{b}\) and \(D_{c}E_{c}=D_{b}F_{b}\) (Theorem 6.1), this means \(AE_{c}=AF_{b}\). Hence the tangents from \(A\) to \(\gamma_{b}\) and \(\gamma_{c}\) are equal. The radical axis of circles \(\gamma_{b}\) and \(\gamma_{c}\) is the locus of points such that the lengths of the tangents to the two circles from that point are equal. The radical axis of two circles is a straight line. Therefore, the radical axis of circles \(\gamma_{b}\) and \(\gamma_{c}\) is \(AM_{a}\), the Gergonne cevian from \(A\).
**Theorem 6.4**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with triangle \(\triangle ABC\). Then the radical center of the three circles of the triad is the Gergonne point of \(\triangle ABC\) (Figure 68)._
Proof.: By Theorem 6.3, the radical axis of circles \(\gamma_{b}\) and \(\gamma_{c}\) is \(AM_{a}\), the Gergonne cevian from \(A\). Similarly, the radical axis of circles \(\gamma_{a}\) and \(\gamma_{c}\) is the Gergonne cevian from \(B\) and the radical axis of circles \(\gamma_{a}\) and \(\gamma_{b}\) is the Gergonne cevian from \(C\). Hence, the radical center of the general triad of circles is the intersection point of the three Gergonne cevians, namely, the Gergonne point of \(\triangle ABC\).
Figure 67.
**Theorem 6.5**.: _The six points of contact of a general triad of circles lie on a circle with center \(I\), the incenter of \(\triangle ABC\) (Figure 69)._
Proof.: This follows from Theorem 4.3 from which we can deduce that
\[ID_{b}=ID_{c}=IE_{a}=IE_{c}=IF_{a}=IF_{b}=r\sec\frac{\theta}{4}.\qed\]
**Theorem 6.6**.: _Let the centers of \(\omega_{a}\), \(\omega_{b}\), and \(\omega_{b}\), be \(O_{a}\), \(O_{b}\), and \(O_{c}\), respectively. Then \(AO_{a}\), \(BO_{b}\), and \(CO_{c}\) are concurrent (Figure 70)._
Proof.: Note that isosceles triangles \(BCO_{a}\), \(CAO_{b}\), and \(ABO_{c}\) are similar. Therefore \(AO_{a}\), \(BO_{b}\), and \(CO_{c}\) are concurrent by Jacobi's Theorem [38].
Figure 69. touch points are concyclic
**Theorem 6.7** (Paasche Analog).: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with triangle \(\triangle ABC\). Let \(T_{a}\), \(T_{b}\), and \(T_{c}\) be the points where they touch the three arcs having the same angular measure (Figure 71). Then \(AT_{a}\), \(BT_{b}\), and \(CT_{c}\) are concurrent._
Proof.: The barycentric coordinates for \(T_{a}\) were found in Theorem 5.3. The barycentric coordinates for \(A\) are \((1:0:0)\). We can thus find the equation of the line \(AT_{a}\) using formula (3) from [15]. Similarly, we can find the equations for the lines \(BT_{b}\) and \(CT_{c}\). Then, using Mathematica, we can use the condition that three lines are concurrent (formula (6) from [15]) to prove that \(AT_{a}\), \(BT_{b}\) and \(CT_{c}\) are concurrent.
We call this theorem the Paasche Analog because when \(\theta=180^{\circ}\), the point of concurrence is the Paasche point of the triangle [19].
**Open Question 1**.: _Is there a purely geometric proof for Theorem 6.7?_
The coordinates for the point of concurrence are complicated and we do not give them here. However, we did find the following interesting result.
**Theorem 6.8**.: _When \(\theta=120^{\circ}\), the point of concurrence of \(AT_{a}\), \(BT_{b}\), and \(CT_{c}\) is the isogonal conjugate of \(X_{7005}\). When \(\theta=240^{\circ}\), the point of concurrence of \(AT_{a}\), \(BT_{b}\), and \(CT_{c}\) is \(X_{14358}\)._
Figure 71. red lines are concurrent
## 7. Some Metric Identities
Throughout this section, we will let
\[t=\tan\frac{\theta}{4}\]
and
\[\mathbb{W}=\frac{4R+r}{p}.\]
The following three identities were given in [29, Lemma 3] and we will need them here as well.
**Lemma 7.1**.: _Let \(A\), \(B\), and \(C\) be the angles of a triangle with inradius \(r\), circumradius \(R\), and semiperimeter \(p\). Then_
\[\tan\frac{A}{2}+\tan\frac{B}{2}+\tan\frac{C}{2}=\mathbb{W}.\]
**Lemma 7.2**.: _Let \(A\), \(B\), and \(C\) be the angles of a triangle. Then_
\[\tan\frac{A}{2}\tan\frac{B}{2}+\tan\frac{B}{2}\tan\frac{C}{2}+\tan\frac{C}{2} \tan\frac{A}{2}=1.\]
**Lemma 7.3**.: _Let \(A\), \(B\), and \(C\) be the angles of a triangle with inradius \(r\) and semiperimeter \(p\). Then_
\[\tan\frac{A}{2}\tan\frac{B}{2}\tan\frac{C}{2}=\frac{r}{p}.\]
From Theorem 4.6,
\[\rho_{a}=r\left(1-t\tan\frac{A}{2}\right),\]
so
\[r-\rho_{a}=rt\tan\frac{A}{2} \tag{12}\]
with similar formulas for \(r-\rho_{b}\) and \(r-\rho_{c}\). Also,
\[\tan\frac{A}{2}=\frac{r-\rho_{a}}{rt} \tag{13}\]
with similar formulas for \(\tan\frac{B}{2}\) and \(\tan\frac{C}{2}\). Using equation (12) gives us the following corollary to these lemmas.
**Corollary 7.4**.: _For a general triad of circles associated with \(\triangle ABC\), we have_
\[\sum(r-\rho_{a}) =rt\mathbb{W}, \tag{15}\] \[\sum(r-\rho_{a})(r-\rho_{b}) =r^{2}t^{2},\] (16) \[\prod(r-\rho_{a}) =\frac{r^{4}t^{3}}{p}. \tag{14}\]
**Theorem 7.5**.: _For a general triad of circles associated with \(\triangle ABC\), we have_
\[\rho_{a}+\rho_{b}+\rho_{c}=3r-rt\mathbb{W}.\]
Proof.: This follows immediately from equation (14).
When \(\theta=180^{\circ}\), the arcs become semicircles, \(t=1\), and this result agrees with formula (6) in [29].
**Theorem 7.6**.: _For a general triad of circles associated with \(\triangle ABC\), we have_
\[3r^{2}-2r\sum\rho_{a}+\sum\rho_{a}\rho_{b}=r^{2}t^{2}.\]
Proof.: Expanding the left side of equation (15) gives the desired result.
**Theorem 7.7**.: _For a general triad of circles associated with \(\triangle ABC\), we have_
\[\rho_{a}\rho_{b}+\rho_{b}\rho_{c}+\rho_{c}\rho_{a}=r^{2}\left(t^{2}-2t\mathbb{ W}+3\right).\]
Proof.: From Theorem 7.6, we have
\[3r^{2}-2r\sum\rho_{a}+\sum\rho_{a}\rho_{b}=r^{2}t^{2}.\]
Using Theorem 7.5, we get
\[3r^{2}-2r\left(3r-rt\mathbb{W}\right)+\sum\rho_{a}\rho_{b}=r^{2}t^{2}.\]
Thus,
\[\sum\rho_{a}\rho_{b}=r^{2}t^{2}+2r\left(3r-rt\mathbb{W}\right)-3r^{2}\]
which simplifies to
\[\sum\rho_{a}\rho_{b}=r^{2}t^{2}-2r^{2}t\mathbb{W}+3r^{2}\]
as desired.
When \(\theta=180^{\circ}\), the arcs become semicircles and this result agrees with formula (7) in [29].
**Theorem 7.8**.: _We have_
\[\rho_{a}^{2}+\rho_{b}^{2}+\rho_{c}^{2}=r^{2}\left[3-2t\mathbb{W}+\left( \mathbb{W}^{2}-2\right)t^{2}\right]\]
Proof.: Using the identity
\[\left(\sum\rho_{a}\right)^{2}=\sum\rho_{a}^{2}+2\sum\rho_{a}\rho_{b},\]
we find that
\[\rho_{a}^{2}+\rho_{b}^{2}+\rho_{c}^{2}=\left(3r-rt\mathbb{W}\right)^{2}-2r^{2} \left(t^{2}-2t\mathbb{W}+3\right).\]
Simplifying gives
\[\rho_{a}^{2}+\rho_{b}^{2}+\rho_{c}^{2}=r^{2}\left[3-2t\mathbb{W}+\left( \mathbb{W}^{2}-2\right)t^{2}\right]\]
which is the desired result.
When \(\theta=180^{\circ}\), the arcs become semicircles and this result agrees with formula (8) in [29].
**Theorem 7.9**.: _For a general triad of circles associated with \(\triangle ABC\), we have_
\[\rho_{a}\rho_{b}\rho_{c}=r^{3}\left(1-t\mathbb{W}+t^{2}-\frac{r}{p}t^{3} \right).\]
Proof.: Start with equation (16). Expand and use Theorems 7.5 and 7.7 to substitute known values for \(\sum\rho_{a}\) and \(\sum\rho_{a}\rho_{b}\). Solving for \(\rho_{a}\rho_{b}\rho_{c}\) then gives the desired formula.
The following result was found empirically using the program "OK Geometry".3
Footnote 3: OK Geometry is a tool for analyzing dynamic geometric constructions, developed by Zlatan Magajna which can be freely downloaded from [https://www.ok-geometry.com/](https://www.ok-geometry.com/).
**Theorem 7.10**.: _We have_
\[a^{2}\rho_{a}^{2}(2r-\rho_{a})^{2}+16r(r-\rho_{a})(rR-rR_{a}-\rho_{a}R)(rR-rRa+ \rho_{a}R_{a})=0.\]
Proof.: Starting with the left side of the equation, we make the following substitutions, in succession.
\[\rho_{a} =r\left(1-\frac{r}{p-a}\tan\frac{\theta}{4}\right)\] \[R =\frac{abc}{4\Delta}\] \[r =\frac{\Delta}{p}\] \[\Delta =\sqrt{p(p-a)(p-b)(p-c)}\] \[p =\frac{a+b+c}{2}\] \[R_{a} =\frac{a}{2\cos(90^{\circ}-\frac{\theta}{2})}\]
Simplifying the resulting expression using Mathematica, we find that the expression is equal to \(0\).
In some special cases, this formula can be simplified.
**Theorem 7.11**.: _If \(\theta=360^{\circ}-4A\), then_
\[\rho_{a}=\frac{2rR_{a}}{R+2R_{a}}.\]
Proof.: The proof is the same as the proof of Theorem 7.10.
For a fixed \(\theta\), we can find a relationship between \(r\), \(R\), \(R_{a}\) and \(\rho_{a}\), not involving \(a\).
**Theorem 7.12**.: _We have_
\[\frac{R_{a}}{rR}=\frac{(r-\rho_{a})(1+t^{2})}{(r-\rho_{a})^{2}+r^{2}t^{2}}. \tag{17}\]
Proof.: This follows by eliminating \(\tan(A/2)\) from equations (8) and (10). The expression \(\sin A\) is expressed in terms of \(\tan(A/2)\) using Lemma 4.10.
Solving for \(t^{2}\) in equation (17) gives us the following.
**Theorem 7.13**.: _We have_
\[t^{2}=\frac{(r-\rho_{a})(\rho_{a}R_{a}+rR-rR_{a})}{r(R\rho_{a}+rR_{a}-rR)}.\]
## 8. Apollonius Circles of the Three Ajima Circles
A circle that is tangent to three given circles is called an _Apollonius circle_ of those three circles.
If all three circles lie inside an Apollonius circle, then the Apollonius circle is called the _outer Apollonius circle_ of the three circles. The outer Apollonius circle surrounds the three circles and is internally tangent to all three.
If all three circles lie outside an Apollonius circle, then the Apollonius circle is called the _inner Apollonius circle_ of the three circles. The inner Apollonius circle will either be internally tangent to the three given circles or it will be externally tangent to all the circles. Figure 72 shows various configurations. In each case, the red circle is the inner Apollonius circle of the three blue circles.
We will be looking at the inner and outer Apollonius circles of a general triad of circles associated with \(\triangle ABC\). But first, let us review some known facts about tangent circles.
**Lemma 8.1**.: _Let \(U(r_{1})\) and \(V(r_{2})\) be two circles in the plane. Let \(S\) be a center of similarity of the two circles (Figure 73). Then_
\[\frac{US}{SV}=\frac{r_{1}}{r_{2}}.\]
Proof.: The line of centers of two circles points passes through the center of similitude. So \(S\) lies on \(UV\). In a similarity, corresponding distances in two similar figures are in proportion to their ratio of similitude. Their ratio of similitude is the ratio of their radii, namely \(r_{1}/r_{2}\). So \(SU/SV=r_{1}/r_{2}\).
Figure 72. inner Apollonius circle of three circles
When we say that a circle is inscribed in an angle \(ABC\), we mean that the circle is tangent to the rays \(\overrightarrow{BA}\) and \(\overrightarrow{BC}\).
The following result comes from [25, Theorem 2].
**Lemma 8.2**.: _Let \(C_{a}\) be an arbitrary circle inscribed in \(\angle BAC\) of \(\triangle ABC\). Let \(C_{b}\) be an arbitrary circle inscribed in \(\angle CBA\). Let \(C_{c}\) be an arbitrary circle inscribed in \(\angle ACB\). Let \(S\) be the inner (respectively outer) Apollonius circle of \(C_{a}\), \(C_{b}\), and \(C_{c}\). Let \(T_{a}\) be the point where \(C_{a}\) touches \((S)\). Define \(T_{b}\) and \(T_{c}\) similarly. Then \(AT_{a}\), \(BT_{b}\), and \(CT_{c}\) are concurrent at a point \(P\) (Figure 74). The point \(P\) is the internal (external) center of similitude of the incircle of \(\triangle ABC\) and circle \((S)\)._
The following lemma comes from [7, p. 85].
**Lemma 8.3**.: _If two circles touch two others, then the radical axis of either pair passes through a center of similitude of the other pair (Figure 75)._
The following lemma comes from Gergonne's construction of Apollonius circles. (See [11, pp. 159-160].)
**Lemma 8.4**.: _Let \((O_{1})\), \((O_{2})\), and \((O_{3})\) be three circles in the plane. Let \(C_{i}\) be the inner Apollonius circle of the circles \((O_{1})\), \((O_{2})\), and \((O_{3})\). Let \(C_{o}\) be the outer Apollonius circle of the circles \((O_{1})\), \((O_{2})\), and \((O_{3})\). Let \(U_{1}\) be the point where \(C_{i}\) touches \((O_{1})\). Define \(U_{2}\) and \(U_{3}\) similarly. Let \(V_{1}\) be the point where \(C_{o}\) touches \((O_{1})\). Define \(V_{2}\) and \(V_{3}\) similarly. Then \(V_{1}U_{1}\), \(V_{2}U_{2}\), and \(V_{3}U_{3}\) are concurrent at the radical center, \(R\), of \((O_{1})\), \((O_{2})\), and \((O_{3})\) (Figure 76)._
**Theorem 8.5**.: _Let \(C_{1}\), \(C_{2}\), and \(C_{3}\) be three circles as shown in Figure 77. Let \(U(\rho_{i})\) and \(V(\rho_{o})\) be the inner and outer Apollonius circles of \(C_{1}\), \(C_{2}\), and \(C_{3}\), respectively. Let \(S\) be the radical center of the three circles. Then \(S\) lies on \(UV\) and_
\[\frac{SU}{SV}=\frac{\rho_{i}}{\rho_{o}}.\]
Proof.: Circles \(C_{1}\) and \(C_{2}\) each touch circles \(C_{i}\) and \(C_{o}\). By Lemma 8.3, the radical axis of \(C_{1}\) and \(C_{2}\) passes through a center of similarity, \(S^{*}\), of \(C_{i}\) and \(C_{o}\). Similarly, the radical axis of \(C_{2}\) and \(C_{3}\) passes through \(S^{*}\). These two radical axes meet at \(S\), so \(S=S^{*}\).
Note that \(C_{i}\) and \(C_{o}\) are two circles with center of similarity \(S\). By Lemma 8.1, \(S\) lies on \(UV\) and \(SU/SV=\rho_{i}/\rho_{o}\).
Figure 76.
Figure 77. \(SU/SV=\rho_{i}/\rho_{o}\)
The following results were found via complex calculations carried out with Mathematica. The details are omitted.
**Theorem 8.6** (Coordinates for \(U_{a}\)).: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Let \(U_{a}\) be the point where \(C_{i}\) touches \(\gamma_{a}\). Then the barycentric coordinates for \(U_{a}\) are \((x:y:z)\) where_
\[x =2a(p-b)(p-c)t\] \[y =(p-c)[S-2(p-b)(p-c)t]\] \[z =(p-b)[S-2(p-b)(p-c)t]\]
_and where \(p\) is the semiperimeter of \(\triangle ABC\), \(S\) is twice the area, and \(t=\tan(\theta/4)\)._
The coordinates for \(U_{b}\) and \(U_{c}\) are similar.
**Theorem 8.7** (Coordinates for \(U\)).: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Let \(U\) be the center of \(C_{i}\). Then the barycentric coordinates for \(U\) are \((X:Y:Z)\) where_
\[X =\left(-2a^{3}+a^{2}(b+c)+(b-c)^{2}(b+c)\right)t-2aS\] \[Y =\left(a^{3}-a^{2}c+a\left(b^{2}-c^{2}\right)+c^{3}+b^{2}c-2b^{3} \right)t-2bS\] \[Z =\left(a^{3}-a^{2}b+a\left(c^{2}-b^{2}\right)+b^{3}+bc^{2}-2c^{3} \right)t-2cS\]
_and where \(S\) is twice the area of \(\triangle ABC\) and \(t=\tan(\theta/4)\)._
**Theorem 8.8** (Coordinates for \(V_{a}\)).: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{o}\) be the outer Apollonius circle of the circles in the triad. Let \(V_{a}\) be the point where \(C_{o}\) touches \(\gamma_{a}\). Then the barycentric coordinates for \(V_{a}\) are \((x:y:z)\) where_
\[x =2(p-b)(p-c)[2S+a(p-a)t]\] \[y =(p-a)(p-c)[S-2(p-b)(p-c)t]\] \[z =(p-a)(p-b)[S-2(p-b)(p-c)t]\]
_and where \(p\) is the semiperimeter of \(\triangle ABC\), \(S\) is twice the area, and \(t=\tan(\theta/4)\)._
The coordinates for \(V_{b}\) and \(V_{c}\) are similar.
**Theorem 8.9** (Coordinates for \(V\)).: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{o}\) be the outer Apollonius circle of the circles in the triad. Let \(V\) be the center of \(C_{o}\). Then the barycentric coordinates for \(V\) are \((X:Y:Z)\) where_
\[X =\left(-2a^{3}+a^{2}(b+c)+(b-c)^{2}(b+c)\right)t+6aS\] \[Y =\left(a^{3}-a^{2}c+a\left(b^{2}-c^{2}\right)+c^{3}+b^{2}c-2b^{3} \right)t+6bS\] \[Z =\left(a^{3}-a^{2}b+a\left(c^{2}-b^{2}\right)+b^{3}+bc^{2}-2c^{3} \right)t+6cS\]
_and where \(S\) is twice the area of \(\triangle ABC\) and \(t=\tan(\theta/4)\)._
**Theorem 8.10**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Let \(U_{a}\) be the point where \(C_{i}\) touches \(\gamma_{a}\). Define \(U_{b}\) and \(U_{c}\) similarly. Then \(A\), \(U_{a}\), and \(G_{e}\) are collinear (Figure 78). Similarly, \(B\), \(U_{b}\), and \(G_{e}\) are collinear; and \(C\), \(U_{c}\), and \(G_{e}\) are collinear._
Proof.: By symmetry, it suffices to prove that \(A\), \(U_{a}\), and \(G_{e}\) are collinear. The barycentric coordinates for \(A\) are \((1:0:0)\). The barycentric coordinates for \(G_{e}\) are well known to be
\[G_{e}=\left(\frac{1}{b+c-a}:\frac{1}{c+a-b}:\frac{1}{a+b-c}\right).\]
The barycentric coordinates for \(U_{a}\) were given in Theorem 8.6. Using these coordinates and the condition for three points to be collinear (formula (4) from [15]), it is straightforward to confirm that \(A\), \(U_{a}\), and \(G_{e}\) are collinear.
**Open Question 2**.: _Is there a purely geometric proof for Theorem 8.10?_
**Corollary 8.11**.: _The point we called \(L^{\prime}\) in Section 3 (the intersection of \(AL\) with \(\gamma_{a}\) nearer \(L\)) coincides with \(U_{a}\), the point where the inner Apollonius circle touches \(\gamma_{a}\)._
Figure 78. lines concur at the Gergonne point
**Theorem 8.12**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{o}\) be the outer Apollonius circle of the circles in the triad. Let \(V_{a}\) be the point where \(C_{o}\) touches \(\gamma_{a}\). Define \(V_{b}\) and \(V_{c}\) similarly. Then \(A\), \(V_{a}\), and \(G_{e}\) are collinear (Figure 79). Similarly, \(B\), \(V_{b}\), and \(G_{e}\) are collinear; and \(C\), \(V_{c}\), and \(G_{e}\) are collinear._
Proof.: By Lemma 8.4, \(U_{a}V_{a}\), \(U_{b}V_{b}\), and \(U_{c}V_{c}\) concur at the radical center of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). By Theorem 6.4, this radical center is the Gergonne point of the triangle. So \(V_{a}\) lies on \(G_{e}U_{a}\). By Theorem 8.10, \(A\), \(U_{a}\), and \(G_{e}\) are collinear. So \(A\) lies on \(G_{e}U_{a}\). Since both \(A\) and \(V_{a}\) lie on \(G_{e}U_{a}\), we see that \(V_{a}\) lies on \(AU_{a}\). Similarly, \(V_{b}\) lies on \(BU_{b}\) and \(V_{c}\) lies on \(CU_{c}\).
**Corollary 8.13**.: _The point we called \(X\) in Section 3 (the intersection of \(AL\) with \(\gamma_{a}\) nearer \(A\)) coincides with \(V_{a}\), the point where the outer Apollonius circle touches \(\gamma_{a}\)._
Combining Theorems 8.10 and 8.12 lets us state the following result.
**Theorem 8.14**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Let \(C_{o}\) be the outer Apollonius circle of the circles in the triad. Let \(U_{a}\) be the point where \(C_{i}\) touches \(\gamma_{a}\). Define \(U_{b}\) and \(U_{c}\) similarly. Let \(V_{a}\) be the point where \(C_{o}\) touches \(\gamma_{a}\). Define \(V_{b}\) and \(V_{c}\) similarly. Then \(A\), \(V_{a}\), and \(U_{a}\) are collinear. Similarly, \(B\), \(V_{b}\), and \(U_{b}\) and \(C\), \(V_{c}\), and \(U_{c}\) are collinear. The three lines meet at \(G_{e}\), the Gergonne point of \(\triangle ABC\) (Figure 80)._
Figure 79. lines concur at the Gergonne point
**Theorem 8.15**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Let \(U_{a}\) be the point where \(C_{i}\) touches \(\gamma_{a}\). Let \(L\) be the point where the incircle of \(\triangle ABC\) touches \(BC\) (Figure 81). Then \(A\), \(U_{a}\), and \(L\) are collinear._
Proof.: By Theorem 8.14, \(AU_{a}\) passes through \(G_{e}\), the Gergonne point of \(\triangle ABC\). But by definition, \(AL\) also passes through \(G_{e}\). Thus, \(AU_{a}\) coincides with \(AL\).
This gives us an easy way to construct the inner Apollonius circle of a general triad of circles. Let the incircle of \(\triangle ABC\) touch \(BC\) at \(L\). Then \(AL\) meets \(\gamma_{a}\) (closer to \(L\)) at \(U_{a}\). Construct \(U_{b}\) and \(U_{c}\) in the same manner. Then the circumcircle of \(\triangle U_{a}U_{b}U_{c}\) is the inner Apollonius circle.
To construct the outer Apollonius circle, find the point \(V_{a}\) where \(AL\) meets \(\gamma_{a}\) (closer to \(A\)). Construct \(V_{b}\) and \(V_{c}\) in the same manner. Then the circumcircle of \(\triangle V_{a}V_{b}V_{c}\) is the outer Apollonius circle.
Figure 80. lines concur at the Gergonne point
**Theorem 8.16**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Let \(U_{a}\) be the point where \(C_{i}\) touches \(\gamma_{a}\). Let \(t_{a}\) be the tangent to \(\gamma_{a}\) at \(U_{a}\). Define \(t_{b}\) and \(t_{c}\) similarly (Figure 82). Then \(t_{a}\), \(t_{b}\), and \(t_{c}\) form a triangle homothetic to \(\triangle ABC\). The center of the homothety is \(G_{e}\), the Gergonne point of \(\triangle ABC\)._
Proof.: By Theorem 3.2, the tangent to \(\gamma_{a}\) at \(U_{a}\) is parallel to \(BC\). So \(t_{a}\parallel BC\), \(t_{b}\parallel CA\), and \(t_{c}\parallel AB\). Thus, the triangle formed by \(t_{a}\), \(t_{b}\), and \(t_{c}\), is similar to \(\triangle ABC\). Let \(A^{\prime}\), \(B^{\prime}\), and \(C^{\prime}\) be the vertices of this triangle. By a well-known theorem [18, Art. 24], this implies that \(\triangle ABC\) is homothetic to \(\triangle A^{\prime}B^{\prime}C^{\prime}\) (with \(A\) mapping to \(A^{\prime}\), \(B\) to \(B^{\prime}\), and \(C\) to \(C^{\prime}\)). Let \(L\), \(M\), and \(N\) be the points where the incircle of \(\triangle ABC\) touches \(BC\), \(CA\), and \(AB\) (Figure 83).
The homothety maps the incircle of \(\triangle ABC\) into the incircle of \(\triangle A^{\prime}B^{\prime}C^{\prime}\), and the touch points into the touch points, i.e. \(L\) maps to \(U_{a}\), \(M\) maps to \(U_{b}\), and \(N\) maps to \(U_{c}\). Hence, the center of the homothety is the point of concurrence of lines \(LU_{a}\), \(MU_{b}\), and \(NU_{c}\). By Theorem 8.15, the line \(LU_{a}\) coincides with the line \(AL\), the line \(MU_{b}\) coincides with \(BM\), and the line \(NU_{c}\) coincides with line \(CN\). Therefore, the center of the homothety is \(G_{e}\), the Gergonne point of \(\triangle ABC\).
Figure 82. red triangle is homothetic to \(\triangle ABC\)
**Corollary 8.17**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(C_{i}\) be the inner Apollonius circle of the circles in the triad. Then \(C_{i}\) and the incircle of \(\triangle ABC\) are homothetic with \(G_{e}\) as the center of the homothety (Figure 84)._
**Theorem 8.18**.: _For a general triad of circles associated with \(\triangle ABC\), let \(U(\rho_{i})\) be the inner Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) (Figure 85). Let \(G_{e}\) be the Gergonne point of \(\triangle ABC\). Then_
\[\frac{G_{e}U}{G_{e}I}=\frac{\rho_{i}}{r}.\]
Proof.: Let the touch points of circle (\(U\)) with \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be \(U_{a}\), \(U_{b}\), and \(U_{c}\), respectively. By Lemma 8.2, \(AU_{a}\), \(BU_{b}\), and \(CU_{c}\) concur at a point \(P\) that is a center of similitude of circle (\(U\)) and the incircle, (\(I\)). By Theorem 8.14, \(P=G_{e}\). Note that (\(U\)) and (\(I\)) are two circles with center of similarity \(G_{e}\). By Lemma 8.1, \(G_{e}U/G_{e}I=\rho_{i}/r\).
Figure 84. \(G_{e}\) is the center of similarity between the incircle and \(C_{i}\)
**Theorem 8.19**.: _For a general triad of circles associated with \(\triangle ABC\), let \(V(\rho_{o})\) be the outer Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) (Figure 86). Let \(G_{e}\) be the Gergonne point of \(\triangle ABC\). Then_
\[\frac{G_{e}V}{G_{e}I}=\frac{\rho_{o}}{r}.\]
Proof.: The proof is the same as the proof of Theorem 8.18.
**Theorem 8.20**.: _For a general triad of circles associated with \(\triangle ABC\), let \(U(\rho_{i})\) and \(V(\rho_{o})\) be the inner and outer Apollonius circles of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\), respectively. Let \(G_{e}\) be the Gergonne point of \(\triangle ABC\). Then_
\[\frac{G_{e}V}{G_{e}U}=\frac{\rho_{o}}{\rho_{i}}.\]
Proof.: This follows from Theorem 8.5. It also follows from Theorems 8.18 and 8.19.
**Theorem 8.21**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with triangle \(\triangle ABC\). A circle externally tangent to each circle of the triad touches \(\gamma_{a}\) at \(U_{a}\). Then the tangents from \(U_{a}\) to \(\gamma_{b}\) and \(\gamma_{c}\) have the same length (Figure 88)._
Proof.: By Theorem 8.14, \(U_{a}\) lies on the Gergonne cevian from vertex \(A\). By Theorem 6.3, this Gergonne cevian is the radical axis of circles \(\gamma_{b}\) and \(\gamma_{c}\). Thus, the two tangents have the same length.
**Theorem 8.22** (Miyamoto Analog).: _For a general triad of circles associated with \(\triangle ABC\), the inner Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), \(\gamma_{c}\) (blue circle in Figure 89), is internally tangent to the inner Apollonius circle of \(\omega_{a}\), \(\omega_{b}\), \(\omega_{c}\) (green circle in Figure 89)._
Proof.: This is a special case of the following theorem which is stated in [20].
Figure 88. red tangent lengths are equal
**Theorem 8.23** (Miyamoto Generalization).: _Let \(\omega_{a}\) be any arc erected internally on side \(BC\) of \(\triangle ABC\). Let \(\gamma_{a}\) be the circle that is inside \(\triangle ABC\), tangent to \(AB\) and \(AC\), and tangent externally to \(\omega_{a}\). Define \(\omega_{b}\), \(\omega_{c}\), \(\gamma_{b}\), and \(\gamma_{c}\) similarly. Then the inner Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), \(\gamma_{c}\) (blue circle in Figure 90), is internally tangent to the inner Apollonius circle of \(\omega_{a}\), \(\omega_{b}\), \(\omega_{c}\) (green circle in Figure 90)._
**Lemma 8.24**.: _We have_
\[r\mathbb{W}=\frac{2ab+2bc+2ca-a^{2}-b^{2}-c^{2}}{2(a+b+c)}.\]
Proof.: Recall that \(\mathbb{W}=(4R+r)/p\). We use the well-known identities \(r=\Delta/p\), \(R=abc/(4\Delta)\), and \(\Delta=\sqrt{p(p-a)(p-b)(p-c)}\). Then we have
\[r\mathbb{W} =r\left(\frac{4R+r}{p}\right)\] \[=\left(\frac{\Delta}{p}\right)\left(4\cdot\frac{abc}{4\Delta}+ \frac{\Delta}{p}\right)\Big{/}\ p\] \[=\left(\frac{abc}{p}+\frac{\Delta^{2}}{p^{2}}\right)\Big{/}\ p\] \[=\frac{1}{p}\left(\frac{abc}{p}+\frac{p(p-a)(p-b)(p-c)}{p^{2}} \right).\]
Letting \(p=(a+b+c)/2\) and simplifying, gives
\[r\mathbb{W}=\frac{2ab+2bc+2ca-a^{2}-b^{2}-c^{2}}{2(a+b+c)}.\qed\]
Figure 90. green and blue circles touch at \(P\). Red arcs have different angular measures.
**Lemma 8.25** (Length of \(AG_{e}\)).: _We have_
\[AG_{e}=\frac{(p-a)\sqrt{a(p-a)[ap-(b-c)^{2}]}}{pr\mathbb{W}}.\]
Proof.: The distance from a vertex of a triangle to its Gergonne point is known. From Property 2.1.1 in [26], we have
\[AG_{e}=\frac{(b+c-a)\sqrt{a(b+c-a)[2ap-2(b-c)^{2}]}}{2ab+2bc+2ca-a^{2}-b^{2}-c^ {2}}. \tag{18}\]
From Lemma 8.24, this can be written as
\[AG_{e}=\frac{(b+c-a)\sqrt{a(b+c-a)[2ap-2(b-c)^{2}]}}{4pr\mathbb{W}}.\]
Noting that \(b+c-a=2(p-a)\), gives us our result.
**Lemma 8.26**.: _Let the touch points of the incircle of \(\triangle ABC\) with its sides be \(L\), \(M\), and \(N\), as shown in Figure 91. Then_
\[\frac{LG_{e}}{AG_{e}}=\frac{(p-b)(p-c)}{a(p-a)}.\]
Proof.: By definition, \(G_{e}\) is the intersection of \(AL\) and \(BM\). Applying Menelaus' Theorem to \(\triangle ALC\) with transversal \(BM\) gives
\[AG_{e}\cdot BL\cdot CM=LG_{e}\cdot BC\cdot AM\]
or
\[(AG_{e})(p-b)(p-c)=(LG_{e})(a)(p-a)\]
which is equivalent to our desired result.
**Corollary 8.27**.: _With the same terminology,_
\[\frac{AL}{AG_{e}}=\frac{AG_{e}+LG_{e}}{AG_{e}}=1+\frac{LG_{e}}{AG_{e}}=1+\frac {(p-b)(p-c)}{a(p-a)}.\]
**Lemma 8.28**.: _We have_
\[\mathbb{W}=\frac{r}{p-a}\left(\frac{a(p-a)}{(p-b)(p-c)}+1\right).\]
Figure 91.
Proof.: Using the well known formulas \(\Delta^{2}=p(p-a)(p-b)(p-c)\) and \(r=\Delta/p\), we get
\[\frac{r}{p-a}\left(\frac{a(p-a)}{(p-b)(p-c)}+1\right) =\frac{ar}{(p-b)(p-c)}+\frac{r}{p-a}\] \[=r\cdot\frac{a(p-a)+(p-b)(p-c)}{(p-a)(p-b)(p-c)}\] \[=\frac{\Delta}{p}\cdot\frac{a(p-a)+(p-b)(p-c)}{(p-a)(p-b)(p-c)}\] \[=\frac{a(p-a)+(p-b)(p-c)}{\Delta}\] \[=\frac{a\cdot\frac{b+c-a}{2}+\frac{(a-b+c)(a+b-c)}{4}}{\Delta}\] \[=\frac{2ab+2bc+2ac-a^{2}-b^{2}-c^{2}}{4\Delta}\] \[=\frac{4pr\mathbb{W}}{4\Delta}\] (by Lemma 8.24) \[=\mathbb{W}. \qed\]
**Theorem 8.29** (Radius of Inner Apollonius Circle).: _Let \(\rho_{i}\) be the radius of the inner Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). Then_
\[\rho_{i}=rt\mathbb{W}-r.\]
Proof.: By Corollary 8.17, \(C_{i}\) and the incircle are homothetic with \(G_{e}\) being the external center of similitude. Under this homothety, \(U_{a}\) maps to \(L\). Thus
\[\frac{G_{e}U_{a}}{G_{e}L}=\frac{\rho_{i}}{r}.\]
We can write this as
\[\frac{\rho_{i}}{r}=\frac{AG_{e}-AU_{a}}{AL-AG_{e}}=\frac{AG_{e}-AL\cdot\frac{ \rho_{a}}{r}}{AL-AG_{e}}=\frac{1-\frac{AL}{AG_{e}}\cdot\frac{\rho_{a}}{r}}{ \frac{AL}{AG_{e}}-1}\]
because \(AU_{a}=AL\cdot\frac{\rho_{a}}{r}\) (from Theorem 4.14). Now
\[\frac{AL}{AG_{e}}=\frac{AG_{e}+LG_{e}}{AG_{e}}=1+\frac{LG_{e}}{AG_{e}}.\]
From Lemma 8.26, we have
\[\frac{LG_{e}}{AG_{e}}=\frac{(p-b)(p-c)}{a(p-a)}\]
so
\[\frac{\rho_{i}}{r}=\frac{1-(1+\frac{(p-b)(p-c)}{a(p-a)})\cdot\frac{\rho_{a}}{ r}}{\frac{(p-b)(p-c)}{a(p-a)}}\]
which is equivalent to
\[\frac{\rho_{i}}{r}+1=\left(1-\frac{\rho_{a}}{r}\right)\left(\frac{a(p-a)}{(p-b )(p-c)}+1\right).\]
We also know that
\[1-\frac{\rho_{a}}{r}=\frac{rt}{p-a}\]
from Corollary 4.7. Thus
\[\frac{\rho_{i}}{r}+1=\frac{rt}{p-a}\left(\frac{a(p-a)}{(p-b)(p-c)}+1\right).\]
By Lemma 8.28, this reduces to
\[\frac{\rho_{i}}{r}+1=t\mathbb{W},\]
so \(\rho_{i}/r=t\mathbb{W}-1\) or \(\rho_{i}=rt\mathbb{W}-r\).
Note that \(r_{i}\) will be negative if the inner Apollonius circle is internally tangent to \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). This will happen if \(t\mathbb{W}<1\).
**Open Question 3**.: _Is there a simpler proof of Theorem 8.29?_
**Corollary 8.30**.: _We have_
\[\rho_{i}=2r-(\rho_{a}+\rho_{b}+\rho_{c}).\]
Proof.: From Theorem 7.5, we have \(rt\mathbb{W}=3r-(\rho_{a}+\rho_{b}+\rho_{c}).\) Therefore, we have \(\rho_{i}=rt\mathbb{W}-r=3r-(\rho_{a}+\rho_{b}+\rho_{c})-r=2r-(\rho_{a}+\rho_{b }+\rho_{c})\).
**Theorem 8.31**.: _The circles \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) meet in a point (Figure 92) if and only if \(t=1/\mathbb{W}\)._
Proof.: The three circles concur if and only if the radius of the inner Apollonius circle is \(0\), that is, when \(\rho_{i}=0\). By Theorem 8.29, \(\rho_{i}=rt\mathbb{W}-r.\) So \(\rho_{i}=0\) if and only if \(r=rt\mathbb{W}\) or \(1=t\mathbb{W}\) since \(r>0\). In other words, when \(t=1/\mathbb{W}\).
**Corollary 8.32**.: _If \(t=1/\mathbb{W}\), the circles \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) all pass through \(G_{e}\), the Gergonne point of \(\triangle ABC\)._
Proof.: The common chord of each pair of circles is the radical axis of those two circles. Since the three common chords meet at the point of concurrence of the three circles, this point must be the radical center of the three circles. By Theorem 6.4, this is the Gergonne point of \(\triangle ABC\).
**Theorem 8.33**.: _The circles \(\omega_{a}\), \(\omega_{b}\), and \(\omega_{c}\) meet in a point (Figure 93) if and only if \(\theta=120^{\circ}\)._
Proof.: Suppose the three arcs meet at \(P\). Let \(\angle PAC=x\), \(\angle PBA=y\), and \(\angle PCB=z\). Then \(\angle BAP=A-x\), \(\angle CBP=B-y\), and \(\angle ACP=C-z\). An angle inscribed in a circle is measured by half its intercepted arc. So
\[2(A-x)+2y =\theta,\] \[2(B-y)+2z =\theta,\] \[2(C-z)+2x =\theta.\]
Adding these three equations gives
\[3\theta=2(A+B+C)=2(180^{\circ})=360^{\circ}\]
or \(\theta=120^{\circ}\).
**Theorem 8.34** (Radius of Outer Apollonius Circle).: _Let \(\rho_{o}\) be the radius of the outer Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). Then_
\[\rho_{o}=\frac{r}{3}t\mathbb{W}+r.\]
Proof.: Let \(G_{e}\) be the Gergonne point of \(\triangle ABC\). Similar to Corollary 8.17, \(C_{o}\) and the incircle are homothetic with \(G_{e}\) being the external center of similitude. Let \(U_{a}\) and \(V_{a}\) be the points where \(AG_{e}\) meets \(\gamma_{a}\), with \(V_{a}\) closer to \(A\). Let \(W_{a}\) be the point where \(U_{a}V_{a}\) intersects the incircle (Figure 94).
Figure 94. brown and red circles are homothetic at \(G_{e}\)
Under this homothety, \(V_{a}\) maps to \(W_{a}\). Thus
\[\frac{\rho_{o}}{r} =\frac{G_{e}V_{a}}{G_{e}W_{a}}=\frac{AG_{e}-AV_{a}}{AG_{e}-AW_{a}} \tag{19}\] \[=\frac{AG_{e}/AL-AV_{a}/AL}{AG_{e}/AL-AW_{a}/AL},\]
From Theorem 4.17,
\[AV_{a}=\frac{(p-a-rt)\sqrt{a(p-a)}}{\sqrt{ap-(b-c)^{2}}}.\]
From Corollary 8.27,
\[\frac{AL}{AG_{e}}=1+\frac{(p-b)(p-c)}{a(p-a)}\]
so
\[\frac{AG_{e}}{AL}=\frac{a(p-a)}{a(p-a)+(p-b)(p-c)}.\]
From the homothety, center \(A\) that maps \(\gamma_{a}\) to the incircle,
\[\frac{AU_{a}}{AL}=\frac{\rho_{a}}{r}.\]
From Corollary 4.18, we have
\[\frac{AV_{a}}{AU_{a}}=\frac{a(p-a)}{ap-(b-c)^{2}}\]
Multiplying the previous two equations gives
\[\frac{AV_{a}}{AL}=\frac{a(p-a)}{ap-(b-c)^{2}}\cdot\frac{\rho_{a}}{r}.\]
From the homothety, center \(A\) that maps \(\gamma_{a}\) to the incircle,
\[\frac{AW_{a}}{AV_{a}}=\frac{r}{\rho_{a}}.\]
Multiplying the previous two equations gives
\[\frac{AW_{a}}{AL}=\frac{a(p-a)}{ap-(b-c)^{2}}.\]
Substituting the values for the ratios found into equation (19) gives
\[\frac{\rho_{o}}{r} =\frac{AG_{e}/AL-AV_{a}/AL}{AG_{e}/AL-AW_{a}/AL},\] \[=\frac{\frac{a(p-a)}{a(p-a)+(p-b)(p-c)}-\frac{a(p-a)}{ap-(b-c)^{2 }}\cdot\frac{\rho_{a}}{r}}{\frac{a(p-a)}{a(p-a)+(p-b)(p-c)}-\frac{a(p-a)}{ap-( b-c)^{2}}}.\]
Simplifying this algebraically gives
\[\frac{\rho_{o}}{r}=1+\frac{2rt}{3}\cdot\frac{2ab+2bc+2ca-a^{2}-b^{2}-c^{2}}{( a+b-c)(b+c-a)(c+a-b)}.\]
Applying Lemma 8.24 gives
\[\frac{\rho_{o}}{r}-1 =\frac{2rt}{3}\cdot\frac{4rp\mathbb{W}}{(a+b-c)(b+c-a)(c+a-b)}\] \[=\frac{2rt}{3}\cdot\frac{4rp\mathbb{W}}{8(p-c)(p-a)(p-b)}\] \[=\frac{t}{3}\cdot\frac{(rp)^{2}\mathbb{W}}{p(p-c)(p-a)(p-b)}\] \[=\frac{t}{3}\cdot\frac{\Delta^{2}\mathbb{W}}{\Delta^{2}}\] \[=\frac{t\mathbb{W}}{3},\]
using the well-known formulas \(\Delta=rp\) and \(\Delta=\sqrt{p(p-a)(p-b)(p-c)}\). Thus, \(\rho_{o}=rt\mathbb{W}/3+r\).
**Open Question 4**.: _Is there a simpler proof of Theorem 8.34?_
**Corollary 8.35**.: _We have_
\[\frac{\rho_{i}+r}{\rho_{o}-r}=3.\]
Proof.: This follows immediately from Theorems 8.29 and 8.34.
Remember when applying this result, that \(\rho_{i}\) is to be considered negative when the inner Apollonius circle is internally tangent to \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\), as shown in Figure 96.
The line through the incenter of a triangle and the Gergonne point of that triangle is called the _Soddy line_ of the triangle.
**Theorem 8.36**.: _For a general triad of circles associated with \(\triangle ABC\), let \(U\) be the center of the inner Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). Let \(V\) be the center of the outer Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). Then \(U\) and \(V\) lie on the Soddy line of the triangle and \(UI:IV=3:1\) (Figure 95)._
Proof.: All distances along the Soddy line will be signed. We have
\[\frac{UI}{G_{e}I}=\frac{UG_{e}+G_{e}I}{G_{e}I}=\frac{UG_{e}}{G_{e}I}+1=\frac{ \rho_{i}}{r}+1=\frac{\rho_{i}+r}{r}\]
Figure 95. \(UI:IV=3:1\)
\[\frac{IV}{G_{e}I}=\frac{G_{e}V-G_{e}I}{G_{e}I}=\frac{G_{e}V}{G_{e}I}-1=\frac{\rho_ {o}}{r}-1=\frac{\rho_{o}-r}{r}.\]
Dividing gives
\[\frac{UI}{IV}=\frac{\rho_{i}+r}{\rho_{o}-r}.\]
The result now follows from Corollary 8.35.
**Open Question 5**.: _Is there a simple geometric proof that \(UI/IV=3\)?_
If the inner Apollonius circle is internally tangent to the three circles as in Figure 96, then \(U\) and \(V\) still lie on the Soddy line, but the points on that line occur in the order \(G_{e}\), \(U\), \(I\), \(V\).
**Corollary 8.37**.: _We have_
\[\frac{G_{e}I}{IV}=\frac{r}{\rho_{o}-r}.\]
**Corollary 8.38**.: _We have the extended proportion_
\[UG_{e}:G_{e}I:IV=\rho_{i}:r:\rho_{o}-r.\]
It should be noted that the distance from \(G_{e}\) to \(I\) in terms of parts of the triangle is known. From [27, p. 184], we have the following result.
**Theorem 8.39**.: _We have_
\[G_{e}I=\frac{r}{4R+r}\sqrt{(4R+r)^{2}-3p^{2}}=r\sqrt{1-3/\mathbb{W}^{2}}.\]
This allows us to express any of the distances between the points \(U\), \(V\), \(G_{e}\), and \(I\) in terms of \(R\), \(r\), and \(p\) by using Corollary 8.38.
**Theorem 8.40**.: _For a general triad of circles associated with \(\triangle ABC\), the inradius \(r\) and the radii \(\rho_{i}\) and \(\rho_{o}\) of the Apollonius circles satisfy the relation_
\[3\rho_{o}=\rho_{i}+4r.\]
Proof.: This is algebraically equivalent to Corollary 8.35.
Figure 96. \(UI:IV=3:1\)
**Theorem 8.41**.: _For a general triad of circles associated with \(\triangle ABC\), the radii \(\rho_{i}\) and \(\rho_{o}\) of the Apollonius circles and the radii \(\rho_{a}\), \(\rho_{b}\), and \(\rho_{c}\) of the three circles in the triad satisfy the relation_
\[3\rho_{o}=2(\rho_{a}+\rho_{b}+\rho_{c})+3\rho_{i}.\]
Proof.: From Theorem 7.5, we have
\[\rho_{a}+\rho_{b}+\rho_{c}=3r-rt\mathbb{W}.\]
From Theorems 8.29 and 8.34, we have
\[3\rho_{o}-3\rho_{i}=6r-2rt\mathbb{W}=2(\rho_{a}+\rho_{b}+\rho_{c})\]
as desired.
**Theorem 8.42**.: _Let \(\rho_{i}\) be the radius of the inner Apollonius circle of \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\). Then_
\[\rho_{i}^{2}=\rho_{a}^{2}+\rho_{b}^{2}+\rho_{c}^{2}+2r^{2}\left(t^{2}-1\right).\]
Proof.: This follows algebraically by combining Theorems 8.29 and 7.8.
When \(\theta=180^{\circ}\), the arcs become semicircles, \(t=1\), and this result agrees with Theorem 6.1 in [30].
## 9. Relationship with Semicircles
Many of the elements of our configuration are proportional to the corresponding elements when \(\omega_{a}\), \(\omega_{b}\), and \(\omega_{c}\) are semicircles (i.e. when \(\theta=180^{\circ}\) or \(t=1\)).
If \(x\) is any measurement or object, let \(x^{*}\) denote the same measurement or object when \(\theta=180^{\circ}\), i.e. when the arcs are semicircles.
In [30], it was found that
\[\rho_{a}^{*} =r\left(1-\tan\frac{A}{2}\right)\] \[\rho_{b}^{*} =r\left(1-\tan\frac{B}{2}\right)\] \[\rho_{c}^{*} =r\left(1-\tan\frac{C}{2}\right)\]
and in Theorem 4.6, we found that
\[\rho_{a} =r\left(1-\tan\frac{A}{2}\tan\frac{\theta}{4}\right)\] \[\rho_{b} =r\left(1-\tan\frac{B}{2}\tan\frac{\theta}{4}\right)\] \[\rho_{c} =r\left(1-\tan\frac{C}{2}\tan\frac{\theta}{4}\right)\]
In other words, if \(t=\tan\frac{\theta}{4}\), then we have the following results.
**Theorem 9.1**.: _The following identities are true._
\[\rho_{a}-r=t(\rho_{a}^{*}-r)\] \[\rho_{b}-r=t(\rho_{b}^{*}-r)\] \[\rho_{c}-r=t(\rho_{c}^{*}-r)\]
**Theorem 9.2**.: _The following identities are true._
\[\rho_{a}-\rho_{b} =t(\rho_{a}^{*}-\rho_{b}^{*})\] \[\rho_{b}-\rho_{c} =t(\rho_{b}^{*}-\rho_{c}^{*})\] \[\rho_{c}-\rho_{a} =t(\rho_{c}^{*}-\rho_{a}^{*})\]
Let \(T_{bc}\) denote the length of the common external tangent between circles \(\gamma_{b}\) and \(\gamma_{c}\). Define \(T_{ab}\) and \(T_{ca}\) similarly.
In [30], it was found that \(T_{ab}^{*}=T_{bc}^{*}=T_{ca}^{*}=2r\). In Theorem 6.1, we found that \(T_{ab}=T_{bc}=T_{ca}=2rt\). This gives us the following result.
**Theorem 9.3**.: _The following identities are true._
\[T_{ab} =tT_{ab}^{*}\] \[T_{bc} =tT_{bc}^{*}\] \[T_{ca} =tT_{ca}^{*}\]
Using these, we can prove the following new result.
**Theorem 9.4**.: _Let \(D_{bc}\) denote the distance between the centers of \(\gamma_{b}\) and \(\gamma_{c}\). Define \(D_{ab}\) and \(D_{ca}\) similarly. Then the following identities are true._
\[D_{ab} =tD_{ab}^{*}\] \[D_{bc} =tD_{bc}^{*}\] \[D_{ca} =tD_{ca}^{*}\]
Proof.: By symmetry, it suffices to prove the result for \(D_{bc}\). Let \(E\) be the center of \(\gamma_{b}\) and let \(F\) be the center of \(\gamma_{c}\). Let the common external tangent along \(BC\) be \(XY\) as shown in Figure 97. Let the foot of the perpendicular from \(E\) to \(FY\) be \(H\). Then in right triangle \(EHF\), we have \(EH=XY=T_{bc}=tT_{bc}^{*}\) by Theorem 9.3. We also have \(FH=|\rho_{c}-\rho_{b}|\). By Theorem 9.2, \(FH=t|\rho_{c}^{*}-\rho_{a}^{*}|\). Since \(\triangle EHF\sim\triangle E^{*}H^{*}F^{*}\), we must therefore have \(D_{bc}=tD_{bc}^{*}\).
**Corollary 9.5**.: _The triangles formed by the centers of \(\gamma_{a}\), \(\gamma_{b}\), \(\gamma_{c}\) and \(\gamma_{a}^{*}\), \(\gamma_{b}^{*}\), \(\gamma_{c}^{*}\) are similar._
**Theorem 9.6**.: _We have_
\[\rho_{i}+r=t(\rho_{i}^{*}+r).\]
Proof.: This follows immediately from Theorem 8.29.
**Theorem 9.7**.: _We have_
\[\rho_{o}-r=t(\rho_{o}^{*}-r).\]
Proof.: This follows immediately from Theorem 8.34.
**Theorem 9.8**.: _Let \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\) be a general triad of circles associated with \(\triangle ABC\). Let \(U\) be the center of the inner Apollonius circle of the circles in the triad. Let \(d_{a}\) denote the distance between \(U\) and the center of \(\gamma_{a}\). Define \(d_{b}\) and \(d_{c}\) similarly. Then the following identities are true._
\[d_{a}-d_{b} =t(d_{a}^{*}-d_{b}^{*})\] \[d_{b}-d_{c} =t(d_{b}^{*}-d_{b}^{*})\] \[d_{c}-d_{a} =t(d_{c}^{*}-d_{a}^{*})\]
Proof.: Let \(\rho_{i}\) be the radius of the inner Apollonius circle. Then \(d_{a}=\rho_{i}+\rho_{a}\) (Figure 98).
Similarly, \(d_{b}=\rho_{i}+\rho_{b}\). Thus \(d_{a}-d_{b}=\rho_{a}-\rho_{b}\). By Theorem 9.2,
\[d_{a}-d_{b}=t(\rho_{a}^{*}-\rho_{b}^{*})=t(d_{a}^{*}-d_{b}^{*}).\]
The same argument works for \(d_{b}-d_{c}\) and \(d_{c}-d_{a}\).
**Theorem 9.9**.: _If \(u=EF\), \(v=DF\), \(w=DE\) are the distances between the centers of the circles \(\gamma_{a}\), \(\gamma_{b}\), and \(\gamma_{c}\), we have_
\[u^{2} =\frac{a(p-a)[ap-(b-c)^{2}]t^{2}}{p^{2}},\] \[v^{2} =\frac{b(p-b)[bp-(c-a)^{2}]t^{2}}{p^{2}},\] \[w^{2} =\frac{c(p-c)[ap-(a-b)^{2}]t^{2}}{p^{2}}.\]
Proof.: This follows from Theorem 9.4 and the simplified values of \(u^{*}\), \(v^{*}\), and \(w^{*}\) found in [30].
## 10. Variations
We have studied the case where Ajima circle \(\gamma_{a}\) is inscribed in \(\angle BAC\) and is inside \(\triangle ABC\) and is externally tangent to circle \(\omega_{a}\).
There are actually four circles that are inscribed in \(\angle BAC\) and are tangent to circle \(\omega_{a}\). These circles are shown in Figure 99.
Circle \(c_{1}\) is variation \(1\) and is the variation already studied. Note that circle \(c_{4}\) is inscribed in \(\angle BAC\) and is _outside_\(\triangle ABC\) as well as being externally tangent to circle \(\omega_{a}\). Circle \(c_{2}\) and \(c_{3}\) are _internally_ tangent to \(\omega_{a}\). The touch point of \(c_{2}\) and \(\omega_{a}\) is inside \(\triangle ABC\) while the touch point of \(c_{3}\) and \(\omega_{a}\) is outside \(\triangle ABC\).
Many of the results we found for variation \(1\) work for the other variations as well. We present below a few of these results. Proofs are omitted because they are similar to the proofs given for variation \(1\). Variants \(2\), \(3\), and \(4\), are shown in Figure 100.
Figure 99. circles inscribed in \(\angle A\) and tangent to red circle
### The Catalytic Lemma
The Catalytic Lemma remains true except in some cases where the incenter is replaced by an excenter. Figure 101 shows variants 2, 3, and 4. In each case, \(B\), \(K\), \(T\), and an incenter/excenter lie on a circle.
### Protasov's Theorem
Protasov's Theorem remains true except in some cases where the incenter is replaced by an excenter. Figure 102 shows variants 2, 3, and 4. In each case, the blue line bisects the angle formed by the dashed lines.
### Theorem 2.10
Theorem 2.10 remains true except in some cases where the incenter is replaced by an excenter. Figure 103 shows variants 2, 3, and 4. In each case, the parallel to \(BE\) through an incenter or excenter meets \(AC\) at \(F\), where \(E\) is the point where \(\omega_{a}\) meets \(AC\). Then \(IF=FK\) where \(K\) is the point where \(\gamma_{a}\) touches \(AC\).
Figure 101. four points lie on a circle
Figure 102. blue line bisects angle formed by dashed lines
Figure 103. blue lines are congruent
### Theorem 2.7
Theorem 2.7 remains true except in some cases where the incenter is replaced by an excenter. Figure 103 shows variants 2, 3, and 4. In each case, the line through \(T\) and an incenter or excenter meets \(\omega_{a}\) at a point on the perpendicular bisector of \(BC\) (opposite \(T\)).
### Ajima's Theorem
Similar formulas for the radii of the variant circles can be found similar to Ajima's Theorem. These are shown in Figure 105, where \(r_{a}\) denotes the radius of the \(A\)-excircle of \(\triangle ABC\).
Figure 104. \(T\), \(N\), and an incenter or excenter lie on a line.
Figure 105.
### The Paasche Analog
The Paasche Analog (Theorem 6.7) remains true. If \(\gamma_{a}\) is any one of these variant circles, and if \(T_{a}\) is the touch point of \(\gamma_{a}\) with \(\omega_{a}\), with \(T_{b}\) and \(T_{c}\) defined similarly, then \(AT_{a}\), \(BT_{b}\), and \(CT_{c}\) are concurrent. See Figure 106 for one case.
|
2305.16846 | Lagrangian Flow Networks for Conservation Laws | We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities
and velocities continuously in space and time. By construction, the proposed
LFlows satisfy the continuity equation, a PDE describing mass conservation in
its differentiable form. Our model is based on the insight that solutions to
the continuity equation can be expressed as time-dependent density
transformations via differentiable and invertible maps. This follows from
classical theory of the existence and uniqueness of Lagrangian flows for smooth
vector fields. Hence, we model fluid densities by transforming a base density
with parameterized diffeomorphisms conditioned on time. The key benefit
compared to methods relying on numerical ODE solvers or PINNs is that the
analytic expression of the velocity is always consistent with changes in
density. Furthermore, we require neither expensive numerical solvers, nor
additional penalties to enforce the PDE. LFlows show higher predictive accuracy
in density modeling tasks compared to competing models in 2D and 3D, while
being computationally efficient. As a real-world application, we model bird
migration based on sparse weather radar measurements. | F. Arend Torres, Marcello Massimo Negri, Marco Inversi, Jonathan Aellen, Volker Roth | 2023-05-26T11:58:27Z | http://arxiv.org/abs/2305.16846v2 | # Lagrangian Flow Networks for Conservation Laws
###### Abstract
We introduce _Lagrangian Flow Networks_ (LFlows) for modeling fluid densities and velocities continuously in space and time. The proposed LFlows satisfy by construction the continuity equation, a PDE describing mass conservation in its differentiable form. Our model is based on the insight that solutions to the continuity equation can be expressed as time-dependent density transformations via differentiable and invertible maps. This follows from classical theory of existence and uniqueness of Lagrangian flows for smooth vector fields. Hence, we model fluid densities by transforming a base density with parameterized diffeomorphisms conditioned on time. The key benefit compared to methods relying on Neural-ODE or PINNs is that the analytic expression of the velocity is always consistent with the density. Furthermore, there is no need for expensive numerical solvers, nor for enforcing the PDE with penalty methods. _Lagrangian Flow Networks_ show improved predictive accuracy on synthetic density modeling tasks compared to competing models in both 2D and 3D. We conclude with a real-world application of modeling bird migration based on sparse weather radar measurements.
## 1 Introduction
The development of physics-informed Machine Learning (PI-ML) (Karniadakis et al., 2021) opens new opportunities to combine the power of modern ML methods with physical constraints that serve as meaningful regularizers. These constraints might for example be available in the form of partial differential equations (PDEs). Within PI-ML we consider hydrodynamic flow problems governed by the physical law of mass conservation. This law is described in its local and differentiable form by a PDE commonly known as the _continuity equation_ (CE)
\[\left\{\begin{aligned} \partial_{t}\rho+\nabla\cdot( \boldsymbol{v}\rho)&=0&&(t_{0},\boldsymbol{x}) \in(t_{0},T)\times\Omega,\\ \rho(t_{0},\boldsymbol{x})&=\rho_{t_{0}}(\boldsymbol {x})&&\boldsymbol{x}\in\Omega.\end{aligned}\right. \tag{1}\]
For any time \(t\in[t_{0},T)\) the function \(\rho(t,\cdot)\) can be thought of as the density of parcels advected by the velocity field \(\boldsymbol{v}\), with initial density \(\rho_{t_{0}}\). Here, \([t_{0},T]\times\Omega\subset\mathbb{R}\times\mathbb{R}^{d}\) is the space-time domain, the partial derivative w.r.t. time \(t\) is denoted by \(\partial_{t}\) and \(\nabla\cdot\boldsymbol{b}=\nabla_{\boldsymbol{x}}\cdot\boldsymbol{b}=\sum_{ i=1}^{d}\frac{\partial\boldsymbol{b}_{i}}{\partial x_{i}}\) is the spatial divergence of a \(d\) dimensional vector field \(\boldsymbol{b}:[0,T]\times\Omega\mapsto\mathbb{R}^{d}\).
Distinct from numerical initial value problems, we consider settings where no exact boundary and initial conditions are known. Furthermore, additional equations dictating the dynamics of the velocity \(\mathbf{v}(t,\mathbf{x})\) might even be lacking. Instead, sparse and noisy measurements of the fields \(\mathbf{v}\) and \(\rho\) are given. Based on these measurements, we intend to model the density and velocity fields, knowing that the solution must comply with the physical law described by Eq. 1.
This reflects a range of real-world scenarios where only sensor measurements and partial knowledge of the fluids' dynamics are available. For example, recent work within the area of radar ornithology treats the problem of modeling bird densities and velocities from a fluid dynamics perspective. The assumption of mass conservation is explored either in post hoc analyses of regression models (Nussbaumer et al., 2021) or as a model constraint (Lippert et al., 2022a,b). For the latter, the model had to be restricted to extremely coarse-grained volumes.
We provide a class of networks that fulfill the CE by construction while being continuous in both time and space. The proposed networks (i) avoid scaling limitations of more classical volume discretization approaches, (ii) eradicate the need for additional penalty terms that Physics-Informed Neural Networks (PINNs) use, and (iii) do not rely on expensive numerical solvers which are required by Neural-ODE based methods.
Main contributions.The main contributions of this paper are as follows:
* We establish a fundamental link between density transformations based on conditional diffeomorphisms, and spatiotemporal density fields that satisfy the continuity equation.
* We leverage this link to introduce models for hydrodynamic flow problems that satisfy the continuity equation by construction, coined _Lagrangian Flow Networks_ (LFlows).
* To deal with ill-posed settings we propose two regularization methods for obtaining "simple" explanations of the data. We suggest penalizing (i) the total mass of the system and (ii) the average transport cost.
* We apply LFlows on a synthetic flow problem in 2D and 3D, demonstrating its high predictive performance compared to existing methods. For a challenging real-world application, we model bird migrations in Europe based on sparse radar measurements.
## 2 Related Work
Although many recent advances have been made in the general area of PI-ML (Karniadakis et al., 2021), there are few developments that address the described setting.
Data assimilation with the Adjoint.Sensitivity-based data assimilation methods rely on efficiently differentiating through numerical PDE solvers with the adjoint method for optimizing initial conditions and additional unknown parameters (Cacuci, 1981a,b). A key limitation is the required memory and computational resources for fine mesh resolutions in a 3D+Time setting. Within this class of methods, the semi-Lagrangian data assimilation approach is conceptually closest to our proposed model and setting (Robert, 1982; Staniforth and Cote, 1991; Diamantakis and Magnusson, 2016). Both the density as well as the position of measured data points are solved backward in time to an initial time point using the Lagrangian formulation of the continuity equation. That is, the trajectory of the corresponding parcel is traced back, obtaining the _departure point_ (i.e. spatial position) of the parcel and its initial density. The initial density for all parcels is represented by a discrete mesh, which is spatially interpolated to the obtained _departure point_ position to calculate the data loss.
Neural-ODEs and Continuous Normalizing Flows.Neural-ODEs (Chen et al., 2018) provide a framework for optimizing dynamics dictated by a neural network. An efficient autograd implementation of the _adjoint sensitivity method_(Pontryagin, 1987) is provided, enabling black-box differentiation for numerically solved ODEs. Furthermore, Chen et al. (2018) introduce Continuous Normalizing Flows (CNFs). CNFs are able to continuously transform simple probability densities into more complex ones for obtaining powerful density estimators and generative models. From a fluid dynamics perspective, the solved ODE in CNFs corresponds to the continuity equation in its
Lagrangian formulation, written in terms of the log density:
\[\left[\begin{array}{c}\mathbf{x}(0,\mathbf{z})\\ \ln\rho(0,\mathbf{z})\end{array}\right]=\left[\begin{array}{c}\mathbf{z}\\ \ln\rho_{0}(\mathbf{z})\end{array}\right],\qquad\partial_{t}\left[\begin{array}{c }\mathbf{x}(t,\mathbf{z})\\ \ln\rho(t,\mathbf{z})\end{array}\right]=\left[\begin{array}{c}\mathbf{v}_{\Theta}(t, \mathbf{x}(t,\mathbf{z}))\\ -\nabla\cdot\mathbf{v}_{\Theta}(t,\mathbf{x}(t,\mathbf{z}(t,\mathbf{z}))\end{array}\right], \tag{2}\]
where \(\mathbf{z}\in\mathbb{R}^{d}\), \(\mathbf{v}_{\Theta}:[0,T]\times\mathbb{R}^{d}\mapsto\mathbb{R}^{d},t\in[0,T]\), and fixed initial density with unit integral, e.g. \(\rho_{0}=\mathcal{N}(\mathbf{0},\mathbf{I})\). CNFs can be described as solving Eq. 2 backward in time (\(\mathbf{x}_{T}\mapsto\mathbf{z}\)), such that the initial density at the _departure_ point \(\mathbf{z}\) as well as the density changes along the trajectory can be evaluated. The main limitation of CNF-based methods is the computational cost of evaluating the input-derivative of a network potentially hundreds of times in an adaptive ODE solver. As a possible remedy (Onken et al., 2021; Finlay et al., 2020) suggest vector field regularizations motivated by optimal transport theory. Specifically, straight trajectories are enforced via transport penalties, simplifying the dynamics for the numerical solver. Other follow-up work considers the use of stochastic estimates for faster divergence calculations (Grathwohl et al., 2018). Distinct to our setting, the densities and velocities at initial and intermediate time steps have no inherent relevance in probabilistic density estimation, and only the final transformed (probability) density is of interest.
Neural Networks for Conservation Laws.PINNs (Raissi et al., 2019) enforce PDEs in neural networks by introducing additional penalty terms. So-called _collocation points_ are sampled on the signal domain, on which the deviation to the constraints is evaluated and then minimized. The accuracy of PINNs is thus fundamentally limited by the amount (and distribution) of sampled collocation points, as well as the dimension of the signal domain. For conservation laws, recent improvements on PINNs either suggest the use of more sophisticated sampling approaches (Arend Torres et al., 2022), or introducing domain decompositions (Jagtap et al., 2020). While this alleviates the scaling problems, the fundamental limitations given by the possible amount of collocation points (or sub-domains) are still present.
In contrast, Richter-Powell et al. (2022) propose a parameterization of neural networks that enforces conservation of mass by design, which we will refer to as _Divergence Free Neural Networks_ (DFNNs). As the name suggests, solutions to Eq. 1 are represented as divergence-free \((d+1)\) dimensional vector field \(\mathbf{b}=(\rho,\rho\mathbf{v})\) with an augmented \((d+1)\) dimensional input space \(\mathbf{s}=(t,\mathbf{x})\):
\[\frac{\partial\rho}{\partial t}+\nabla_{\mathbf{x}}\cdot(\rho\mathbf{v})=\sum_{i=1}^ {d+1}\frac{\partial b_{i}}{\partial s_{i}}=\nabla_{\mathbf{s}}\cdot\left(\begin{array} []{c}\rho\\ \rho\mathbf{v}\end{array}\right)=\nabla_{\mathbf{s}}\cdot\mathbf{b}=0 \tag{3}\]
The generalization of divergence-free vector fields to higher dimensions is achieved through the concept of differential forms. The resulting parameterization however heavily relies on expensive higher-order automatic differentiation, posing limitations in terms of scalability.
(Conditional) Normalizing FlowsNormalizing Flows (NFs) are a general approach for warping a simple probability distribution into a much more complex target distribution via invertible and differentiable transformations, i.e. diffeomorphisms. Let \(\mathbf{R}\in\mathbb{R}^{d}\) be a random variable with a known density function \(\mathbf{R}\sim p_{\mathbf{R}}(\mathbf{r})\) and let \(Y=\mathcal{T}(\mathbf{R})\), where \(\mathcal{T}\) is a diffeomorphism with trainable parameters. Using change of variables, the probability density of \(\mathbf{Y}\) can be expressed in terms of the "base density" \(p_{\mathbf{R}}\), the map \(\mathcal{T}\), and its Jacobian:
\[p_{\mathbf{Y}}(\mathbf{y})=p_{\mathbf{R}}(\mathcal{T}(\mathbf{y}))\big{|}\det J\mathcal{T}(\bm {y})\big{|} \tag{4}\]
NFs usually rely on transformations for which the Jacobian determinant can be efficiently and easily calculated. A key property is that the Jacobian determinant of compositions can be factorized into their individual Jacobian determinants. This enables efficient and flexible transformations by composing multiple simple layers. A parameterization for conditional distributions \(p_{\mathbf{Y}}(\mathbf{y}|\mathbf{c})\) can be obtained by additionally conditioning the parameters of \(\mathcal{T}\) on another variable \(\mathbf{c}\) via a hypernetwork (Ha et al., 2016). This is commonly referred to as a Conditional Normalizing Flow (Atanov et al., 2019; Kobyzev et al., 2020). For a comprehensive review of Normalizing Flows, we refer the reader to Kobyzev et al. (2020) and Papamakarios et al. (2021).
## 3 Lagrangian Flow Networks
The presentation of our model is divided into two parts: First, we describe the model, providing an overview of how the densities and velocities are computed in the final model. In the second step, we
reinterpret the proposed formulas for the density and velocities from a Lagrangian perspective. This allows linking them to classical theory for Lagrangian flows for smooth vector fields and hence to the continuity equation. Figure 1 visually summarizes the overall concept of _Lagrangian Flow Networks_.
### An Overview of the Model
We start off with parameterizing densities with conditional normalizing flows. Instead of transforming a probability density, we are interested in modeling physical densities. In this case the density does not integrate to one but to the total mass of the system. Consider a base density
\[\rho_{base}(\mathbf{z})=c\cdot\mathcal{N}(\mathbf{0},\mathbf{1}), \tag{5}\]
i.e. an unnormalized Gaussian that integrates to the total mass \(c\in\mathbb{R}_{+}\), with \(c\) being a freely learnable parameter. Given a fixed \(t\in[0,T]\) we can build a parameterized diffeomorphism \(\Phi_{f_{\Theta}(t)}^{-1}:\mathbb{R}^{d}\mapsto\Omega\), with its parameters being given by a hypernetwork \(f_{\Theta}(t)\). Each point \(\mathbf{z}\in\mathbb{R}^{d}\) is then mapped to a position \(\mathbf{x}\in\Omega\) at time \(t\):
\[\mathbf{x}=\Phi_{f_{\Theta}(t)}^{-1}(\mathbf{z}) \tag{6}\]
The density at each position \(\mathbf{x}\) and time \(t\) can be calculated with the change of variables formula. Stepping away from existing conditional NFs, we then introduce the velocity at \((t,\mathbf{x})\) and define it to be the partial derivative of the transformation w.r.t. time, evaluated at \(\mathbf{z}=\Phi_{f_{\Theta}(t)}(\mathbf{x})\). This provides us a scalar density field \(\rho_{\Theta}\) and a velocity field \(\mathbf{v}_{\Theta}\) defined over space \(\mathbf{x}_{t}\in\Omega\) and time \(t\in[0,T]\):
\[\rho_{\Theta}(t,\mathbf{x}) =\rho_{base}\Big{(}\Phi_{f_{\Theta}(t)}(\mathbf{x}),t_{0}\Big{)}\, \left|\det J\Phi_{f_{\Theta}(t)}(\mathbf{x})\right|, \tag{7}\] \[\mathbf{v}_{\Theta}(t,\mathbf{x}) =\frac{\partial\Phi_{f_{\Theta}(t)}^{-1}}{\partial t}\Big{(}\Phi _{f_{\Theta}(t)}(\mathbf{x})\Big{)}=-\Big{(}J\Phi_{f_{\Theta}(t)}(\mathbf{x})\Big{)}^ {-1}\,\,\frac{\partial\Phi_{t}}{\partial t}(\mathbf{x}). \tag{8}\]
The second equality in Eq. 8 follows from a simple algebraic reformulation (see Appendix A.4), ensuring that only the forward direction of the map \(\Phi_{f_{\Theta}(t)}\) is required for evaluating the velocity, but not its inverse. This proves useful for transformations with expensive inverses (e.g. masked autoregressive layers), or if the inverse is unknown (e.g. planar flows).
In the following section, we show that such a parameterization provides (under common regularity assumptions for the diffeomorphisms \(\Phi_{f_{\Theta}(t)}\)) a distributional solution to the CE.
### A Lagrangian Viewpoint
The introduced parameterizations for the velocity and density with conditional NFs in Eq. 7 and Eq. 8 reflect an Eulerian view, i.e. the density and velocity are evaluated from a fixed reference point.
Figure 1: Visual summary of the transformations and involved fields for modeling the temporal evolution of a 2D density with the Lagrangian Flow Network. The bright-red lines inbetween the planes indicate Lagrangian trajectories of fluid parcels.
Alternatively, they can be interpreted from a Lagrangian perspective, describing the fluid from the perspective of moving fluid parcels (i.e. infinitesimal volumes) with constant mass. Density changes of the fluid are then described by volume changes of these parcels, where spatial contraction leads to higher density and expansion to lower density. Going back to our model, we first define the density that a parcel has at its initial position \(\mathbf{x}\) at time \(t_{0}\) as
\[\rho_{t_{0}}(\mathbf{x})=\rho_{base}\Big{(}\Phi_{f_{\Theta}(t_{0})}(\mathbf{x})\Big{)} \ \big{|}\mathrm{det}\,J\Phi_{f_{\Theta}(t_{0})}\big{(}\mathbf{x}\big{)}\big{|}\,, \tag{9}\]
with \(\Phi_{f_{\Theta}(t_{0})}\) being a spatially smooth diffeomorphism. Next, we build transformations that parameterize parcel trajectories as a function of time by composing the conditional diffeomorphisms \(\Phi_{f_{\Theta}(t)}\). Given the transformation \(\Phi_{f_{\Theta}(t)}\), a map \(\hat{\mathbf{X}}_{t}(\mathbf{x}_{t_{0}})\) can be constructed by composition, such that it is a diffeomorphism in \(\Omega\) for fixed \(t\), resulting in the map \(\mathbf{x}_{t_{0}}\mapsto\mathbf{z}\mapsto\mathbf{x}_{t}\):
\[\hat{\mathbf{X}}_{t}(\mathbf{x}_{t_{0}})=\Phi_{f_{\Theta}(t)}^{-1}(\Phi_{f_{\Theta}(t _{0})}(\mathbf{x}_{0}))=\mathbf{x}_{t} \tag{10}\]
For a more compact notation, the explicit dependence of \(\hat{\mathbf{X}}\) on the neural network is omitted but remains implied. Note that even though we discuss fluid parcels, we _do not_ explicitly model individual parcels. Instead, each parcel has a continuous label \(\mathbf{x}\) and we model the flow of all parcels. With the diffeomorphisms \(\hat{\mathbf{X}}_{t}\) providing the trajectory of a parcel, we can furthermore calculate the density at each time point in terms of the initial density at \(t_{0}\) using the change of variables formula:
\[\rho_{\Theta}(t,\mathbf{x})=\rho_{t_{0}}\Big{(}\hat{\mathbf{X}}_{t}^{-1}(\mathbf{x}) \Big{)}|\mathrm{det}\,J\hat{\mathbf{X}}_{t}^{-1}(\mathbf{x})| \tag{11}\]
From Diffeomorphisms to Velocities.From Eq. 10 follows a natural way to define the velocity of a given parcel at position \(\mathbf{x}\) and time \(t\). First, we map a parcel back to its initial position with \(\mathbf{X}_{t}^{-1}\). We then define the velocity as the change in position along its trajectory at time \(t\) for infinitesimal timesteps \(h\):
\[\mathbf{v}_{\Theta}(t,\mathbf{x})=\lim_{h\to 0}\frac{\hat{\mathbf{X}}_{t+h}(\hat{\mathbf{X}}_{t }^{-1}(\mathbf{x}))-\hat{\mathbf{X}}_{t}(\hat{\mathbf{X}}_{t}^{-1}(\mathbf{x}))}{h}=\frac{ \partial\hat{\mathbf{X}}_{t}}{\partial t}(\hat{\mathbf{X}}_{t}^{-1}(\mathbf{x})) \tag{12}\]
Assuming basic regularity conditions for \(\hat{\mathbf{X}}_{t}\), it can be shown that such a curve \(t\mapsto\hat{\mathbf{X}}_{t}(\mathbf{x})\) uniquely solves the initial value problem given by the dynamics \(\partial_{t}\hat{\mathbf{X}}(\hat{\mathbf{X}}^{-1})=\mathbf{v}_{\Theta}\) and the initial position \(\hat{\mathbf{X}}_{t_{0}}\). That is, the map \(\hat{\mathbf{X}}_{t}\) effectively provides us the Lagrangian trajectory of a parcel whose velocity is given by Eq. 12. The following statement is a consequence of the more general Theorem 4 and we refer to Appendix Section A.2 for a detailed proof under precise assumptions.
**Theorem 1**.: _Let \(0\leq t_{0}<T\) and let \(\Omega\subset\mathbb{R}^{d}\) be a convex open set. Let \(\mathbf{X}:[t_{0},T]\times\Omega\to\Omega\) be a family of maps such that \(\mathbf{X}_{t}:\Omega\to\Omega\) is a bijection for any \(t\in[t_{0},T]\) and \(\mathbf{X}_{t_{0}}(\mathbf{x})=\mathbf{x}\) for any \(\mathbf{x}\in\Omega\). Assume that \(\mathbf{X},\mathbf{X}^{-1}\) are \(C^{\infty}([t_{0},T]\times\Omega;\Omega)\) with globally bounded derivatives._
_Then, the velocity field \(\mathbf{v}(t,\mathbf{x})=\frac{\partial\mathbf{X}_{t}}{\partial t}\big{(}\mathbf{X}_{t}^{-1}( \mathbf{x})\big{)}\) is \(C^{\infty}\). In particular, \(\mathbf{v}\) satisfies the assumptions of the Cauchy-Lipschitz Theorem 3 and \(\mathbf{X}\) is the unique flow map of \(\mathbf{v}\) starting at time \(t_{0}\). Specifically, for any \(\mathbf{x}\in\Omega\) the curve \(t\mapsto\mathbf{X}_{t}(\mathbf{x})\) is the unique solution to the Cauchy Problem_
\[\begin{cases}\partial_{t}\mathbf{X}_{t}\left(\mathbf{x}\right)=\mathbf{v}\left(\mathbf{X}_{t} (\mathbf{x}),t\right)\quad t\in[t_{0},T),\\ \mathbf{X}_{t_{0}}(\mathbf{x})=\mathbf{x}\end{cases} \tag{13}\]
Proof.: See Appendix Section A.2.
A Distributional Solution.Combining the Lagrangian expression for the density and velocity given in Eq. 10 and Eq. 12 with the results of Theorem 1, we then obtain the connection to the continuity equation.
**Theorem 2**.: _Let \(\Omega,T,t_{0},\mathbf{X}\) be as in Theorem 1. Given an initial density \(\rho_{t_{0}}\in L^{1}(\Omega)\), we define_
\[\rho(t,\mathbf{x})=\rho_{t_{0}}\big{(}\mathbf{X}_{t}^{-1}(\mathbf{x})\big{)}|\mathrm{det} \,J\mathbf{X}_{t}^{-1}(\mathbf{x})|. \tag{14}\]
_Then \(\rho(t,\mathbf{x})\) is a distributional solution to the continuity equation 1 according to Definition 1, i.e. the following condition is satisfied for any test function \(\phi\in C_{c}^{\infty}([t_{0},T)\times\Omega)\):_
\[\int_{t_{0}}^{T}\int_{\Omega}(\partial_{t}\phi+\mathbf{v}\cdot\nabla\phi)\rho\,dx \,dt=-\int_{\Omega}\rho_{t_{0}}(\mathbf{x})\phi(t_{0},\mathbf{x})\,dx. \tag{15}\]
_Moreover, if \(\rho_{t_{0}}\in C^{\infty}(\Omega)\), then \(\rho\in C^{\infty}([t_{0},T)\times\Omega)\) and \(\rho\) is a pointwise solution to the continuity equation (1). If we assume in addition that \(\rho_{t_{0}}(\mathbf{x})>0\) for any \(\mathbf{x}\in\Omega\), then the same holds for \(\rho(t,\mathbf{x})\) for any \((t,\mathbf{x})\in[t_{0},T)\times\Omega\) and \(\rho\) satisfies the log-density formula of the continuity equation_
\[\frac{d}{dt}\log(\rho(t,\mathbf{X}_{t}(\mathbf{x})))=-\nabla\cdot\mathbf{v}(t,\mathbf{X}_{t}( \mathbf{x})). \tag{16}\]
Proof.: See Appendix Section A.1 and A.3.
To summarize, we provided a Lagrangian reformulation of our proposed network by a composition of conditional diffeomorphisms. This lead to an expression of the density and velocity that fulfill the distributional formulation of the continuity equation. Note that we required the Lagrangian flow \(\hat{\mathbf{X}}_{t}\) only to highlight its properties, but not for the implementation of the network. Instead, resolving Eq. 12 and Eq. 11 in terms of the conditional transformation \(\Phi_{f_{\Theta}(t_{0})}\) will recover Eq. 7 and Eq. 8.
### LFlows based on Continuous Normalizing Flows (CNF-LFlows)
In addition to the _Lagrangian Flow Networks_ based on conditional Normalizing Flows, we also consider an analog that is based on CNFs as a comparable baseline model. Equivalent to the Lagrangian view presented before, we first transform an unnormalized base density \(\rho_{base}=c\cdot\mathcal{N}(\mathbf{0},\mathbf{I})\) with \(c\in\mathbb{R}_{+}\) to a density at \(t_{0}\), i.e. \(\rho_{t_{0}}\). Instead of a conditional NF, we however use a continuous NF. Specifically, we evaluate the density \(\ln\rho_{t_{0}}(\mathbf{x}_{t_{0}})=\ln\tilde{\rho}(T_{base},\mathbf{x}_{t_{0}})\) by solving the following system of ODEs backward in time (\(\mathbf{x}_{t_{0}}\to\mathbf{z}\)):
\[\left[\begin{array}{c}\mathbf{x}(0,\mathbf{z})\\ \ln\tilde{\rho}(0,\mathbf{z})\end{array}\right]=\left[\begin{array}{c}\mathbf{z}\\ \ln\rho_{base}(\mathbf{z})\end{array}\right],\quad\quad\partial_{t}\left[\begin{array} []{c}\mathbf{x}(t,\mathbf{z})\\ \ln\tilde{\rho}(t,\mathbf{z})\end{array}\right]=\left[\begin{array}{c}\mathbf{f}_{ \Theta}(t,\mathbf{x}(t,\mathbf{z}))\\ -\nabla\cdot\mathbf{f}_{\Theta}(t,\mathbf{x}(t,\mathbf{z}))\end{array}\right], \tag{17}\]
with \(\mathbf{f}_{\Theta}\in\mathbb{R}^{d}\) being the dynamics given by a freely learnable neural network, and \(T_{base}\in\mathbb{R}_{>0}\) being a hyperparameter controlling the flexibility of the CNF.
A second continuous flow then transforms \(\rho_{t_{0}}\) over time, obtaining \(\rho_{t}\), with \(t\in[0,T]\). That is, \(\ln\rho(t,\mathbf{x})\) can be evaluated by solving the following system of ODEs backward in time (\(\mathbf{x}_{t}\to\mathbf{x}_{t_{0}}\)):
\[\left[\begin{array}{c}\mathbf{x}(0,\mathbf{x}_{t_{0}})\\ \ln\rho_{L}(0,\mathbf{x}_{t_{0}})\end{array}\right]=\left[\begin{array}{c}\mathbf{x} _{t_{0}}\\ \ln\rho_{t_{0}}(\mathbf{x}_{t_{0}})\end{array}\right],\quad\partial_{t}\left[ \begin{array}{c}\mathbf{x}(t,\mathbf{x}_{t_{0}})\\ \ln\rho_{L}(t,\mathbf{x}_{t_{0}})\end{array}\right]=\left[\begin{array}{c}\mathbf{v }_{\Theta}(t,\mathbf{x}(t,\mathbf{x}_{t_{0}}))\\ -\nabla\cdot\mathbf{v}_{\Theta}(t,\mathbf{x}(t,\mathbf{x}_{t_{0}}))\end{array}\right] \tag{18}\]
where \(\rho_{L}(t,\mathbf{x}_{t_{0}})\) denotes the density at time \(t\) of the parcel with initial position \(\mathbf{x}_{t_{0}}\). If we adopt the Lagrangian notation of previous sections, this can be written as \(\rho(t,\mathbf{x})=\rho_{L}(t,\mathbf{X}_{t}^{-1}(\mathbf{x}))\).
The dynamics \(\mathbf{v}_{\Theta}\) of the second CNF is the velocity of our system (given by a neural network). The velocity \(\mathbf{v}_{\Theta}\) then be directly trained on observations. By combining the two CNFs, we obtain a map \(\mathbf{x}_{t}\mapsto\mathbf{z}\), allowing us to evaluate the density at each point in time and space. The networks \(\mathbf{v}_{\Theta}\), \(\mathbf{f}_{\Theta}\), and the scaling parameter \(c\) can then be trained on observations of the density. For the optimization, we rely on the PyTorch implementation of the adjoint by Chen et al. (2018). In 3D settings, we use FFJORD for efficient stochastic estimates of the divergence Grathwohl et al. (2018). This model is closely related to semi-Lagrangian data assimilation methods, which would use a discrete mesh representation for \(\rho_{t_{0}}\) and \(\mathbf{v}_{\Theta}\).
## 4 Regularization
Depending on the data, additional regularization might be desired to avoid overfitting. Following Occam's Razor, we intend to promote models that offer "simple" solutions to the observed density.
Global Mass Regularization: Normalization Constant.Given only sparse observations, the problem of learning the density with an unknown total mass (i.e. an unknown normalization constant \(c\)) is ill-posed. The density of parcels that never pass through a sensor can be arbitrarily small or large. We propose to deal with this ill-posedness by introducing a penalty on the total mass, i.e. on the learned normalization constant \(c\):
\[L_{L2}(c)=w_{c}\int_{\Omega}\rho_{\Theta}(t,\mathbf{x})\,dx=w_{c}\cdot c, \tag{19}\]
with the hyperparameter \(w_{c}\in\mathbb{R}_{\geq 0}\) weighting the penalty. In combination with the data loss, this discourages the appearance of large densities that were never actually observed.
Velocity Regularization: Transport Penalty.If no velocity observations are available, the problem of explaining mass movements is also severely ill-posed. Without any further constraints on the velocity, the model could explain the observed density with implausible (e.g. very large) velocities. Although we do assume that velocities are observed in our main experiments, we deemed an extension for more general problems necessary. To this end, we suggest an optimal transport penalty that encourages small velocities and straight trajectories, motivated by a similar regularization for CNFs by Finlay et al. (2020) and Onken et al. (2021).
Consider the Benamou-Brenier formulation of the optimal transport problem between two densities \(\rho_{t_{0}}\) and \(\rho_{t_{1}}\). The optimal transport map can then be stated as finding the solution map \(\mathbf{X}_{t}\) of a flow defined by a vector field \(\mathbf{v}\) which minimizes the following objective:
\[\min_{\mathbf{v},\rho} \int_{t_{0}}^{t_{1}}\int_{\Omega}|\mathbf{v}(t,\mathbf{x})|^{2}\rho(t, \mathbf{x})\,dx\,dt\] (20) subject to \[\partial_{t}\rho=-\nabla\cdot(\rho\mathbf{v})\,,\qquad\rho(t_{0},\mathbf{x})= \rho_{t_{0}}(\mathbf{x}),\qquad\rho(t_{1},\mathbf{x})=\rho_{t_{1}}(\mathbf{x}).\]
As densities are observed at multiple consecutive time points \(\{t_{i},t_{i+1}\}\), we enforce this objective on the full-time domain \([t_{0},T]\). A 1D example showcasing the effect of this penalty is provided in Figure 2, where a 1D density is measured at four different time points, but no velocity is available.
## 5 Implementation
The proposed _Lagrangian Flow Networks_ allow us to flexibly use bijections that suit the problem at hand for \(\Phi_{f_{\Theta}(t)}(\mathbf{x}_{t})\). Consequently, we mostly rely on existing NF layers. The transformations are conditioned on time via hypernetworks \(f_{\Theta}(t)\) that output the parameters of the bijections in \(\Phi_{f_{\Theta}(t)}(\mathbf{x}_{t})\). The implementation of the hypernetworks follows the ResMADE architecture used in Durkan et al. (2019); Nash and Durkan (2019). In all settings, we use swish (Ramachandran et al., 2017) activations in the hypernetworks. For more details and a visualization of the high-level architecture, we refer to the Appendix Section A.6.1. The resulting networks can be trained by minimizing any common training loss on the density and/or velocity, without requiring additional penalties for enforcing the PDE. For the bijections themselves, we mainly use a combination of (conditional) linear transformations (based on SVD decomposition), followed up by a highly flexible element-wise _Sum-of-Sigmoids_ transformation, which we integrate into a ResMADE-like autoregressive structure. The _Sum-of-Sigmoids_ bijections are a slight variation of the transformations presented in Huang et al. (2018) and we refer to the Appendix Section A.6.2 for more details and a qualitative evaluation. If it is desired to further constrain the density to a bounded region, a further bijection \(\Omega\mapsto\mathbb{R}^{d}\) may be used in the first layer of the conditional flow. Our implementation is an extension of the _nflows1_Pytorch (Paszke et al., 2019) package provided by Durkan et al. (2019).
Footnote 1: [https://github.com/bayesiains/nflows](https://github.com/bayesiains/nflows)
**Limitations.** Although much progress has been made in the flexibility of bijective layers, practical limitations still exist in practice, especially in 1D or 2D. Furthermore, a conditional version of existing architectures is not always straight-forward (e.g. for residual flows (Chen et al., 2019)).
## 6 Experiments
We compare the proposed LFlows with a range of competing methods on a simulated data set, and showcase a real-world of mass-conservative neural networks. In all experiments, we make use of the
Figure 2: Lagrangian trajectories of particles randomly drawn from the base distribution. The model was trained without OT-Penalty (_left_) and with OT-Penalty (_right_).
global mass regularization in Eq. 19 but do not require transport penalties. Details, code, and used computational resources for all experiments are provided in the supplementary material.
### Simulation of Compressible Fluids.
As a synthetic experiment, we simulate densities in 2D and 3D over time. The data is created by transforming a mixture of four unnormalized Gaussians by manually parameterizing time-dependent diffeomorphisms on \(t\in[0,1.2],\Omega=(-4,4)^{d}\), providing us with analytical forms for the densities and velocities. During training only parts of the domain \(\Omega\) will be observed. Training observations are available at 21 timesteps for \(t\in[0,1]\), observed within the upper right and lower left quarters of the domain. The remaining parts of the domain were split into test and validation, with the test set also including data up to time 1.2 for a forecast evaluation. Gaussian noise is added to the training observations of the velocity and log density. The 3D dynamics are similar to the 2D setting, with the \(xy\)-velocity being the same for all \(z\) values, and the \(z\) velocity being 0, making the only added difficulty a higher dimensional domain. For details regarding the data and specific training settings, we refer to Appendix Section A.7.1.
We compare the results of LFlows with (i) PINNs using sinusoidal activations (Sitzmann et al., 2020) (ii) DFNNs (Richter-Powell et al., 2022), and (iii) CNF-LFlows. Aside from the LFlows, only the DFNNs fulfill the continuity equation by construction. However, the memory requirements of second-order derivatives limit them to relatively small networks when relying on consumer-grade GPUs. PINNs and CNF-LFlows are restricted in their accuracy by either the amount of sampled collocation points or the tolerance and order of the used adaptive solver. All models were trained with similar computing resources, and optimized in a similar manner based on the validation set \(R^{2}\) for the density 2. Quantitative results for 5 runs with different random seeds are provided in Table 1. LFlows performed the best, with PINNs being a close second in the 2D setting. In 3D however, the used number of collocation points (limited by GPU memory) was not sufficient to enforce the PDE everywhere. This can also be seen in the qualitative results for the 3D setting in Figure 3. While the CNF-LFlows performed well, the non-straight trajectories required relatively small tolerances for the ODE solver. Consequently, they were by far the slowest to train and optimize. The DFNNs fared worst and were in our subjective experience most prone to numerical issues. We verified that the used architecture can provide good predictions when trained on dense data over the whole domain. Due to this observation, we conjecture that DFNNs do not generalize well on sparse observations.
Footnote 2: \(R^{2}=1-\frac{MSE(y_{\text{obs}},\beta)}{Var(y_{\text{obs}})}\leq 1\) with \(R^{2}=1\) indicating a perfect reconstruction.
### Modeling Bird Migration
As a real-world application of a mass conservative model, we model large-scale bird migration within Europe based on weather radar measurements. We use data provided by Nussbaumer et al. (2021), containing estimated bird densities (\(birds/km^{3}\)) and velocities (\(m/s\)). Measurements are from 37 weather radar stations in France, Germany, and the Netherlands in up to 5-minute intervals at 200m altitude bins reaching up to 5km. The velocity data does not include a \(z\)-axis component. To the best of our knowledge, we are the first to train a continuous model constrained by mass conservation on such a data set. Due to space constraints, a detailed description of the data set, used layers, and training settings are provided in the Appendix Section A.7.2.
We train on 3 subsequent nights of April 2018, and evaluate the model by forecasting a 4th night. The validation set consists of the second half of the 3rd night. In this setting, the LFlow has to explain migration waves over multiple nights while not allowing any spurious density (dis-)appearances. To make sure that we do not lose significant predictive power, we verify that the density errors are
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & DFNNs & PINN & CNF-LFlow & LFlow \\ \hline
**2D** & 0.35 (-0.11 / 0.78) & 0.91 (0.89 / 0.93) & 0.67 (0.64 / 0.69) & **0.95** (0.93 / 0.96) \\
**3D** & 0.38 (0.36 / 0.40) & 0.53 (0.35 / 0.73) & 0.57 (0.52 / 0.64) & **0.97** (0.94 / 0.99) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean, (min/max) of test \(R^{2}\) for 5 repeated runs of the 2D and 3D synthetic experiment.
on a scale comparable to a completely unconstrained neural network. For that, we train a 5-layer multi-layer perceptron (MLP) with skip connections, 256 hidden units per layer, and ReLU activations. To encourage spatially smooth predictions, we add in both cases noise to the radar locations during training.
We report the average MSE (\(\pm\)SD) of the log1p transformed densities for 20 different random seeds. With an error of \(0.558\) (\(\pm 0.07\)), our LFlows achieve slightly better but overall comparable accuracy to the MLP with \(0.686\) (\(\pm 0.32\)). However, without fulfilling the CE, the velocity of the MLP can not be physically consistent with the density predictions. The velocity and density would then necessarily lead to disagreeing descriptions of the flow. In contrast, the LFlows ensure physical consistency of velocity and density. Figure 4 shows the predictions of the LFlow network, where we can see a mass of birds migrating northwards, as indicated by the velocity field. In addition, our network directly provides parcel trajectories. In praxis, experts can directly compare these to the migration paths taken by individual birds obtained for example via bird ringing or capture-mark-recapture studies.
Figure 4: Snapshots of the predicted bird density at three consecutive nights within central Europe. The 2D projection was obtained by integrating over altitudes covered by the radars. Red lines indicate \(xy\) projection of 3D trajectories from \(t_{0}\) to \(t\) using randomly sampled departure points.
Figure 3: Evaluation of the predicted density on the \(xy\)-plane with \(z=0\) for different methods. Arrows indicate velocity, except for the DFNNs where the normalized flux is shown. The first plot indicates the spatial splitting into train (green), validation (yellow), and test (purple) subsets.
Conclusion
In this work, we address the problem of modeling densities and velocities governed by the law of mass conservation. By establishing a link between time-conditioned diffeomorphisms and solution maps for the continuity equation, we introduce _Lagrangian Flow Networks_ as a general model for hydrodynamic flow problems. The proposed networks fulfill the continuity equation by construction. By avoiding any discretization of space or time, LFlows directly circumvent scaling and accuracy limitations commonly present in PINNs, Neural-ODEs, and more classical mesh-based methods. To address ill-posed settings we further provide two regularization methods that encourage solutions with simple trajectories and discourage solutions with a large total mass.
We evaluate the practical applicability of LFlows in synthetic experiments, demonstrating higher predictive accuracy compared to competing methods. In a real-world experiment, we model large-scale bird migrations, where mass conservation ensures the physical consistency of the predicted densities and velocities. In addition, the readily available parcel trajectories can be directly compared to migration paths of individual birds obtained from different data modalities.
|
2307.14828 | Identifying regime switches through Bayesian wavelet estimation:
evidence from flood detection in the Taquari River Valley | Two-component mixture models have proved to be a powerful tool for modeling
heterogeneity in several cluster analysis contexts. However, most methods based
on these models assume a constant behavior for the mixture weights, which can
be restrictive and unsuitable for some applications. In this paper, we relax
this assumption and allow the mixture weights to vary according to the index
(e.g., time) to make the model more adaptive to a broader range of data sets.
We propose an efficient MCMC algorithm to jointly estimate both component
parameters and dynamic weights from their posterior samples. We evaluate the
method's performance by running Monte Carlo simulation studies under different
scenarios for the dynamic weights. In addition, we apply the algorithm to a
time series that records the level reached by a river in southern Brazil. The
Taquari River is a water body whose frequent flood inundations have caused
various damage to riverside communities. Implementing a dynamic mixture model
allows us to properly describe the flood regimes for the areas most affected by
these phenomena. | Flávia Castro Motta, Michel Helcias Montoril | 2023-07-27T13:06:28Z | http://arxiv.org/abs/2307.14828v2 | Identifying regime switches through Bayesian wavelet estimation: evidence from flood detection in the Taqvari River Valley
###### Abstract
Two-component mixture models have proved to be a powerful tool for modeling heterogeneity in several cluster analysis contexts. However, most methods based on these models assume a constant behavior for the mixture weights, which can be restrictive and unsuitable for some applications. In this paper, we relax this assumption and allow the mixture weights to vary according to the index (e.g., time) to make the model more adaptive to a broader range of data sets. We propose an efficient MCMC algorithm to jointly estimate both component parameters and dynamic weights from their posterior samples. We evaluate the method's performance by running Monte Carlo simulation studies under different scenarios for the dynamic weights. In addition, we apply the algorithm to a time series that records the level reached by a river in southern Brazil. The Taquari River is a water body whose frequent flood inundations have caused various damage to riverside communities. Implementing a dynamic mixture model allows us to properly describe the flood regimes for the areas most affected by these phenomena.
## 1 Introduction
In several data analysis problems, we want to cluster observations between two groups. For instance, in many clinical studies, the goal is to classify patients according to disease absent or present (see Hall and Zhou, 2003; Rindskopf and Rindskopf, 1986; Hui and Zhou, 1998). In contamination problems found in astronomy investigations, on the other hand, the aim is to separate the objects of interest, called members (e.g., stars), from foreground/background objects contaminating the sample, known as contaminants (see Walker et al., 2009). In genetics, studies based on microarray data are usually driven to detecting differentially expressed genes under two conditions, e.g., "healthy tissue _versus_ diseased tissue" (see Bordes et al., 2006).
To address these scenarios of bimodal data sets, two-component mixture models have shown to be excellent alternatives to cluster data observations within the group that
better describes their features (Patra and Sen, 2016). In this context, the mixture model with two components will assume that the sample of data observations \(y_{1},\ldots,y_{n}\) is, in fact, the realization of a random variable \(Y\) that belongs to a population composed of two subpopulations, known as mixture components. Thus, at each point \(t\), \(t=1,\ldots,n\), \(Y\) is fitted according to some of the mixture components, dictated by a mixture weight \(\alpha\).
This setting may be very restrictive to some data sets. For instance, in epidemiological studies that evaluate the response to medications, the probability of classifying a patient in the group of "disease present" must be allowed to vary across time so that the longitudinal effect of the treatment can be properly measured. The same issue arises in quality control problems, where the probability of the supervised system operating in a failure-free regime is also not constant over time. In order to classify those features properly, under a mixture model assumption, the mixture weight should be allowed to vary according to the index (which could be time or location). In other words, it would be appropriate for the mixture weight to present a dynamic behavior.
Assuming dynamic mixture weights for mixture models is an extension that has already been applied in different areas, from traffic flow applications (see Nagy et al., 2011) to investigations in genetics (see Montoril et al., 2019, 2021). As discussed in Montoril et al. (2021), this generalization is similar to the extension of Hidden Markov Models (HMM) into non-homogeneous Hidden Markov Models (NHMM), first described by Hughes and Guttorp (1994). In both scenarios, one generalizes the model by considering unobserved varying probabilities. In the case of mixture models, those dynamic probabilities are the mixture weights, whereas, in HMM, they are the transition probabilities. It is important to emphasize that, although connected, dynamic mixture weights and transition probabilities are different things.
Considering a "non-homogeneous" structure for the mixture model implies that, besides estimating the dynamic mixture weights, one also needs to estimate the component parameters, and that increases the challenge. For instance, in Montoril et al. (2019), from a frequentist approach, the authors rely on wavelets to perform the estimation of the dynamic weights, where they transform the data in order to deal with a nonparametric heteroscedastic regression. Nonetheless, their procedure depends on assuming known means and variances for the mixture components, which, in practice, may be unrealistic.
In this work, unlike the aforementioned paper, the leading motivation is to provide a Bayesian approach that estimates not only the dynamic mixture weights but also the component parameters of a two-component mixture model. To accomplish this goal, we propose an efficient Gibbs sampling algorithm, which allows the distribution of the posterior draws to be used for inference purposes. Regarding the dynamic mixture weights, we use the data augmentation method by Albert and Chib (1993) and incorporate Bayesian wavelet denoising techniques to estimate the dynamic behavior of the mixture weight. We do this to exploit the good properties of wavelets in curves' estimation.
Wavelets are families of basis functions that can be used to represent other functions, signals, and images as a series of successive approximations (Hardle et al., 2012, Abramovich et al., 2000). In statistical applications, these mathematical tools have been successfully used to solve problems in nonparametric regression (see Donoho and
Johnstone, 1994, Cai and Brown, 1999); density estimation (see Donoho, 1993a, Donoho et al., 1996, Hall and Patil, 1995); time series analysis (see, e.g., Morettin, 1996, Priestley, 1996, Percival and Walden, 1999); among many other areas. There is a vast literature that provides a review of wavelets in statistics (see, e.g., Vidakovic, 1999, Ogden, 1997).
In this paper, wavelet bases are applied to enable the estimation of the dynamic mixture weights. To review the mathematical background and the terminology associated with the wavelet theory, in the following section, we provide a short introduction to the wavelet basis functions; the discrete wavelet transform (DWT); and, the Bayesian approach for denoising in a wavelet-based scenario. The remainder of the paper is organized as follows. In Section 3, we describe the dynamic mixture model considered in this paper and give details related to the MCMC sampling scheme constructed to perform the estimation. In Section 4, we present some numerical experiments. We first conduct Monte Carlo simulations to evaluate the method in a controlled setting. Then, we apply the MCMC algorithm to a river data set to identify periods when flood inundations occurred.
## 2 Wavelets
In this work, we use the term _wavelets_ to refer to a system of orthonormal basis functions for \(L_{2}([0,1])\) or \(L_{2}(\mathbb{R})\). The bases are generated by dyadic translations and dilations of the functions \(\varphi(\cdot)\) and \(\psi(\cdot)\), known, respectively, as the _scaling_ and _wavelet_ functions. These systems of integer-translates and dilates are given by
\[\varphi_{jok}(t) =2^{j_{0}/2}\varphi(2^{j_{0}}t-k),\quad k\in\mathbb{Z},\] \[\psi_{jk}(t) =2^{j/2}\psi(2^{j}t-k),\quad j,k\in\mathbb{Z}.\]
Thus, for any integer \(j_{0}\) and \(J\), a periodic function \(f(t)\in L_{2}([0,1])\) can be approximated in \(L_{2}\)-sense as the projection onto a multiresolution space \(V_{J}\):
\[f(t)=\sum_{k=0}^{2^{j_{0}}-1}c_{j_{0}k}\varphi_{j_{0}k}(t)+\sum_{j=j_{0}}^{J-1 }\sum_{k=0}^{2^{j}-1}d_{jk}\psi_{jk}(t),\]
where \(c_{j_{0}k}\)'s are known as _scaling coefficients_ and \(d_{jk}\)'s are called _detail coefficients_. The former are associated with the coarsest resolution level in witch \(f(t)\) was decomposed, \(j_{0}\). As a result, they capture the gross structure of \(f(t)\). The detail coefficients, on the other hand, being linked to finer resolution levels, can capture local information about \(f(t)\). Put simply, in moving from a coarser resolution level \(j\) to a finer \(j+1\), we are increasing the resolution at which a function is approximated, thus the expansion coefficients become more descriptive about the local features of \(f(t)\).
In practice, we access \(f(t)\in L_{2}([0,1])\) through a grid of points in time or space in which \(f\) is applied. Therefore, consider \(\mathbf{f}=(f(1/n),f(2/n),\ldots,f(n/n))^{T}\) to be a vector of samples of \(f(t)\) on an equispaced grid of \(n\) points, with \(n=2^{J}\), for some positive integer \(J\). To obtain the scaling and detail coefficients that approximate \(\mathbf{f}\), we
perform the _discrete wavelet transform_ (DWT) of \(\mathbf{f}\). In matrix notation, the DWT of \(\mathbf{f}\) is
\[\mathbf{\theta}=\mathbf{W}\mathbf{f}, \tag{1}\]
where \(\mathbf{\theta}=(c_{00},d_{00},\mathbf{d}_{1}^{T},\ldots,\mathbf{d}_{J-1}^{T})^{T}\) is a vector of size \(n\), having both scaling and detail coefficients \(\mathbf{d}_{j}=(d_{j0},d_{j1},\ldots,d_{j2^{j}-1})^{T}\), and \(\mathbf{W}\) is the DWT matrix with \((jk,i)\) entry given by \(W_{jk,i}\sqrt{n}\approx\psi_{jk}(i/n)=2^{j/2}\psi(2^{j}i/n-k)\), \(k=0,\ldots,2^{j}-1\), \(j=1,\ldots,J-1\). (Abramovich et al., 1998). By orthogonality, the multiplication \(\mathbf{W}^{\mathbf{T}}\mathbf{\theta}\) recovers the signal \(\mathbf{f}\). This transformation from wavelet coefficients to fitted values is known as the _inverse discrete wavelet transform_ (IDWT).
One of the main advantages provided by the DWT is the sparse representation generally achieved. As shown by Donoho (1993b), wavelets are _unconditional bases_ for a range of function spaces, such as Holder and Sobolev spaces, as well as spaces suitable for representing functions of 'bounded variation'. As an aside, it is also worth mentioning that using Mallat's pyramid algorithm (Mallat, 1989), the DWT and IDWT are performed requiring only \(\mathcal{O}(n)\) operations, which makes them very efficient in terms of computational speed and storage. These properties help to explain why wavelet bases are excellent tools to address problems of data analysis. In the following section, we present a brief review of handling the denoising problem within the wavelet domain, emphasizing the Bayesian framework due to its central role in the estimation process of this paper.
### Bayesian wavelet denoising
Consider the nonparametric regression model
\[\mathbf{y}=\mathbf{f}+\mathbf{e}, \tag{2}\]
where \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\) is the vector of observed values, \(\mathbf{f}=(f(1/n),\ldots,f(n/n))^{T}\) is the function of interest applied to a grid of \(n\) equally spaced points, and \(\mathbf{e}=(e_{1},\ldots,e_{n})^{T}\) is a vector of zero-mean random variables. For most applications, \(e_{t}\)'s are independent and identically distributed normal random variables with zero mean and constant variance \(\sigma^{2}\). The goal of nonparametric regression is to recover the unknown function \(f\) from the noisy observations \(\mathbf{y}\).
With that in mind, Donoho and Johnstone (1994) propose to transform the observations \(\mathbf{y}\) to the wavelet domain, shrink the noisy wavelet coefficients or even equal them to zero, based on some threshold rule, and then estimate \(\mathbf{f}\) by applying the IDWT to the regularized coefficients. This method is known in the literature as _wavelet shrinkage_. Therefore, let \(n\) be a power of two, \(n=2^{J}\) for some positive integer \(J\). Then, we can represent (2) in the wavelet domain as
\[\mathbf{d}^{*}=\mathbf{\theta}+\mathbf{\varepsilon}, \tag{3}\]
where \(\mathbf{d}^{*}=\mathbf{W}\mathbf{y}\), \(\mathbf{\theta}=\mathbf{W}\mathbf{f}\), and \(\mathbf{\varepsilon}=\mathbf{W}\mathbf{e}\), with \(\mathbf{W}\) being the DWT matrix.
From a Bayesian perspective, the wavelet shrinkage technique consists in assigning a
prior distribution to each wavelet coefficient of the unknown function. The idea is that, by choosing a prior able to capture the sparseness associated with most wavelet decompositions, we can estimate \(\mathbf{\theta}\), by imposing some Bayes rule on the resulting posterior distribution of the wavelet coefficients. Then, applying the IDWT to the estimated \(\mathbf{\theta}\) gives us an estimation of \(\mathbf{f}\).
One of the most appropriate prior choices for modeling wavelet coefficients are the _spike and slab_ priors. First consolidated within Bayesian variable selection methods (George and McCulloch, 1993), these kinds of prior are a mixture between two components: one that concentrates its mass at values close to zero or even in zero (Dirac delta) and another whose mass is spread over a wide range of possible values for the unknown parameters. Choosing this mixture as prior to the distribution of wavelet coefficients allows the first component, known as _spike_, to capture the null wavelet coefficients, while the second component, called _slab_, describes the coefficients associated with the unknown function.
A spike and slab prior frequently assigned to wavelet coefficients is the mixture between a point mass at zero and a Gaussian distribution (see, e.g., Abramovich et al., 1998). In this scenario, each detail wavelet coefficient is distributed following
\[\pi_{j}\mathrm{N}(0,\upsilon_{j}^{2})+(1-\pi_{j})\delta_{0}(\theta_{jk}), \tag{4}\]
\(k=0,1,\ldots,2^{j}-1\), \(j=0,1,\ldots,J-1\), with \(\delta_{0}\) being a point mass at zero. The prior specification is usually completed by assigning a diffuse prior to the scaling coefficient at the coarsest level \(c_{00}\). Thus, the sample scaling coefficient obtained from the DWT of the data estimates \(c_{00}\)(Abramovich et al., 1998).
Under the prior (4), the posterior distribution for each detail coefficient is also a mixture between a Gaussian distribution and \(\delta_{0}\), given by
\[\begin{split}\theta_{jk}|d_{jk}^{*}&\sim\pi_{\rm post }\mathrm{N}\left(\frac{\upsilon_{j}^{2}}{1+\upsilon_{j}^{2}}d_{jk}^{*},\frac{ \upsilon_{j}^{2}}{1+\upsilon_{j}^{2}}\right)+(1-\pi_{\rm post})\delta_{0}( \theta_{jk}),\\ \pi_{\rm post}&=\frac{\pi_{j}g_{\upsilon_{j}^{2}}(d_ {jk}^{*})}{\pi_{j}g_{\upsilon_{j}^{2}}(d_{jk}^{*})+(1-\pi_{j})\phi(d_{jk}^{*}) },\end{split} \tag{5}\]
\(k=0,1,\ldots,2^{j}-1\), \(j=0,1,\ldots,J-1\), where \(\phi\) denotes the standard normal density and \(g_{\upsilon_{j}^{2}}\) denotes the convolution between the slab component in (4) (in this case \(\mathrm{N}(0,\upsilon_{j}^{2})\)) and \(\phi\). Using \(\gamma\) to denote the slab density and \(\star\) to denote the convolution operator, we can write \(g=\gamma\star\phi\). It should be stressed that, as shown by Abramovich et al. (1998), using the posterior medians as the pointwise estimates of \(\mathbf{\theta}\) yields a _thresholding rule_. In other words, we are able to equal the estimated noisy coefficients to zero.
In the _Empirical Bayes thresholding_ method by Johnstone and Silverman (2005a;b), the authors propose replacing the Gaussian component in (4) with heavy-tailed distributions, such as the Laplace density. This replacement intends to provide larger estimates for the non-null coefficients than those obtained from Gaussian distributions. In this scenario, considering the Laplace density as the slab component, the prior for each detail
wavelet coefficient can be written as
\[\pi_{j}\gamma_{a}(\theta_{jk})+(1-\pi_{j})\delta_{0}(\theta_{jk}), \tag{6}\]
\(k=0,1,\ldots,2^{j}-1\), \(j=0,1,\ldots,J-1\), where \(\gamma_{a}(x)\) denotes the Laplace density with scale parameter \(a>0\), i.e.,
\[\gamma_{a}(x)=\frac{a}{2}\exp(-a|x|),\quad x\in\mathbb{R}. \tag{7}\]
Johnstone and Silverman (2005a;b) thresholding method is called Empirical Bayes because the hyperparameters \(\pi_{j}\) and \(a\) are chosen empirically from the data, using a marginal maximum likelihood approach. Thus, for each resolution level \(j\) of the wavelet transform, the arguments \(\pi_{j}\) and \(a\) that maximize the marginal log-likelihood are selected and plugged back into the prior. Then, the estimation of \(\mathbf{\theta}\) is carried out with either posterior medians, posterior means, or other estimators. Under these circumstances, the posterior distribution is given by
\[\begin{split}\theta_{jk}|d_{jk}&\sim\pi_{\text{ post}}f_{1}(\theta_{jk}|d_{jk})+(1-\pi_{\text{post}})\delta_{0}(\theta_{jk}),\\ \pi_{\text{post}}&=\frac{\pi_{j}g_{a}(d_{jk}^{*})}{ \pi_{j}g_{a}(d_{jk}^{*})+(1-\pi_{j})\phi(d_{jk}^{*})},\end{split} \tag{8}\]
\(k=0,1,\ldots,2^{j}-1\), \(j=0,1,\ldots,J-1\), with \(f_{1}(\theta_{jk}|d_{jk})\) being the non-null mixture component and \(g_{a}=\gamma_{a}\star\phi\). It can be shown that \(f_{1}(\theta_{jk}|d_{jk})\) is a mixture of two truncated normal distributions. Define \(f_{\text{TN}}(x|\mu,\sigma,\alpha,\beta)\) to be the density of a truncated normal distribution with location parameter \(\mu\), scale parameter \(\sigma\), minimum value \(\alpha\) and maximum value \(\beta\). Then, with a slight abuse of notation, we can write \(f_{1}(\theta_{jk}|d_{jk})\) as
\[\begin{split} f_{1}(\theta_{jk}|d_{jk})&=\eta\times f _{\text{TN}}\left(\theta_{jk}\Big{|}\frac{d_{jk}}{\sigma_{j}}-a,1,0,+\infty \right)\\ &\quad+(1-\eta)\times f_{\text{TN}}\left(\theta_{jk}\Big{|}\frac {d_{jk}}{\sigma_{j}}+a,1,-\infty,0\right),\end{split} \tag{9}\]
where
\[\eta=\frac{\exp{(-a\frac{d_{jk}}{\sigma_{j}})\Phi(\frac{d_{jk}}{\sigma_{j}}-a )}}{\exp{(a\frac{d_{jk}}{\sigma_{j}})\tilde{\Phi}(\frac{d_{jk}}{\sigma_{j}}+a) +\exp{(-a\frac{d_{jk}}{\sigma_{j}})\Phi(\frac{d_{jk}}{\sigma_{j}}-a)}}},\]
with \(\Phi\) denoting the standard normal cumulative function, and \(\tilde{\Phi}=1-\Phi\).
## 3 The model
Let \(y_{1},\ldots y_{n}\) be a random sample from the dynamic Gaussian mixture model
\[y_{t} =(1-z_{t})x_{1t}+z_{t}x_{2t}, \tag{10}\] \[x_{kt}|\mu_{k},\tau_{k}^{2} \sim\mathrm{N}(\mu_{k},\tau_{k}^{-2}),\quad k=1,2,\] \[z_{t}|\alpha_{t} \sim\mathrm{Bern}(\alpha_{t}),\quad t=1,\ldots,n,\]
where \(z_{t}\)'s are allocation variables that indicate to which mixture component the observations \(y_{t}\)'s belong to. The \(z_{t}\) have a Bernoulli distribution with parameter \(\alpha_{t}\), the mixture weight that has a dynamic behavior. In (10), the component parameters \(\mu_{k}\) and \(\tau_{k}^{2}\), \(k=1,2\), and the dynamic mixture weights \(\alpha_{t}\), \(t=1,\ldots,n\), are parameters to be estimated.
Following Albert and Chib (1993), we introduce a data augmentation approach by associating an auxiliary variable \(l_{t}\) to each allocation variable \(z_{t}\). In the original work, \(l_{t}=\mathbf{x}_{t}^{T}\mathbf{\theta}+e_{t}\) and \(e_{t}\sim\mathrm{N}(0,1)\), where \(\mathbf{x}_{t}\) is a vector of \(p\) known covariates and \(\mathbf{\theta}\) is a vector of \(p\) unknown parameters. In greater detail, \(z_{t}=1\), if \(l_{t}>0\), and \(z_{t}=0\), otherwise. However, unlike in Albert and Chib (1993), where the design matrix \(\mathbf{X}\) in the prob regression corresponds to the covariates related to \(\alpha_{t}\), in this paper, \(\mathbf{X}=\mathbf{W}^{T}\), where \(\mathbf{W}\) is the DWT matrix. Thus, for every \(t=1,\ldots,n\), we have
\[l_{t} =\mathbf{x}_{t}^{T}\mathbf{\theta}+e_{t}, \tag{11}\] \[e_{t} \sim\mathrm{N}(0,1),\]
where \(\mathbf{x}_{t}\) corresponds to the \(t\)-th column of matrix \(\mathbf{W}\) and \(\mathbf{\theta}=(c_{00},d_{00},\mathbf{d}_{1}^{T},\ldots,\mathbf{d}_{J-1}^{T})^{T}\) is the vector of wavelet coefficients, such that \(n=p=2^{J}\). Therefore, the dynamic mixture weight \(\alpha_{t}\), which is the probability of success of \(z_{t}\), is given by the binary regression model,
\[\alpha_{t}=\Phi(\mathbf{x}_{t}^{T}\mathbf{\theta}),\]
where \(\Phi\) is the standard Gaussian cumulative function.
### Bayesian estimation
In this paper, the estimation of both component parameters and dynamic mixture weights is performed through a Gibbs sampling algorithm. By giving conjugate prior distributions to the parameters, we sample from their full conditional posterior distributions and make inferences about the parameter values (e.g., point and credible estimates). In this section, we first present the full conditional posterior distributions from which we draw the parameters of (10). Then, we detail the MCMC algorithm built to perform the sampling.
In (10), since we are mostly interested in the estimation of the mixture weights, we assume that the sample \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\) is a time series whose dependence structure is determined by the dynamic behavior of \(\alpha_{t}\)'s. In this setting, given the component parameters and the dynamic mixture weights, the observations \(y_{t}\)'s are conditionally
independent, and we have \(p(\mathbf{y}|\mathbf{\mu},\mathbf{\tau^{2}},\mathbf{z})=\prod_{t=1}^{n}p(y_{t}|z_{t},\mathbf{\mu},\mathbf{ \tau^{2}})\). Thus, the complete-data likelihood function \(p(\mathbf{y}|\mathbf{\mu},\mathbf{\tau^{2}},\mathbf{z})\) is given by
\[\prod_{k=1}^{2}\left(\frac{\tau_{k}^{2}}{2\pi}\right)^{T_{k}/2}\exp{\left[- \frac{\tau_{k}^{2}}{2}\sum\limits_{t:z_{t}=k-1}(y_{t}-\mu_{k})^{2}\right]},\]
where \(T_{k}=\#\{t:z_{t}=k-1,\,t=1,2,...,n\}\) and \(s_{k}=\sum\limits_{t:z_{t}=k-1}y_{t}\) for \(k=1,2\). For the complete-data Bayesian estimation of \(\mathbf{\mu}=(\mu_{1},\mu_{2})^{T}\) and \(\mathbf{\tau^{2}}=(\tau_{1}^{2},\tau_{2}^{2})^{T}\), \(p(\mathbf{y}|\mathbf{\mu},\mathbf{\tau^{2}},\mathbf{z})\) is combined with prior distributions to obtain the posteriors. A common issue that arises in the Bayesian estimation of mixture models is the invariance of the mixture likelihood function under the relabelling of the mixture components, known as _label switching_. To address this problem in our approach, we adopt the simple constraint \(\mu_{1}<\mu_{2}\) and reorder the pairs \((\mu_{k},\tau_{k}^{2})\) according to this restriction in the MCMC sampling scheme.
Following the usual practice of assigning independent prior distributions to the component parameters (see Escobar and West, 1995, Richardson and Green, 2002), we assume \(p(\mathbf{\mu},\mathbf{\tau_{k}^{2}})=p(\mu_{1})p(\tau_{1}^{2})p(\mu_{2})p(\tau_{2}^{2})\) and place the following priors on \(\mu_{k}\) and \(\tau_{k}^{2}\), \(k=1,2\),
\[\mu_{k} \sim\mathrm{N}(b_{0k},B_{0k}), \tag{12}\] \[\tau_{k}^{2} \sim\Gamma(c_{0k},C_{0k}). \tag{13}\]
For the sake of simplicity, hereafter we denote by \([\dots]\) the set of all remaining variables to be considered for the posterior in use. Hence, under the conjugate priors (12) and (13), one obtains the conditional posterior distributions for \(\mu_{k}\) and \(\tau_{k}^{2}\),
\[\mu_{k}|[\dots] \sim\mathrm{N}(b_{k},B_{k}), \tag{14}\] \[\tau_{k}^{2}|[\dots] \sim\Gamma(c_{k},C_{k}), \tag{15}\]
where
\[B_{k} =(B_{0k}^{-1}+\tau_{k}^{2}T_{k})^{-1}, C_{k} =C_{0k}+\frac{\sum\limits_{t:z_{t}=k-1}(y_{t}-\mu_{k})^{2}}{2},\] \[b_{k} =B_{k}(\tau_{k}^{2}s_{k}+B_{0k}^{-1}b_{0k}), c_{k} =c_{0k}+\frac{T_{k}}{2}.\]
It is worth stressing that assuming the mixture weights to have a dynamic behavior does not interfere with the full conditional posteriors of the component parameters, because they are calculated as in the case of the ordinary (static) mixture model.
Given the observations \(\mathbf{y}\), the component parameters \(\mathbf{\mu}\), \(\mathbf{\tau^{2}}\) and \(\mathbf{\alpha}=(\alpha_{1},\dots,\alpha_{n})^{T}\), the \(z_{t}\)'s are conditionally independent and \(p(z_{t}=1|\mathbf{y},\mathbf{\mu},\mathbf{\tau^{2}},\mathbf{\alpha})\propto\alpha_{t}f_{N}(y_ {t}|\mu_{2},\tau_{2}^{-2})\). Thus, one can easily show that, for each \(t=1,\dots,n\), the full conditional posterior of \(z_{t}\)
is given by
\[z_{t}|[\dots] \sim\text{Bern}(\beta_{t}), \tag{16}\] \[\beta_{t} =\frac{\alpha_{t}f_{N}(y_{t}|\mu_{2},\tau_{2}^{-2})}{\alpha_{t}f_{N }(y_{t}|\mu_{2},\tau_{2}^{-2})+(1-\alpha_{t})f_{N}(y_{t}|\mu_{1},\tau_{1}^{-2})}.\]
The latent variables introduced in (11) are unknown. However, given the vector of wavelet coefficients \(\mathbf{\theta}\) and the allocation data \(\mathbf{z}=(z_{1},\dots,z_{n})^{T}\), we can use the structure of the MCMC algorithm to draw \(l_{1},\dots,l_{n}\) from their posterior distribution, which is
\[l_{t}|[\dots] \sim\text{N}(\mathbf{x}_{t}^{T}\mathbf{\theta},1)\text{ truncated at left by }0\text{ if }z_{t}=1, \tag{17}\] \[l_{t}|[\dots] \sim\text{N}(\mathbf{x}_{t}^{T}\mathbf{\theta},1)\text{ truncated at right by }0\text{ if }z_{t}=0.\]
For the vector of parameters \(\mathbf{\theta}\), Albert and Chib (1993) derived the posterior distribution of \(\mathbf{\theta}\) given \(\mathbf{z}\) and \(\mathbf{l}\) under diffuse and Gaussian priors. In this work, on the other hand, \(\mathbf{\theta}\) is a vector of wavelet coefficients. As a result, we need a _sparsity inducing_ prior able to address the noise \(e_{t}\) in (11). Thus, following the discussion in Section 2.1, we suggest using spike and slab priors for the components of vector \(\mathbf{\theta}\). In this scenario, we assume that the entries of \(\mathbf{\theta}\) are mutually independent. For \(t=2^{j}+k+1\), \(k=0,\dots,2^{j}-1\) and \(j=0,\dots,J-1\), this kind of prior can be specified as
\[\theta_{t}\sim(1-\pi_{j})\delta_{0}(\cdot)+\pi_{j}\gamma(\cdot), \tag{18}\]
where we consider \(\gamma\) to be either the Gaussian distribution or the Laplace distribution as presented in (4) and in (6), respectively. Following Abramovich et al. (1998), the prior specification is completed by assigning a diffuse prior on the scaling coefficient at the coarsest level \(c_{00}\), in the first entry of vector \(\mathbf{\theta}\).
Under (18), the posterior distribution of \(\theta_{t}\) is given by
\[\theta_{t}|[\dots] \sim(1-\pi_{\text{post}})\delta_{0}(\theta_{t})+\pi_{\text{post} }f_{1}(\theta_{t}|\mathbf{w}_{t}^{T}\mathbf{l}), \tag{19}\] \[\pi_{\text{post}} =\frac{\pi_{j}g(\mathbf{w}_{t}^{T}\mathbf{l})}{\pi_{j}g(\mathbf{w}_{t}^{T} \mathbf{l})+(1-\pi_{j})\phi(\mathbf{w}_{t}^{T}\mathbf{l})},\]
where \(\mathbf{w}_{t}\) is a column-vector corresponding to the \(t\)-th row of matrix \(\mathbf{W}\), \(f_{1}(\theta_{t}|\mathbf{w}_{t}^{T}\mathbf{l})\) is the posterior non-null mixture component and \(g\) is the convolution between \(\gamma\) and the standard normal distribution \(\phi\), \(g=\gamma\star\phi\).
Regarding the hyperparameters of the spike and slab priors, that is, the sparsity parameter \(\pi_{j}\) and the variance \(\upsilon_{j}^{2}\) (Gaussian component) or the scale parameter \(a\) (Laplace component), we follow the approach in Johnstone and Silverman (2005a;b) and estimate them jointly by maximizing the marginal log likelihood function, which is given by
\[\sum_{i=1+2^{j}}^{2^{j+1}}\log\{(1-\pi_{j})\phi(\mathbf{w}_{i}^{T}\mathbf{l})+\pi_{j}g (\mathbf{w}_{i}^{T}\mathbf{l})\}.\]
These values are then used in (19) to sample the vector \(\mathbf{\theta}\) in the MCMC procedure, which is detailed in Algorithm 1.
```
1:Choose number of iterations \(N\).
2:Specify initial values for \(\mathbf{\mu}^{(0)}\), \(\mathbf{\tau}^{2}{}^{(0)}\), \(\mathbf{z}^{(0)}=(z_{1}^{(0)},\ldots,z_{n}^{(0)})^{T}\) and \(\mathbf{\alpha}^{(0)}\).
3:for\(i\gets 1\) to \(N\)do
4: Sample \(\mu_{1}^{(i)}\sim p(\mu_{1}|[\ldots])\). \(\triangleright\) See (14)
5: Sample \(\tau_{1}^{2(i)}\sim p(\tau_{1}^{2}|[\ldots])\). \(\triangleright\) See (15)
6: Sample \(\mu_{2}^{(i)}\sim p(\mu_{2}|[\ldots])\). \(\triangleright\) See (14)
7: Sample \(\tau_{2}^{2(i)}\sim p(\tau_{2}^{2}|[\ldots])\). \(\triangleright\) See (15)
8:if\(\mu_{2}<\mu_{1}\)then
9: Permute the labeling of pairs \((\mu_{k}^{(i)},\tau_{k}^{2(i)})\).
10:endif
11: Sample \(z_{t}^{(i)}\sim p(z_{t}|[\ldots])\), for \(t=1,\ldots,n\). \(\triangleright\) See (16)
12: Sample \(l_{t}^{(i)}\sim p(l_{t}|[\ldots])\), for \(t=1,\ldots,n\). \(\triangleright\) See (17)
13: Select \(v_{2}^{2}\)\(/\)\(a\) and \(\pi_{j}\) by marginal maximum likelihood.
14: Sample \(\theta_{t}^{(i)}\sim p(\theta_{t}|[\ldots])\), for \(t=1,\ldots,n\). \(\triangleright\) See (19)
15: Calculate \(\mathbf{\alpha}^{(i)}=\Phi(\mathbf{W}^{T}\mathbf{\theta})\). \(\triangleright\)\(\mathbf{W}\) is the matrix form of the DWT.
16:endfor
```
**Algorithm 1** Gibbs sampling algorithm - Data augmentation
As discussed in Section 2.1, using (18) as prior for \(\theta_{t}\) allows the posterior medians to act like thresholding rules, equating to zero noisy coefficients. Because of this, we elect the absolute loss as the Bayes rule estimator for the numerical experiments performed using the MCMC method described in Algorithm 1.
## 4 Numerical Experiments
In this section, we illustrate the estimation process discussed in the former sections by conducting Monte Carlo experiments and applying it to a river quota data set to identify flood regimes. In both studies, we implement Algorithm 1 running 6,000 iterations, discarding the first 1,000 as burn-in and performing thinning every 5 draws. We consider the following independent priors for the component parameters: \(\mu_{1}\sim N(q_{1},s^{2})\), \(\tau_{1}^{2}\sim\Gamma(0.01,0.01)\), \(\mu_{2}\sim N(q_{3},s^{2})\), and \(\tau_{2}^{2}\sim\Gamma(0.01,0.01)\), where \(q_{1}\) and \(q_{3}\) are the first and third quartiles, respectively, of the observed data and \(s^{2}\) is the sample variance. The purpose of using the data statistics is to reduce subjectivity, and, by adopting the quartiles, to segregate the data into two groups.
Concerning the wavelet bases used to perform the transforms, we use the coiflet basis with six vanishing moments. It is important to highlight that, according to other simulated studies, using other Daubechies wavelet bases provides similar results to those achieved by this specific coiflet basis. We do not present these supplementary analyses due to space limitations.
### Monte Carlo simulations
In our simulated investigations, we generate the artificial data sets by mixing two normally distributed samples of size 1,024, as defined in (10). In this case, we set the following values for the component parameters: \(\mu_{1}=0\), \(\mu_{2}=2\), \(\tau_{1}^{2}=4\) and \(\tau_{2}^{2}=4\). Concerning the dynamic mixture weights, we employ three different curves for \(\alpha_{t}\): sinusoidal, blocks, and bumps, with the first being defined as \(\alpha_{t}=0.4\,\cos(2\pi(t+\pi))+0.5\), and the last two being rescaled test functions introduced by Donoho and Johnstone (1994).
For all three behaviors of \(\alpha_{t}\), we run 1,000 Monte Carlo replicates. Additionally, we regard both spike and slab priors, discussed in Section 2.1, for the distribution of the wavelet coefficients, namely: the spike and slab prior with Gaussian slab (SSG), and the spike and slab prior with Laplace slab (SSL). Hereafter, we use the acronyms, SSG and SSL, to refer to these priors.
As mentioned in Section 3.1, the point estimates are the medians of the MCMC chains for each Monte Carlo replicate. To appraise the performance of the estimation as a whole, we calculate the average of these point estimates and their 95% HPD intervals. The results for the component parameters are presented in Table 1 and Table 2. It is worth noting that the method, under both priors, performs satisfactorily, with some estimates even coinciding with the parameter values, which, in turn, are encompassed by the HPD intervals in every \(\alpha_{t}\)'s scenario.
Regarding the dynamic mixture weights, Figure 1 shows the results. For the sinusoidal scenario, we see that the method, considering both SSG and SSL priors, succeeds in mimicking the curve's shapes. Although the bumps and blocks functions are less smooth than the sinusoidal, the method still can satisfactorily estimate their curves. In fact, for the bumps, the point estimates not only follow the sharp shape of the function but also captures the null values correctly. For the blocks scenario, the estimates properly mimic the discontinuity regions and the HPD intervals succeed at encompassing the entire curve.
### Taquari quota data set
Part of the Taquari-Antas Hydrographic Basin (TAHB) in the state of Rio Grande do Sul (south of Brazil), the Taquari River is located in the upper domain of the Baixo Taquari-Antas Valley, a region that has been affected by an increasing number of ex
\begin{table}
\begin{tabular}{l c c c c} \hline \(\alpha_{t}\)’s curve & \(\mu_{1}=0\) & \(\tau_{1}^{2}=4\) & \(\mu_{2}=2\) & \(\tau_{2}^{2}=4\) \\ \hline Sinusoidal & 0.00 (-0.04;0.06) & 4.00 (3.58;4.65) & 2.00 (1.95;2.04) & 4.00 (3.40;4.59) \\ Bumps & 0.00 (-0.04;0.02) & 4.01 (3.59;4.38) & 1.90 (1.60;2.15) & 3.62 (1.06;6.45) \\ Blocks & 0.00 (-0.04;0.06) & 4.06 (3.41;4.71) & 2.00 (1.95;2.06) & 4.00 (3.50;4.63) \\ \hline \end{tabular}
\end{table}
Table 1: Averages of the point estimates (95% HPD credible intervals) for the component parameters \(\mu_{1},\tau_{1}^{2},\mu_{2}\) and \(\tau_{2}^{2}\), based on 1,000 replications of data sets, considering the SSG prior to \(\theta\).
treme rainfall events in recent decades (Tognoli et al., 2021). As a result, on many occasions, the rain excess is not drained efficiently and floods riverside regions. This phenomenon is aggravated in urban areas, where the human occupation of floodplains and the soil impermeability contribute to reducing the infiltration capacity and overloading the drainage system, leading to flood inundations (Kurek, 2016).
As reported by Oliveira et al. (2018), Encantado is one of the cities adjacent to the course of the Taquari River most susceptible to fluvial inundations. The geomorphological and topographical characteristics of Encantado's land favor the water accumulation and restrict its drainage (Oliveira et al., 2018). Furthermore, the urbanization of areas with high flood vulnerability in this municipality contributes to intensifying the occurrence of flood inundations (Kurek, 2016).
Because of these circumstances, we propose implementing Algorithm 1 to a time series of Taquari's river quota to estimate the probability of an inundation regime in Encantado's urban areas. A river quota is the height of the water body, conventionally measured in centimeters (cm), on a given region of the riverbank. The data set corresponds to the records of Encantado's fluviometric station identified by the code 86720000. The monthly time series of this station comes from the Hidroweb system, an integrated platform of the National Water Resources Management System (SINGREH) available at [https://www.snirh.gov.br/hidroweb/serieshistoricas](https://www.snirh.gov.br/hidroweb/serieshistoricas). Figure 2 shows a map of Encantado, highlighting the station used in this study.
To validate the estimated probabilities, we use a report from the Brazilian Geological Survey (CPRM) (Peixoto and Lamberty, 2019) that records the months when floods occurred in Encantado. Therefore, we can see if the estimates of the mixture weight properly describe the flood regimes, _no inundation_ and _inundation_, for each month. It is worth highlighting that since inundations can last for a couple of days or even more, there are no records of the specific days when these events took place, only the months. Because of that, and considering that the model is a mixture of two Gaussian distributions, we use the monthly average of the Taquari quota to estimate the probability associated with flood inundations. The period analyzed was from May 2004 to December 2014, consisting of 128 observations. Figure 3 presents this data set.
Table 3 shows the point estimates for the component parameters that describe each flood regime. Note that the results provided by the method under the SSG prior are similar to those achieved by it assigning the SSL prior to the distribution of wavelet coefficients. Concerning the dynamic mixture weights, Figure 4 shows the estimates considering both priors for \(\mathbf{\theta}\). By analyzing the results, we see that using the SSL prior
\begin{table}
\begin{tabular}{l c c c c} \hline \(\alpha_{t}\)’s curve & \(\mu_{1}=0\) & \(\tau_{1}^{2}=4\) & \(\mu_{2}=2\) & \(\tau_{2}^{2}=4\) \\ \hline Sinusoidal & 0.00 (-0.05;0.05) & 4.05 (3.50;4.62) & 2.00 (1.95;2.04) & 3.99 (3.49;4.50) \\ Bumps & 0.00 (-0.04;0.03) & 3.96 (3.34;4.53) & 1.89 (1.43;2.22) & 3.66 (0.71;6.56) \\ Blocks & 0.02 (-0.15;0.05) & 3.91 (3.40;5.60) & 1.95 (1.28;2.07) & 3.85 (0.82;4.76) \\ \hline \end{tabular}
\end{table}
Table 2: Averages of the point estimates (95% HPD credible intervals) for the component parameters \(\mu_{1},\tau_{1}^{2},\mu_{2}\) and \(\tau_{2}^{2}\), based on 1,000 replications of data sets, considering the SSL prior to \(\mathbf{\theta}\).
allows estimating higher peaks for the probabilities related to inundation periods than using the SSG prior. In fact, under a Bayes classifier, if the method employs the SSG prior, it can detect neither the months when flood episodes were reported nor change points (\(\{t:\alpha_{t}=0.5\}\)).
In summary, the method provides results consistent with the data on flood inundation
Figure 1: Estimates of the \(\alpha_{t}\)’s provided by SSG prior (right); and SSL prior (left). The curves assigned to \(\alpha_{t}\) are, respectively: the sinusoidal (top), the bumps (middle), and the blocks (bottom). The full lines correspond to the \(\alpha_{t}\)’s curve, the dashed lines correspond to the average of the pointwise estimates and the shaded areas correspond to the 95% HPD intervals.
in Encantado available in other works and reports (see Peixoto and Lamberty, 2019, Tognoli et al., 2021). In addition, choosing the Laplace density in the spike and slab prior tends to provide dynamic weight estimates more capable of detecting floods.
## 5 Conclusion
This paper presents an approach to identify regime switches in bimodal data sets. We use a two-component mixture model whose mixture weight varies according to some index, like time. This adaptation makes the model more flexible and adaptive to a
\begin{table}
\begin{tabular}{l r r} \hline Parameters & SSG prior & SSL prior \\ \hline \(\mu_{1}\) & 227.07 (210.09; 242.89) & 220.60 (206.25; 236.28) \\ \(\tau_{1}^{2}\) & 2.30e-4 (1.54e-4; 3.15e-4) & 2.58e-4 (1.77e-4; 3.45e-4) \\ \(\mu_{2}\) & 405.01 (316.38; 483.35) & 400.20 (355.72; 439.54) \\ \(\tau_{2}^{2}\) & 1.14e-4 (2.56e-5; 3.42e-4) & 1.04e-4 (3.65e-5; 1.85e-4) \\ \hline \end{tabular}
\end{table}
Table 3: Medians (95% HPD credible intervals) for the component parameters \(\mu_{1},\tau_{1}^{2},\mu_{2}\) and \(\tau_{2}^{2}\) of the Taquari quota data set, based on the MCMC samples.
Figure 2: Location map of the fluorometric station in the city of Encantado. In the upper-right corner, the Taquari Antas Hydrographic Basin in Rio Grande do Sul state, south of Brazil.
broader range of clustering and classification problems. Furthermore, we use wavelet bases to estimate the dynamic behavior of the mixture weight due to their excellent properties when it comes to curves' estimation. However, unlike other approaches in the literature that also rely on wavelets (see Montoril et al., 2019), here we consider a Bayesian framework and propose estimating the dynamic weights and the component parameters jointly through an efficient Gibbs sampling algorithm.
We analyze the performance of this MCMC algorithm by conducting Monte Carlo experiments and illustrate the approach with an application to a river quota data set. Results from the simulations show that the method provides good estimates for the
Figure 4: Estimates of the \(\alpha_{t}\)’s of the Taquari quota data provided by SSG prior (right); and SSL prior (left). The full (black) lines correspond to the point estimates (medians) and the dashed (blue) lines mark the months when flood inundations were reported by Peixoto and Lamberty (2019).
Figure 3: Monthly average of Taquari’s river quota (cm) from May 2004 to December 2014.
component parameters and the dynamic weights even when the function behind \(\alpha_{t}\)'s behavior is rougher. Additionally, the estimation performance using SSG prior is similar to the performance achieved when SSL prior is employed. The same does not apply to the results obtained in the river quota data set. For this application, we notice that implementing the method under the SSG prior to the wavelet coefficients yields smaller values for the probabilities associated with inundations occurrence than the estimates provided by using the SSL prior. This is likely because the Gaussian distribution does not have heavy tails, unlike the Laplace distribution.
|
2308.02617 | Triplet-odd pairing in finite nuclear systems: Even-even singly closed
nuclei | Background: The appearance of the pairing condensate is an essential feature
of many-fermion systems. There are two possible types of pairing: spin-singlet
and spin-triplet. However, an open question remains as to whether the
spin-triplet pairing condensate emerges in finite nuclei. Purpose: The aim of
this work is to examine the coexistence of the spin-singlet and spin-triplet
like-particle pairing condensates in nuclei. We also discuss the dependence on
the type of pairing functional. Method: The Hartree-Fock-Bogoliubov
calculations with a Skyrme $+$ local-pair energy-density functional (EDF) are
performed to investigate the pairing condensate in the spherical ground states
of Ca and Sn isotopes. Results: The spin-singlet pair EDF induces not only the
spin-singlet but also the spin-triplet pairing condensates due to a strong
spin-orbit splitting. By discarding the spin-orbit EDF, only the spin-singlet
pairing condensate appears. The spin-triplet pair EDF, however, induces the
spin-orbit splitting and accordingly the spin-singlet pairing condensate.
Conclusions: The spin-orbit splitting plays an essential role in the
coexistence of the spin-singlet and spin-triplet pairing condensates in nuclei. | Nobuo Hinohara, Tomohiro Oishi, Kenichi Yoshida | 2023-08-04T14:45:38Z | http://arxiv.org/abs/2308.02617v3 | # Triplet-odd pairing in finite nuclear systems (I): Even-even singly-closed nuclei
###### Abstract
**Background:** The appearance of the pairing condensate is an essential feature of many-fermion systems. There are two possible types of pairing: spin-singlet and spin-triplet. However, an open question remains as to whether the spin-triplet pairing condensate emerges in finite nuclei.
**Purpose:** The aim of this work is to examine the coexistence of the spin-singlet and spin-triplet like-particle pairing condensates in nuclei. We also discuss the dependence on the type of pairing functional.
**Method:** The Hartree-Fock-Bogoliubov calculations with a Skyrme + local-pair energy-density functional (EDF) are performed to investigate the pairing condensate in the spherical ground states of Ca and Sn isotopes.
**Results:** The spin-singlet pair EDF induces not only the spin-singlet but also the spin-triplet pairing condensates due to a strong spin-orbit splitting. By discarding the spin-orbit EDF, only the spin-singlet pairing condensate appears. The spin-triplet pair EDF, however, induces the spin-orbit splitting and accordingly the spin-singlet pairing condensate.
**Conclusions:** The spin-orbit splitting plays an essential role in the coexistence of the spin-singlet and spin-triplet pairing condensates in nuclei.
## I Introduction
The pairing is universal in many-fermion systems [1; 2]. A mean-field model was first introduced by Bardeen, Cooper, and Schrieffer (BCS) for describing the electronic superconductivity [3]. Within the original BCS theory, assumed is the condensation of a Cooper pair with a relative angular momentum being \(s\)-wave, the total spin zero, and the center-of-mass momentum zero. A variety of forms of pairing, unconventional pairings, that are different from the BCS type have been also found or predicted [4; 5]. Especially, in electronic and cold-atomic systems, the spin-triplet Cooper pair has been actively investigated. The first example is the superfluidity of Helium-3 atoms [6; 7; 8], where the spin-fluctuation interaction induces the spin-triplet Cooper pairs of fermionic atoms. The Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) type of superconductivity, where the center-of-mass momentum is not zero, has been also discussed for decades [9; 10; 11; 12; 13]. In several spices of heavy-fermion metals and ferromagnetic Mott insulators [14; 15; 16; 17], the spin-triplet type of superconductivity is expected. It is worthwhile to mention that spin-triplet pairing is a basic concept of topological superconductivity. For the emergence of spin-triplet pairing, the spin-orbit interaction often plays an essential role [4; 5; 18; 19; 20].
The pair correlation by a nucleon Cooper pair contributes significantly to low-energy nuclear physics. The BCS theory was applied to atomic nuclei soon after the original work [3] by Bohr _et al._[21; 22]. The like-particle spin-singlet pairing has been investigated mostly and is a key to understanding the low-energy properties of the nuclear structure, for example, the odd-even staggering of nuclear masses, the collectivity of the low-lying \(J^{\pi}=2^{+}\) state in even-even nuclei, and the moments of inertia of deformed nuclei [23; 24]. The unconventional pairings have also been studied in nuclear systems. Since a deuteron is the only two-nucleon system that is bound in nature, the spin-triplet proton-neutron pairing has been investigated for a long time and is under lively discussions [25]. Similarly, due to the attractive nature of the nuclear force in the \({}^{3}P_{2}\) channel at high momentum, the emergence of the triplet-odd pairing has been predicted and studied in neutron-star matter [26; 27; 28].
For the study of nuclear superfluidity in medium-mass and heavy nuclei, a self-consistent mean-field or energy-density functional (EDF) approach has been adopted [29]. This choice is advantageous as it naturally provides the anomalous (pair) density and the pairing gap as an order parameter. Since the proton-neutron spin-triplet pairing, as well as the isovector spin-singlet ones, can be characterized by local pair densities, there have been many studies for these types of pairing [30]. On the other side, discussions on the triplet-odd pairing in nuclei have been less active. In Refs. [31; 32], the connection between the magnetic-dipole excitation to the triplet-odd pairing is suggested. However, experimental evidence of the triplet-odd pairing has not been observed.
In this work, we study the like-particle spin-triplet superfluidity in an EDF approach. To this end, we introduce the spin-triplet non-local pair density as an order
parameter. We also investigate the connection between spin-orbit splitting and triplet-odd pairing.
## II Formalism
We describe the spin-singlet and spin-triplet pairing within the local density approximation of the EDF. Details on this framework are well summarized in Refs. [29; 33; 34], and in particular, we focus on the pairing part in this section.
### Non-local pair density
The building block of the pairing in the HFB theory is the pair density matrix. We define it with the standard phase for the case without the proton-neutron pairing as
\[\hat{\tilde{\rho}}(\mathbf{r}_{1}s_{1},\mathbf{r}_{2}s_{2};t)=-2s_{2}\langle\Psi|\hat{ c}_{\mathbf{r}_{2}-s_{2}}\hat{c}_{\mathbf{r}_{1}s_{1}t}|\Psi\rangle, \tag{1}\]
where \(\hat{c}_{\mathbf{r}st}\) represents the nucleon annihilation operator at a position \(\mathbf{r}\), spin \(s\), and isospin \(t\), and \(|\Psi\rangle\) is the HFB state.
The spin-singlet and spin-triplet non-local pair densities are defined by
\[\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2}) =\sum_{s}\hat{\tilde{\rho}}(\mathbf{r}_{1}s,\mathbf{r}_{2}s;t), \tag{2}\] \[\tilde{\mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2}) =\sum_{s_{1}s_{2}}\hat{\tilde{\rho}}(\mathbf{r}_{1}s_{1},\mathbf{r}_{2}s_ {2};t)\hat{\mathbf{\sigma}}_{s_{2}s_{1}}. \tag{3}\]
One can express the pair density matrix as
\[\hat{\tilde{\rho}}(\mathbf{r}_{1}s_{1},\mathbf{r}_{2}s_{2};t)=\frac{1}{2 }\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})\delta_{s_{1}s_{2}}+\frac{1}{2}\tilde{ \mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})\cdot\hat{\mathbf{\sigma}}_{s_{1}s_{2}}, \tag{4}\]
where \(\hat{\mathbf{\sigma}}\) is the spin Pauli matrix. We note that the non-local pair densities show the spatial property of the nucleon pair; the spin-singlet pair is symmetric and the spin-triplet pair is antisymmetric for the exchange of the coordinate variables,
\[\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2}) =\tilde{\rho}_{t}(\mathbf{r}_{2},\mathbf{r}_{1}), \tag{5}\] \[\tilde{\mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2}) =-\tilde{\mathbf{s}}_{t}(\mathbf{r}_{2},\mathbf{r}_{1}). \tag{6}\]
The spin-triplet non-local pair density vanishes at \(\mathbf{r}_{1}=\mathbf{r}_{2}\) indicating that a simple local density approximation does not work for the spin-triplet pair, and the nonlocality plays a major role here.
### Density matrix expansion
The nuclear interaction energy derived from a local two-body interaction can be expressed with the local densities by a density matrix expansion technique. This has been discussed in Ref. [29] for the particle-hole part of the interaction. Here we apply the density matrix expansion for the pair density matrix (non-local pair density).
We introduce the following coordinates of the pair
\[\mathbf{r}_{1}=\mathbf{r}+\frac{\mathbf{r}_{\rm rel}}{2},\quad\mathbf{r}_{2}=\mathbf{r}-\frac{\bm {r}_{\rm rel}}{2}, \tag{7}\]
assuming that the pair density matrix vanishes quickly with increasing \(\mathbf{r}_{\rm rel}\). This allows us to expand the non-local pair densities in terms of the relative coordinate \(\mathbf{r}_{\rm rel}\)
\[\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2}) =\tilde{\rho}_{t}\left(\mathbf{r}+\frac{\mathbf{r}_{\rm rel}}{2},\mathbf{r}- \frac{\mathbf{r}_{\rm rel}}{2}\right)\] \[=\tilde{\rho}_{t}(\mathbf{r},\mathbf{r})\] \[\quad+\frac{\partial}{\partial\mathbf{r}_{\rm rel}}\tilde{\rho}_{t} \left(\mathbf{r}+\frac{\mathbf{r}_{\rm rel}}{2},\mathbf{r}-\frac{\mathbf{r}_{\rm rel}}{2} \right)\Big{|}_{\mathbf{r}_{\rm rel}=0}\cdot\mathbf{r}_{\rm rel}\] \[\quad+\mathcal{O}(|\mathbf{r}_{\rm rel}|^{3})\] \[=\tilde{\rho}_{t}(\mathbf{r})+\frac{1}{2}(\mathbf{\nabla}_{1}-\mathbf{\nabla} _{2})\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})\Big{|}_{\mathbf{r}_{1}=\mathbf{r}_{2}=\bm {r}}\cdot\mathbf{r}_{\rm rel}\] \[\quad+\frac{1}{8}(\mathbf{\nabla}_{1}-\mathbf{\nabla}_{2})^{2}\tilde{\rho} _{t}(\mathbf{r}_{1},\mathbf{r}_{2})\Big{|}_{\mathbf{r}_{1}=\mathbf{r}_{2}=\mathbf{r}}\mathbf{r}_{\rm rel }^{2}\] \[\quad+\mathcal{O}(|\mathbf{r}_{\rm rel}|^{3})\] \[=\tilde{\rho}_{t}(\mathbf{r})+\frac{1}{8}\left[\Delta\tilde{\rho}_{t}( \mathbf{r})-4\tilde{\tau}_{t}(\mathbf{r})\right]\mathbf{r}_{\rm rel}^{2}+\mathcal{O}(|\bm {r}_{\rm rel}|^{3}). \tag{8}\]
We use \(\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})=\tilde{\rho}_{t}(\mathbf{r}_{2},\mathbf{r}_{1})\) to remove the first-order term, and the local pair density and kinetic pair density are defined as
\[\tilde{\rho}_{t}(\mathbf{r}) =\tilde{\rho}_{t}(\mathbf{r},\mathbf{r}), \tag{9}\] \[\tilde{\tau}_{t}(\mathbf{r}) =(\mathbf{\nabla}_{1}\cdot\mathbf{\nabla}_{2})\left.\tilde{\rho}_{t}(\mathbf{ r}_{1},\mathbf{r}_{2})\right|_{\mathbf{r}_{1}=\mathbf{r}_{2}=\mathbf{r}}. \tag{10}\]
The spin-triplet non-local pair density is expanded as
\[\tilde{\mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2}) =\tilde{\mathbf{s}}_{t}\left(\mathbf{r}+\frac{\mathbf{r}_{\rm rel}}{2},\mathbf{r} -\frac{\mathbf{r}_{\rm rel}}{2}\right)\] \[=\mathbf{r}_{\rm rel}\cdot\left[\frac{\partial}{\partial\mathbf{r}_{\rm rel }}\otimes\tilde{\mathbf{s}}_{t}\left(\mathbf{r}+\frac{\mathbf{r}_{\rm rel}}{2},\mathbf{r}- \frac{\mathbf{r}_{\rm rel}}{2}\right)\right]_{\mathbf{r}_{\rm rel}=0}\] \[\quad+\mathcal{O}(|\mathbf{r}_{\rm rel}|^{2})\] \[=i\mathbf{r}_{\rm rel}\cdot\tilde{\mathsf{J}}_{t}(\mathbf{r})+\mathcal{O}(| \mathbf{r}_{\rm rel}|^{2}), \tag{11}\]
where
\[\tilde{\mathsf{J}}_{t}(\mathbf{r})=\frac{1}{2i}(\mathbf{\nabla}_{1}-\mathbf{\nabla}_{2}) \otimes\tilde{\mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})\Big{|}_{\mathbf{r}_{1}=\mathbf{r}_{2 }=\mathbf{r}} \tag{12}\]
is the tensor (spin-current) pair density, and we use \(\tilde{\mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})=-\tilde{\mathbf{s}}_{t}(\mathbf{r}_{2},\mathbf{r}_ {1})\) to remove the zeroth-order term,
and \(\mathbf{v}\cdot(\mathbf{u}\otimes\mathbf{w})\equiv(\mathbf{v}\cdot\mathbf{u})\mathbf{w}\). Spin-triplet anisotropic pairing in condensed matter physics requires an odd wave-number \(\mathbf{k}\) dependence, and the tensor pair density corresponds to the order parameters of the \(p\)-wave superfluidity that consists of 9 components [4].
The density matrix expansion of the non-local pair density provides the local EDF starting from a local two-body spin-singlet and spin-triplet pairing interaction. The general form of the pair EDF is given by [35]
\[E^{S=0}_{\text{pair},t} =\int d\mathbf{r}_{1}d\mathbf{r}_{2}v^{S=0}_{\text{pair},t}(|\mathbf{r}_{1}- \mathbf{r}_{2}|)|\tilde{\rho}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})|^{2}, \tag{13}\] \[E^{S=1}_{\text{pair},t} =\int d\mathbf{r}_{1}d\mathbf{r}_{2}v^{S=1}_{\text{pair},t}(|\mathbf{r}_{1}- \mathbf{r}_{2}|)|\tilde{\mathbf{s}}_{t}(\mathbf{r}_{1},\mathbf{r}_{2})|^{2}, \tag{14}\]
where \(v^{S=0}_{\text{pair},t}\) and \(v^{S=1}_{\text{pair},t}\) are the spin-singlet and spin-triplet pairing interaction strengths that depend on the absolute value of the relative coordinate of the two nucleons.
Inserting the density matrix expansion in the non-local pair densities, we have
\[E^{S=0}_{\text{pair},t} =\int d\mathbf{r}\left\{\tilde{C}^{p}_{t}|\tilde{\rho}_{t}(\mathbf{r})|^ {2}+\tilde{C}^{\Delta\rho}_{t}\text{Re}\left[\tilde{\rho}^{s}_{t}(\mathbf{r}) \Delta\tilde{\rho}_{t}(\mathbf{r})\right]\right.\] \[\left.+\tilde{C}^{r}_{t}\text{Re}\left[\tilde{\rho}^{s}_{t}(\mathbf{r })\tilde{\tau}_{t}(\mathbf{r})\right]\right\}, \tag{15}\] \[E^{S=1}_{\text{pair},t} =\int d\mathbf{r}\tilde{C}^{J}_{t}|\tilde{J}_{t}(\mathbf{r})|^{2}. \tag{16}\]
The coupling constants are related with the local potential as
\[\tilde{C}^{p}_{t} =\int d\mathbf{r}_{\text{rel}}v^{S=0}_{\text{pair},t}(|\mathbf{r}_{\text{ rel}}|), \tag{17}\] \[\tilde{C}^{\Delta\rho}_{t} =-\frac{1}{4}\tilde{C}^{r}_{t}=\frac{1}{4}\int d\mathbf{r}_{\text{ rel}}\mathbf{r}^{2}_{\text{rel}}v^{S=0}_{\text{pair},t}(|\mathbf{r}_{\text{rel}}|),\] (18) \[\tilde{C}^{J}_{t} =\int d\mathbf{r}_{\text{rel}}\mathbf{r}^{2}_{\text{rel}}v^{S=1}_{\text{ pair},t}(|\mathbf{r}_{\text{rel}}|). \tag{19}\]
These are the coupling constants for the spin-singlet pairing \(\tilde{C}^{p}_{t}\) and its next-order terms \(\tilde{C}^{\Delta\rho}_{t}\) and \(\tilde{C}^{r}_{t}\), and the spin-triplet coupling constant \(\tilde{C}^{J}_{t}\).
By using the G3RS-\({}^{1}\)E-1 potential introduced by Tamagaki [26], for instance, \(\tilde{C}^{p}_{t}=-697.087\) MeV fm\({}^{3}\) and \(\tilde{C}^{\Delta\rho}_{t}=1363.253\) MeV fm\({}^{5}\) are obtained for the spin-singlet coupling constants. For the spin-triplet coupling, on the other hand, by using the G3RS-\({}^{3}\)O-1 potential for \(v^{S=1}_{\text{pair},t}\), we obtain \(\tilde{C}^{J}_{t}=6794.724\) MeV fm\({}^{5}\). Note that we assumed the \({}^{3}P_{1}\) channel to obtain this value.
In the lowest order in terms of the non-locality, the local pair density \(\tilde{\rho}_{t}(\mathbf{r})\) and the pair EDF that is proportional to \(|\tilde{\rho}_{t}(\mathbf{r})|^{2}\) represent the spin-singlet pair condensation and EDF, while the spin-current pair density \(\tilde{\mathsf{J}}_{t}(\mathbf{r})\) and the term proportional to \(|\tilde{\mathsf{J}}_{t}(\mathbf{r})|^{2}\) represent the spin-triplet pair condensation and EDF.
Zero-range Skyrme interactions also produce the terms related to the spin-singlet and spin-triplet pair condensation. Only the SkP interaction [36] includes the spin-singlet and spin-triplet terms, and other standard Skyrme EDFs do not consider pairing terms other than those proportional to \(|\tilde{\rho}_{t}(\mathbf{r})|^{2}\) due to unrealistic pairing properties [29].
We note that the spin-triplet pair density and thus EDF can be decomposed into the trace (pseudoscalar), antisymmetric (vector), and symmetric (pseudotensor) parts [34]
\[\tilde{J}_{t}(\mathbf{r}) =\sum_{k=x,y,z}\tilde{\mathsf{J}}_{tkk}(\mathbf{r}), \tag{20}\] \[\tilde{\mathbf{J}}_{tk}(\mathbf{r}) =\sum_{b,c=x,y,z}\epsilon_{abc}\tilde{\mathsf{J}}_{tbc}(\mathbf{r}),\] (21) \[\tilde{\mathsf{J}}_{tab}(\mathbf{r}) =\frac{1}{2}\tilde{\mathsf{J}}_{tab}(\mathbf{r})+\frac{1}{2}\tilde{ \mathsf{J}}_{tba}(\mathbf{r})-\frac{1}{3}\tilde{J}_{t}(\mathbf{r})\delta_{ab}. \tag{22}\]
This decomposition of the spin-current quantity is also applied in the discussion of \({}^{3}P_{2}\) superfluidity [37; 38], and these three components are relevant to \({}^{3}P_{0}\), \({}^{3}P_{1}\), and \({}^{3}P_{2}\) superfluidity, respectively.
For a general pair EDF that is not based on an effective interaction, each coupling constant in the spin-single pair EDF (15) can be taken independently, except that the relation between \(\tilde{C}^{\Delta\rho}_{t}\) and \(\tilde{C}^{r}_{t}\) in Eq. (18) is a requirement from the local gauge invariance [34]. The spin-triplet pair EDF has a similar structure with the tensor functional in the particle-hole EDF does and can have a more general form [39]
\[E^{S=1}_{\text{pair},t}=\int d\mathbf{r}\tilde{C}^{J0}_{t}|\tilde{J }_{t}(\mathbf{r})|^{2}+\tilde{C}^{J1}_{t}|\tilde{J}_{t}(\mathbf{r})|^{2}+\tilde{C}^{J2} _{t}|\tilde{\mathsf{J}}_{t}(\mathbf{r})|^{2}. \tag{23}\]
Unlike the particle-hole part, these three coupling constants are not constrained by the local gauge invariance and can be taken independently. When the pair EDF is derived from a non-local effective interaction of the form (14), three coupling constants are related by \(\tilde{C}^{J}_{t}=3\tilde{C}^{J0}_{t}=2\tilde{C}^{J1}_{t}=\tilde{C}^{J2}_{t}\). However, the tensor and spin-orbit interactions, which are not in the form of Eq. (14), allow independent contributions to the three coupling constants.
### Mean field approach
The mean-field equations for protons and neutrons, obtained by the functional derivative of the EDF, are given in Refs. [34; 36]. The pair Hamiltonian has the following form
\[\tilde{h}^{(t)}_{ss^{\prime}}(\mathbf{r}) =\left[\tilde{U}_{t}(\mathbf{r})-\mathbf{\nabla}\tilde{M}_{t}(\mathbf{r}) \cdot\mathbf{\nabla}\right]\delta_{ss^{\prime}}\] \[\quad+\frac{1}{2i}\left\{\mathbf{\nabla}\cdot\left[\tilde{\mathsf{B} }_{t}(\mathbf{r})\cdot\mathbf{\sigma}_{ss^{\prime}}\right]+\left[\tilde{\mathsf{B}}_{t}( \mathbf{r})\cdot\mathbf{\sigma}_{ss^{\prime}}\right]\cdot\mathbf{\nabla}\right\}, \tag{24}\]
where the potential energy \(\tilde{U}_{t}\), effective inertia parameter \(\tilde{M}_{t}\) and the spin-orbit form factors \(\tilde{\mathsf{B}}_{t}\) are given by
\[\tilde{U}_{t}(\mathbf{r}) =2\tilde{C}_{t}^{\mu}\tilde{\rho}_{t}(\mathbf{r})+2\tilde{C}_{t}^{ \Lambda\rho}\Delta\tilde{\rho}_{t}(\mathbf{r})+\tilde{C}_{t}^{\prime}\tilde{\tau}_{ t}(\mathbf{r}), \tag{25}\] \[\tilde{M}_{t}(\mathbf{r}) =\tilde{C}_{t}^{\tau}\tilde{\rho}_{t}(\mathbf{r}),\] (26) \[\tilde{\mathsf{B}}_{tab}(\mathbf{r}) =2\tilde{C}_{t}^{\prime J0}\tilde{J}_{t}(\mathbf{r})\delta_{ab}-2 \tilde{C}_{t}^{J1}\sum_{c=x,y,z}\epsilon_{acb}\tilde{\mathbf{J}}_{tc}(\mathbf{r})\] \[\quad+2\tilde{C}_{t}^{J2}\tilde{\mathsf{J}}_{ab}(\mathbf{r}). \tag{27}\]
As the pair Hamiltonian (24) depends on the spins \(s\) and \(s^{\prime}\) when spin-triplet pair EDF is considered, we define the pairing gap by averaging out the pair Hamiltonian using the spin-dependent lower component of the quasiparticle wave function \(\phi_{2}^{(t)}(\mu,\mathbf{r}s)\) as
\[\Delta_{t} =\frac{\int d\mathbf{r}\sum_{ss^{\prime}\mu}\phi_{2}^{(t)*}(\mu,\mathbf{r }s^{\prime})\tilde{h}_{s^{\prime}s}^{(t)}(\mathbf{r})\phi_{2}^{(t)}(\mu,\mathbf{r}s)} {\int d\mathbf{r}\sum_{s\mu}|\phi_{2}^{(t)}(\mu,\mathbf{r}s)|^{2}}\] \[=\frac{1}{N_{t}}\int d\mathbf{r}\left[\tilde{U}_{t}(\mathbf{r})\rho_{t}( \mathbf{r})+\tilde{M}_{t}(\mathbf{r})\tau_{t}(\mathbf{r})+\tilde{\mathsf{B}}_{t}(\mathbf{r}) \cdot\mathsf{J}_{t}(\mathbf{r})\right], \tag{28}\]
where \(N_{t}=N\) or \(Z\), and the density \(\rho_{t}\), kinetic density \(\tau_{t}\), and the tensor (spin-current) density \(\mathsf{J}_{t}\) are defined using the non-local particle-hole densities [defined in a similar way as Eqs. (2) and (3) but for the particle-hole density matrix] as
\[\rho_{t}(\mathbf{r}) =\rho_{t}(\mathbf{r},\mathbf{r}), \tag{29}\] \[\tau_{t}(\mathbf{r}) =(\mathbf{\nabla}_{1}\cdot\mathbf{\nabla}_{2})\rho_{t}(\mathbf{r}_{1},\mathbf{r} _{2})\Big{|}_{\mathbf{r}_{1}=\mathbf{r}_{2}=\mathbf{r}},\] (30) \[\mathsf{J}_{t}(\mathbf{r}) =\frac{1}{2i}(\mathbf{\nabla}_{1}-\mathbf{\nabla}_{2})\otimes\mathbf{s}_{t}( \mathbf{r}_{1},\mathbf{r}_{2})\Big{|}_{\mathbf{r}_{1}=\mathbf{r}_{2}=\mathbf{r}}. \tag{31}\]
Although Eq. (28) is a natural extension of the average gap for a generalized pair Hamiltonian [40], the discrepancy of the pairing gap and experimental Odd-even staggering (OES) has been pointed out when the singlet-pair EDF contains the kinetic terms \(\tilde{C}_{t}^{\prime}\) and \(\tilde{C}_{t}^{\Lambda\rho}\)[41].
### Expression within spherical symmetry
We assume the spherical symmetry for the HFB state for simplicity. The spherical symmetry cancels the two of the spin-current pair densities \(\tilde{J}_{t}(\mathbf{r})\) and \(\tilde{\mathsf{J}}_{t}(\mathbf{r})\), and only the radial component of the vector spin-current pair density can exist \(\tilde{\mathbf{J}}_{t}(\mathbf{r})=\tilde{J}_{tr}(\mathbf{r})\mathbf{e}_{\mathbf{r}}\)[42]. Within the spherical symmetry, the quasiparticle wave function can be decomposed into the radial and angular part
\[\phi_{i}^{(t)}(E,\mathbf{r}s)=\frac{u_{i}^{(t)}(nlj,r)}{r}Y_{lm_{i}}(\hat{\mathbf{r}} )\langle lm_{1}\frac{1}{2}s|jm\rangle\quad(i=1,2), \tag{32}\]
where \(i=1\) and \(2\) correspond to the upper and lower component respectively. The local pair density and the spin-current pair density are given by
\[\tilde{\rho}_{t}(r) =-\frac{1}{4\pi r^{2}}\sum_{nlj}(2j+1)u_{1}^{(t)}(nlj,r)u_{2}^{(t )}(nlj,r), \tag{33}\] \[\tilde{J}_{tr}(r) =-\frac{1}{4\pi r^{3}}\sum_{nlj}(2j+1)\langle\mathbf{l}\cdot\mathbf{s} \rangle u_{1}^{(t)}(nlj,r)u_{2}^{(t)}(nlj,r), \tag{34}\]
where \(\langle\mathbf{l}\cdot\mathbf{s}\rangle=j(j+1)-l(l+1)-\frac{3}{4}\). Notice that these quantities have different dimensions.
## III Numerical calculations
### Spin-singlet pair EDF
We utilize the HFBRAD code [43] for spherical Skyrme-HFB calculations in the following. The SLy4 and spin-singlet volume-type contact pair EDF with the strength \(\tilde{C}_{n}^{\rho}=-46.625\) MeV fm\({}^{3}\) (\(V_{0}=-186.5\) MeV fm\({}^{3}\)) is employed within the cutoff energy of 60 MeV. This strength has been adjusted to reproduce the neutron pairing gap 1.245 MeV in \({}^{120}\)Sn.
We evaluate the spin-singlet and spin-triplet pair con
Figure 1: Spin-singlet and triplet pairing components of neutrons, \(S_{\rho_{n}}\) and \(S_{J_{n}}\), respectively, in the Ca and Sn isotope chains.
densities with
\[S_{\rho_{n}} = \int d\mathbf{r}|\tilde{\rho}_{t}(\mathbf{r})|^{2}, \tag{35}\] \[S_{J_{n}} = R^{2}\int d\mathbf{r}|\tilde{\mathbf{J}}_{t}(\mathbf{r})|^{2}. \tag{36}\]
They have exactly the same local density dependence that appears in the pair energy. The constant \(R^{2}=10\) fm\({}^{2}\) is estimated from the ratio \(|\tilde{C}_{t}^{J}/\tilde{C}_{t}^{\prime}|\) of the G3RS potential and is introduced to make the units of the two quantities identical.
In Fig. 1, the results from neutron pair densities in the Ca and Sn isotopes are presented. The spin-singlet component \(S_{\rho_{n}}\) shows finite values except in neutron closed-shell nuclei. Even though the attractive pair EDF is present only in the spin-singlet channel, our results show non-zero values for the spin-triplet component \(S_{J_{n}}\), namely a coexistence of the spin-singlet and spin-triplet condensates is suggested in finite nuclei. This is also expected in Eqs. (33) and (34). A similar feature has been discussed in condensed matter [18] and ultracold Fermi gas [19; 20; 44] in the presence of the spin-orbit interaction. Notice that a direct comparison of \(S_{\rho_{n}}\) and \(S_{J_{n}}\) does not make sense as their relative value depends on the introduced constant \(R^{2}\). However, the isotopic dependence indicates that the spin-triplet pairing is more sensitive to the shell orbits involved than the spin-singlet one is. \(S_{\rho_{n}}\) is enhanced in the mid-shell region with high degeneracy, such as in the \(f_{7/2}\) and \(f_{5/2}\) orbits in Ca isotopes and \(50<N<82\) and \(82<N<126\) in Sn isotopes, while \(S_{J_{n}}\) shows a stronger orbital dependence; we see an enhancement (a reduction) in \(S_{J_{n}}\) in the isotope where the neutron Fermi energy is around \(j_{>}\) (\(j_{<}\)) orbit in \(f_{7/2}\) and \(f_{5/2}\) in Ca isotopes and an enhancement in the intruder region in the middle shell in Sn isotopes.
To analyze the contributions from the \(j_{>}\) and \(j_{<}\) orbits, we take \({}^{42}\)Ca and \({}^{56}\)Ca as representative cases, where two particles are supposed to occupy the \(f_{7/2}\) and \(f_{5/2}\) orbits mainly. The pair density distributions \(\tilde{\rho}_{n}(r)\) and \(\tilde{J}_{nr}(r)\) together with the contributions from \(f_{7/2}\) and \(f_{5/2}\) orbits are plotted in Fig. 2. Both the spin-singlet and spin-triplet neutron pair densities have finite values in the \({}^{42}\)Ca and \({}^{56}\)Ca nuclei. There, the \(f_{7/2}\) and \(f_{5/2}\) neutrons have dominant contributions as expected. For the spin-singlet density, they have a coherent contribution, and the total pair densities are composed of the coherent addition from the other orbits as well (not shown in the figure), whereas the neutrons in the \(f_{7/2}\) and \(f_{5/2}\) orbits contribute in a destructive way to the spin-triplet density. The dominant contribution for \(\tilde{J}_{nr}(r)\) in \({}^{42}\)Ca is from \(f_{7/2}\) orbit, while the \(\tilde{J}_{nr}(r)\) in \({}^{56}\)Ca is composed of the multiple orbits including those not shown in the figure. This indicates the magicity at \(N=34\) is weaker than that at \(N=28\).
### Spin-triplet pair EDF
In place of the spin-singlet pair EDF, we employ the spin-triplet pair EDF. The coupling constant of the spin-triplet pair EDF is adjusted to reproduce the pairing energy of \({}^{44}\)Ca obtained in the spin-singlet pair EDF (\(\tilde{C}_{n}^{J1}=-46.125\) MeV fm\({}^{5}\)).
Figure 3 shows the spin-singlet and spin-triplet pairing components calculated either with the spin-singlet or spin-triplet pair EDF for the Ca isotopes. The spin-singlet pairing component calculated with the spin-singlet pair EDF and the spin-triplet pairing component calculated with the spin-triplet pair EDF have very similar properties: one sees the collapse of the pairing at the magic numbers \(N=20,28,32\), and \(40\) (there is a tiny difference in \(N=34\)) and the neutron-number dependence of the relative size of the pairing component. We also note that there is little difference in the pairing energy, chemical potential, and other observables of the particle-hole type. These results indicate that the spin-triplet pair EDF alone can reproduce the general properties of nuclei within a similar quality to the spin-singlet one. Observables that are more sensitive to the type of pairing components are desired. The coupling constant of the spin-triplet pair EDF \(\tilde{C}_{t}^{J1}\) can be related to the Skyrme parameters as \(\tilde{C}_{t}^{J1}=[t_{2}(1+x_{2})+5t_{o}+2W_{0}]/8\), and are repulsive for many Skyrme interactions such as SIII
Figure 2: The neutron local pair density \(\tilde{\rho}_{n}(r)\) and the radial component of the neutron tensor pair density \(\tilde{J}_{nr}(r)\) of \({}^{42}\)Ca and \({}^{56}\)Ca and the contributions from \(1f_{7/2}\) and \(1f_{5/2}\) orbits.
(18.125 MeV fm\({}^{5}\)) [45], SLy4 (30.75 MeV fm\({}^{5}\)), and SLy5 (31.5 MeV fm\({}^{5}\)) [46], and SkP (5.486 MeV fm\({}^{5}\)), but can be attractive for the Skyrme interactions that include the tensor interaction such as SLy5 + T (\(-\)53.5 MeV fm\({}^{5}\)) [47], and 14 Skyrme parameters out of 36 T\(IJ\) parameter sets in Ref. [48]. Although the coupling constants of the EDF can be taken arbitrarily in the framework of the nuclear DFT, the tensor interaction will have a large impact on the property of the spin-triplet pairing coupling constant.
### Roles of the spin-orbit EDF
The spin-orbit splitting is expected to play an important role in the spin-triplet pairing as anticipated from the expression of the tensor pair density (34). In Fig. 3, we also present the pairing components \(S_{\rho_{n}}\) and \(S_{J_{n}}\) obtained by changing the coupling constant of the spin-orbit EDF while keeping the pairing coupling constants to the original values. Those are from the original spin-orbit EDF in 1.0 LS; multiplied by half in 0.5 LS; neglected in No LS. First, suppose that the spin-orbit EDF is neglected. Then, the spin-singlet pair EDF promotes only the spin-singlet pair condensate, and the spin-triplet pairing component \(S_{J_{n}}\) is zero [Fig. 3(a) and (b)]. One is tempted to the opposite conclusion when the spin-triplet pair EDF is considered. However, the appearance of the spin-triplet pairing component inevitably induces the spin-orbit splitting and the spin-singlet pairing component. This results in a non-zero spin-singlet pairing component although the induced amount is tiny. As a result, commonly with the spin-singlet and spin-triplet EDFs, the corresponding pairing component takes the maximum at around \(N=28\), which is around the half-filled situation of the 14-fold degenerated \(f\)-orbit [Fig. 3(a) and (d)]. The pairing component becomes zero at LS-closed shells \(N=20\) and \(N=40\).
The behavior of the pairing components \(S_{\rho_{n}}\) and \(S_{J_{n}}\) shows a drastic change with the value of the coupling constant of the spin-orbit EDF although we do not change the pair EDF itself. The spin-orbit EDF decreases the pairing component due to the lower degeneracy of the single-particle levels [Fig. 3(a) and (d)], but it enhances the coexistence of \(S_{\rho_{n}}\) and \(S_{J_{n}}\) [Fig. 3(b) and (c)]. By comparing the 0.5 LS and 1.0 LS cases, one can see an enhancement of the mixing in the region \(20<N<28\) and the suppression in the neutron-rich side with \(N>30\). The suppression is common for the main pairing component and the induced pairing component due to the lesser degeneracy of the single-particle levels with increasing the coupling constant of the spin-orbit EDF.
We also plot the pairing gap and the pairing energy in Fig. 4. Figures 3 (a) and 4 (a) show that the singlet pairing components \(S_{\rho_{n}}\) and the pairing gap \(\Delta_{n}\) behave in a similar way in the case of singlet-pair EDF, while the triplet-pairing component \(S_{J_{n}}\) [Fig. 3 (d)] and the pairing gap [Fig. 4 (b)] in the case of the triplet-pair EDF behave in a different way. A strong reduction of the averaged gap for the triplet pairing in the "No LS" case [Fig. 4 (b)] is due to a low tensor density J. The pairing energies for the singlet-pair EDF [Fig. 4 (c)] and the triplet-pair EDF [Fig. 4 (d)] take similar values as a function of the neutron number although the coupling constants of the two pair EDFs are adjusted only at \(N=24\). The agreement of the pairing energy shows that the triplet-pair EDF can include a similar pairing contribution to the energy as the singlet-pair EDF does, while the small values of the pairing gap defined by Eq. (28) may not correspond to the experimental OES for the triplet-pair EDF [Fig. 4 (b)] similar to when the singlet-pair Hamiltonian contains derivative terms [41]. We note that another average gap in which \(\phi_{2}^{(t)*}\) is replaced by \(\phi_{1}^{(t)*}\) in Eq. (28) behaves even worse in the case of the triplet-pair EDF, because of the singlet-pair amplitude in the denominator that is very small as expected from Fig. 3 (c). Other pairing observables such as the moments of inertia of the pairing rotation [49] would be more useful for constraining the coupling constants of the spin-triplet pair EDF.
Figure 3: Spin-singlet and spin-triplet pairing components, \(S_{\rho_{n}}\) and \(S_{J_{n}}\), calculated in the Ca isotopes by changing the coupling constant of the spin-orbit EDF: original in 1.0 LS; multiplied by half in 0.5 LS; neglected in No LS.
Figure 4: Pairing gap \(|\Delta_{n}|\) and pairing energy \(E_{\text{pair},n}^{S=0,1}\) calculated for the singlet- and triplet-pair EDF in the Ca isotopes by changing the coupling constant of the spin-orbit EDF.
Summary
We analyzed the spin-triplet pair condensation of like particles in singly-closed nuclei. The relevant quantity to the spin-triplet pair condensation is the 9-component spin-current pair density. One component can be finite in the HFB calculation within the spherical symmetry. We have demonstrated that the spin-singlet and spin-triplet pairing condensates coexist in open-shell nuclei, and one component of the pair EDF can induce the other component. The spin-orbit splitting is shown to play an essential role in the coexistence of the two types of pair condensates, because the spin is no more a good quantum number.
The inclusion of both the spin-singlet and spin-triplet pair EDFs into the nuclear EDF will enable us more detailed description of the nuclear pairing condensation including the isotope and isotone dependence, and deepen the understanding of the origin of the spin-orbit splitting and the role of the tensor force in the pairing channel. In describing open-shell nuclei with deformation, the pseudoscalar and pseudotensor components should be considered. In subsequent works, we will present such developments.
###### Acknowledgements.
The authors express their sincere gratitude to W. Nazarewicz and H. Tajima for their conscientious review of the manuscript and invaluable insights. Additionally, the authors acknowledge the invaluable contributions of members of the PHANES collaboration (M. Dozono, M. Matsuo, S. Ota, and S. Shimoura) for their insightful discussions. This work was supported by the JSPS KAKENHI (Grants No. JP19K03824, No. JP19K03872, No. JP19KK0343, and No. JP20K03964).
|
2302.08407 | Strong and Broadband Pure Optical Activity in 3D Printed THz Chiral
Metamaterials | Optical activity (polarization rotation of light) is one of the most desired
features of chiral media, as it is important for many polarization related
applications. However, in the THz region, chiral media with strong optical
activity are not available in nature. Here, we study theoretically, and
experimentally a chiral metamaterial structure composed of pairs of vertical
U-shape resonators of "twisted" arms, and we reveal that it demonstrates large
pure optical activity (i.e. optical activity associated with negligible
transmitted wave ellipticity) in the low THz regime. The experimental data show
polarization rotation up to 25 (deg) for an unmatched bandwidth of 1 THz
(relative bandwidth 80 %), from a 130 um-thickness structure, while theoretical
optimizations show that the rotation can reach 45 (deg). The enhanced chiral
response of the structure is analyzed through an equivalent RLC circuit model,
which provides also simple optimization rules for the enhancement of its chiral
response. The proposed chiral structures allow easy fabrication via direct
laser writing and electroless metal plating, making them suitable candidates
for polarization control applications. | Ioannis Katsantonis, Maria Manousidaki, Anastasios D. Koulouklidis, Christina Daskalaki, Ioannis Spanos, Constantinos Kerantzopoulos, Anna C. Tasolamprou, Costas M. Soukoulis, Eleftherios N. Economou, Stelios Tzortzakis, Maria Farsari, Maria Kafesaki | 2023-02-16T16:38:02Z | http://arxiv.org/abs/2302.08407v1 | # Strong and Broadband Pure Optical Activity in 3D Printed THz Chiral Metamaterials
###### Abstract
Optical activity (polarization rotation of light) is one of the most desired features of chiral media, as it is important for many polarization related applications. However, in the THz region, chiral media with strong optical activity are not available in nature. Here, we study theoretically, and experimentally a chiral metamaterial structure composed of pairs of vertical U-shape resonators of "twisted" arms, and we reveal that it demonstrates large pure optical activity (i.e. optical activity associated with negligible transmitted wave ellipticity) in the low THz regime. The experimental data show polarization rotation up to 25\({}^{\circ}\) for an unmatched bandwidth of 1 THz (relative bandwidth 80%), from a 130 \(\mu m\)-thickness structure, while theoretical optimizations show that the rotation can reach 45\({}^{\circ}\). The enhanced chiral response of the structure is analyzed through an equivalent RLC circuit model, which provides also simple optimization rules for the enhancement of its chiral response. The proposed chiral structures allow easy fabrication via direct laser writing and electroless metal plating, making them suitable
## 1 Introduction
Terahertz science is an active field of research both for its theoretical aspects but also due to the many associated applications [1; 2; 3; 4; 5; 6], especially in sensing, imaging and future communications. The exploitation of THz waves in those applications, besides efficient sources and detectors, requires high-performance optical components, such as modulators, wave-plates, lenses, etc. Hence, devices that can manipulate the amplitude and the phase of terahertz waves in an efficient and flexible way have gathered great attention [7; 8; 9]. Many innovating THz devices [10] and applications [11; 12] require polarization control, such as polarization rotation. Proposed ways of realizing polarization rotators are based mainly on non-chiral anisotropic metasurfaces [13; 14; 15] or multilayer wire-grid polarizers oriented in different directions [16; 17]. However, a crucial drawback of these approaches is that they work only for one polarization of the incident wave. In order to realize reciprocal polarization rotating devices insensitive to the polarization of the incident wave (at least for normal incidence) a chiral structure is necessary [18; 19; 20]. Chiral metamaterial structures, i.e. metamaterials with unit-cells lacking any mirror-symmetry plane, have been shown in recent years able to give, among other effects, strong polarization rotation (often called optical activity) even from ultrathin structures [21; 22; 23; 24; 25; 26], providing high efficiency which is limited only by dissipation loss and design optimization. We have to stress here that not only optical rotation but all the metamaterials-originated strong chiro-optical effects, e.g. circular dichroism [24; 27; 28] and asymmetric transmission [29; 30; 31; 32], have been shown to be instrumental for many applications requiring wave polarization control [33; 21; 34; 35], including sensing [36; 37; 39; 40; 41] and spectroscopy [42; 43]. These features may also be reconfigurable if combined with a THz tunable material like graphene [44; 45].
The key point in the metamaterial-based chiro-optical devices is that they exhibit chiro-optical effects orders of magnitude larger than natural chiral media; this is due to their macroscopic nature, where the currents responsible for chirality are not restricted by the atomic size. Furthermore, the scalability of metamaterials allows for operation in almost all frequency spectrum, provided that the available technology allows for their physical implementation. Indeed, chiral metamaterials based on bilayer-metal chiral meta-atoms [24; 26], such as pairs of crosses [46] or gamadtions [34], have demonstrated (with proper scaling) large optical activity in frequencies ranging from microwaves to the optical range. This type of chiral metamaterials supports multiple low-frequency resonant modes dominated by either an electric dipole resonant response or a magnetic dipole response, accompanied with chirality resonance; thus all chiro-optical effects are maximized at the resonances. Resonances though are associated also with high wave absorption and large impedance mismatch with the surrounding medium. Thus, the high optical rotation at resonance is always accompanied by low transmittance and by high circular-dichroism [21; 47] (absorption difference between left-handed and right-handed circularly polarized waves), leading to unavoidable non-negligible ellipticity of the transmitted wave, an effect undesirable for many applications requiring linearly polarized waves. This high ellipticity of the bilayer metallic structures is boosted by the dielectric spacer separating the two metallic layers of the bilayer, where there is high electromagnetic field concentration and thus maximization of the absorption response. Thus, achievement of pure optical rotation (optical rotation associated with close to zero ellipticity) in bilayer-metal and in most of
the resonant chiral structures seems possible only in frequency bands between resonances, with usually moderate rotation values and narrow-band response, features that inhibit the practical exploitation of the related structures.
In this work we propose a chiral metamaterial structure/design which shows large and ultra-broadband pure optical activity in the low THz regime. The structure is a metasurface made of three-dimensional (3D) metallic elements; it is dielectrics-free and possesses four-fold rotational symmetry. The building block (meta-atom) is a pair of vertical U-shape resonators of "twisted" arms, as shown in Figure 1. This type of meta-atoms can be easily fabricated via Direct Laser Writing (DLW) [48; 49; 50] and subsequent selective metallization by, e.g., electroless plating. The geometry of our chiral design allows co-linear electric dipole and magnetic dipole moments (for normally-incident waves, i.e. along-z in Figure 1), resulting to bi-isotropic chiral response. We present an extensive theoretical and numerical analysis of the design and demonstrate its potential for strong and broad-band optical rotation accompanied with very low ellipticity. The numerical results are validated by corresponding experimental data, validating also the large potential of our structure in the control of THz wave polarization.
The paper is organized as follows: Initially we present the proposed structure and demonstrate numerically its strong and broad-band pure optical activity response. Then, we describe the fabrication procedure and present the experimental results that validate the theory and reveal also experimentally the enhanced performance of the structure in terms of optical activity. Further, to explain the response of the structure, we employ a simple equivalent RLC circuit model for chiral metamaterials, which allows to derive simple optimization rules for achieving enhanced chiral response. In the conclusion, we summarize the main results and suggest future perspectives of this work.
## 2 Chiral structure and its calculated electromagnetic response
A schematic representation of the chiral metamaterial (CMM) design proposed in the present study is illustrated in Figure 1. It consists of a square (in \(x-y\) plane) arrangement of chiral meta-atoms, where each meta-atom is formed by two perpendicular metallic U-shape rings of "twisted vertical" arms; i.e., each initially vertical (to \(x-y\) plane - see Fig. 1(b)) arm of the U-rings is rotated by an angle \(\phi\) anti-clockwise (as seen from top - see Fig. 1(c)), in respect to its initial U-plane. The twist (rotation) of the vertical arms induces a magnetoelectric coupling in the structure, resulting to the chiral response. In the absence of this twist (i.e. \(\phi=0\)) the system behaves as typical non-chiral, split-ring-resonator-type metamaterial. The metal of the U-shape rings constituting the meta-atoms is Silver (Ag); its conductivity is considered in the simulations to be linearly dependent on frequency, ranging from \(\sigma=5.0\times 10^{7}\) S/m at 0.1 THz to \(\sigma=0.86\times 10^{7}\) S/m at 2.2 THz [51; 52; 53; 54]. The structure stands on a Silicon substrate (relative permittivity \(\epsilon_{Si}=11.9\) and loss tangent \(\tan\delta=0.02\)), while the short (of height \(h=18\leavevmode\nobreak\ \mu m\)) vertical leg joining the rings with the substrate (it is electromagnetically inactive) serves the complete metal-plating of the horizontal U-arms in the fabricated structure. The lattice periodicity is \(\mathbf{s}=120\leavevmode\nobreak\ \mu m\) and the length of each arm is \(l=96\leavevmode\nobreak\ \mu m\). The arm diameter is \(D=24\leavevmode\nobreak\ \mu m\).
Scattering experiments/simulations provide a complete description of electromagnetic wave transmission and reflection by a structure. For chiral structures, where the eigenwaves are the circularly polarized waves, the scattering problem is usually formulated for circularly polarized light. However, there are many applications requiring linearly polarized waves while the experimental data taken, e.g., from the spectrometers are usually obtained for linearly polarized electromagnetic fields. Having the reflection and transmission coefficients either for circularly or from linearly polarized waves one can calculate the main chirality related phenomena, i.e. optical activity and circular dichroism.
To demonstrate the broadband pure optical activity of our structure, we consider a unit cell as the one shown in Figure 1, with periodic boundary conditions along \(x\) and \(y\) directions, and calculate the transmitted fields for nor
mally incident linearly polarized waves (using the CST Studio commercial software). In that case the incident (in) and transmitted (tr) electric fields (E) are related via \(T_{L}\) matrix as
\[\left[\begin{array}{c}E_{x}^{(tr)}\\ E_{y}^{(tr)}\end{array}\right]=\left[\begin{array}{cc}t_{xx}&t_{xy}\\ t_{yx}&t_{yy}\end{array}\right]\left[\begin{array}{c}E_{x}^{(in)}\\ E_{y}^{(in)}\end{array}\right]=T_{L}\left[\begin{array}{c}E_{x}^{(in)}\\ E_{y}^{(in)}\end{array}\right]. \tag{1}\]
In Eq. (1) \(t_{xx}\), \(t_{yy}\),\(t_{xy}\) and \(t_{yx}\) are the complex transmission coefficients, where the first subscript indicates the output wave polarization and the second the incident wave polarization. Due to the fourfold rotational symmetry of our structure \(t_{xx}=t_{yy}\) and \(t_{xy}=-t_{yx}\).
To evaluate the chiral response of our structure, we need to calculate the transmitted wave ellipticity, \(\eta=0.5\tan^{-1}\left[(|t_{++}|^{2}-|t_{--}|^{2})/(|t_{++}|^{2}+|t_{--}|^{2})\right]\), directly connected to the circular dichroism, \(CD=|t_{++}|^{2}-|t_{--}|^{2}\), as well as the polarization rotation angle, \(\theta=(1/2)\left[\arg\left(t_{++}\right)-\arg\left(t_{--}\right)\right]\), a measure of the optical activity. To obtain \(\eta\) and \(\theta\) we need the corresponding transmission coefficients for right-handed and left-handed circularly polarized waves, \(t_{++}\) and \(t_{--}\) respectively; they can be obtained from the corresponding linear polarization coefficients using the general formula [29]
\[T_{CP}=\left[\begin{array}{cc}t_{++}&t_{+-}\\ t_{-+}&t_{--}\end{array}\right]=\frac{1}{2}\left[\begin{array}{cc}(t_{xx}+t_ {yy})+i(t_{xy}-t_{yx})&(t_{xx}-t_{yy})-i(t_{xy}+t_{yx})\\ (t_{xx}-t_{yy})+i(t_{xy}+t_{yx})&(t_{xx}+t_{yy})-i(t_{xy}-t_{yx})\end{array} \right]. \tag{2}\]
which is simplified in our case taking into account the symmetries \(t_{xx}=t_{yy}\) and \(t_{xy}=-t_{yx}\) (these symmetries result to \(t_{+-}=t_{-+}=0\)).
The calculated co- and cross-polarized transmittances (\(T_{xx}=|t_{xx}|^{2}\), \(T_{yx}=|t_{yx}|^{2}\)) as well as the corresponding optical rotation, \(\theta\), and ellipticity, \(\eta\), for our structure are illustrated in Figure 2 (a)-(c). We observe two strong resonances, at 0.8 THz and 1.8 THz, associated with dips in co-polarized transmittance and peaks in the cross-polarized one. In
Figure 1: (a) Illustration of our chiral metamaterial structure. Yellow color indicates the metallic components and light-gray the silicon substrate. (b) Perspective view of our structure unit cell before (left panel) and after (right panel) the twist of the vertical arms. (c) Top view of the chiral unit-cell and (d) side-view of the unit-cell. The geometrical parameters are the following: Lattice constant \(a=120\)\(\mu m\), arm length \(l=96\)\(\mu m\) (for both horizontal, i.e. at \(x-y\) plane, and non-horizontal arms), arm diameter \(D=24\)\(\mu m\) and vertical support-leg height \(h=18\)\(\mu m\) (Note that the support-leg does not affect the EM response of the structure.).
the case of no-twisted U-arms (\(\phi=0\)) both resonances are coming from the U-ring which is perpendicular to the incident magnetic field, as shown by field and current simulations. The first resonance is a typical magnetic resonance, excited both by the incident magnetic field and the incident electric field (the latter due to the bianisotropy of the ring [7 56], coming from the asymmetry of the U-shape in the direction of the electric field). The second resonance is predominantly the electric dipole resonance of the parallel to the electric field U-ring; it is though strongly affected by the coupling between neighboring unit-cells (note that the "vertical" U-arms of neighboring cells form pairs of parallel cut-wires supporting also resonant magnetic response). The twisting of the U-arms (\(\phi\neq 0\)) results to excitation of both U-rings forming our meta-atom; for both rings co-linear electric and magnetic dipoles are excited, resulting to chiral response and thus to the resonant cross-polarized transmission shown in Figure 2(a).
Using the transmission data for linearly polarized waves, we obtain the corresponding ones for criculary polarized waves (via Eq. (2)) and through them we evaluate the optical activity and transmitted wave ellipticity for our structure. The corresponding results are shown in Figures. 2(b) and 2(c); they reveal a quite impressive pure optical activity (i.e. optical activity associated with negligible ellipticity). The optical activity is larger than 45\({}^{\circ}\) in the range from 0.75 to 1.75 THz, i.e. in a relative bandwidth \(\Delta f\simeq 80\%\) (\(\Delta f=(f_{max}-f_{min})/(f_{max}+f_{min})/2\)). The ellipticity in this range is always lower than 2\({}^{\circ}\). This small ellipticity is attributed partially to the dielectrics-free nature of the structure.
Thus, we can summarize that for our proposed design, numerical simulations demonstrate large, pure and broadband optical rotation of linearly polarized waves. This enhanced performance will be verified by the associated experimental studies presented below, and will be discussed further and analyzed in Section 4.
## 3 Fabrication and electromagnetic characterization
In order to fabricate the metamaterial structure depicted in Figure 1, we employed Direct Laser Writing (DLW) by Multiphoton Polymerization (MPP) [57; 58], followed by selective metallization through electroless plating (EP) with silver [59]. The detailed experimental setup used for the fabrication and the subsequent metallization of the fabricated structures is presented at the Experimental Section of this paper. Scanning Electron Microscope Images (SEM) of the fabricated metalized chiral THz metamaterial structures are shown in Figure 3 in top and side view. The geometrical parameters of the metallic structures are measured to be the following: the lattice constant \(a=121.4\)\(\mu m\), the arm length \(l=92.8\)\(\mu m\), the arm diameter \(D=24\)\(\mu m\) and the angle of the arm \(\phi=21^{\circ}\) (note the deviations in the
Figure 2: Panel (a) depicts the numerically calculated co- and cross-polarized transmittance spectra (black-line, \(T_{xx}\), and red-line, \(T_{yx}\), respectively) for the structure of Fig. 1. Panels (b) and (c) illustrate the corresponding optical activity and ellipticity of the transmitted wave.
geometrical parameters from the optimized simulated structure of Figure 2).
To determine the optical characteristics of our chiral metamaterial, we used a terahertz time-domain spectroscopy (THz-TDS) setup based on photoconductive antennas (TOPTICA TeraFlash pro) operating in transmission mode. A schematic representation of the experimental setup in shown in Figure 4. A broadband THz pulse, linearly polarized along the x-axis, is generated by the photoconductive emitter (TX). The THz pulse passes through a wire grid polarizer (GP1) that further defines the polarization along the same axis and impinges on the sample at normal incidence. The transmitted by the sample wave, then, passes through a second wire grid polarizer (GP2) which can be either parallel or perpendicular to GP1. This way, cross- or co-polarized transmission measurements can be performed. The THz wave is finally guided to the photoconductive detector (RX). Since RX is highly sensitive to linearly polarized THz waves, it was rotated by 45\({}^{\circ}\) with respect to x-axis. This ensures that there is always an equal component of the THz field along the x- and y-axis [60; 61]. The THz transmission spectra were normalized to a co-polarized transmission spectrum through a bare silicon substrate. Figure 5 shows the measured transmission power spectra (\(T_{xx}=|t_{xx}|^{2}\) and \(T_{yx}=|t_{yx}|^{2}\)) of our metamaterial at linear polarization incidence, well as the corresponding optical activity, \(\theta\), and ellipticity \(\eta\).
To compare our experimental results to the numerical ones, the same calculations presented before are used.
Figure 3: Scanning Electron Images (SEM) of the metalized Chiral THz Metamaterial structures fabricated on a silicon substrate. (a, b, c) Top view, (d) Side view. The geometrical dimensions of the structure are measured to be: lattice periodicity \(a=121.4\ \mu m\), arm length \(l=92.8\ \mu m\), arm diameter \(D=24\ \mu m\) and angle of the arm \(\phi=21^{\circ}\).
The corresponding data are shown also in Figure 5, next to the experimental ones. For the calculations We assume the following geometrical parameters which are the same as the fabricated sample (with a deviation less than 1 %) with bulk Ag conductivity in the range from \(\sigma=8.7\times 10^{6}\) S/m at 0.1 THz to \(\sigma=3.5\times 10^{6}\) S/m at 2.2 THz as in [59]. Particularly, the lattice constant is \(a=120\)\(\mu m\), the arm length is \(l=90\)\(\mu m\) while each arm is rotated by an angle \(\phi=20^{\circ}\), the arm diameter is \(D=24\)\(\mu m\) and the support vertical leg is \(h=18\)\(\mu m\). As expected, strong polarization rotation with almost zero ellipticity is obtained, for a bandwidth of 1 THz. The optical measurements of our fabricated sample represent this trend very nicely, as can be observed in the left column of Figure 5. In particular, the measured co-polarization transmission amplitude,\(T_{xx}\) in the spectra range from 1 to 2 THz is compressed below 50 %. At the same time, the cross-polarization transmission amplitude, \(T_{yx}\), reaches its maximum value of 20%, indicating effectively the polarization rotation of linearly polarized waves. Using the experimental data, we evaluate the polarization rotation angle and ellipticity as depicted in the left-hand side panels of Figure 5. We observe optical activity up to \(25^{\circ}\) for a frequency range over 1 THz accompanied by zero ellipticity. However, the experimental results are a little weaker as well, as the first resonance appears at somewhat higher frequency than expected from simulations. This difference can be attributed mainly to the difference in the conductivity of the Silver coated sample compared to the bulk conductivity used in the simulations as well as to fabrication imperfections, especially in the cross-sections of the cylindrical arms which are in some cases elliptical instead of circular. Furthermore, at low THz the skin-depth of Silver, strongly depends on the value of the conductivity [62], and in some cases the skin depth can be larger than the achieved of the deposited metal.
## 4 Theoretical analysis and discussion
In this section we discuss first the sensitivity of the electromagnetic response of our design for different geometrical parameters. In particular, we demonstrate the effect of changing the in-plane lattice constant (affecting the coupling with the nearest neighbors) and the vertical arms rotation angle (\(\phi\)) on the transmittance features as a function of frequency. The co- and cross- polarized transmittances (\(T_{xx}=|t_{xx}|^{2},T_{yx}=|t_{yx}|^{2}\)) for different lattice-constant
Figure 4: Schematic representation of the experimental setup. TX:THz emitter, GP1,2:Wire grid polarizers, RX: THz receiver, rotated at \(45^{\circ}\) with respect to x-axis.
values (_a_) as well as the corresponding optical rotations, \(\theta\), are illustrated in Figure 6 (a)-(c). We observe that the increase of the lattice constant leads to lower cross-polarized transmittance and optical rotation values. It does not affect strongly though the position of the resonances, indicating that the nearest-neighbor coupling is not a parameter determining or dominating the structure response.
Regarding the effect of the rotation angle, \(\phi\), it is demonstrated in Figs. 6 (d)-(f). Since the system is non-chiral in the absence of rotation, \(\phi=0\), of the arms, we expect that by tuning the angle \(\phi\) we can highly control the optical activity. Indeed, the numerical data of Figs. 6 (d)-(f) verify this tunability potential, demonstrating once again the strong, broadband and relatively flat pure optical activity of our structure.
To understand and explain the calculated and measured response of our structure, we perform a theoretical analysis for obtaining qualitative formulas for the material parameters determining the structure response, and in particular for evaluating the effective chirality close to the first structure resonance.
We consider our meta-atom element consisting of two perpendicular U-shaped rings, as shown in Figure 7, and we examine the response of both rings to an incident EM field. Each U-ring is considered as a polarizable particle showing electric and magnetic dipole response; the electric and magnetic dipole moments, \(\mathbf{p}\) and \(\mathbf{m}\) respectively, are connected with the local fields by [18; 63]
\[\mathbf{p}=\bar{\mathbf{\alpha}}_{ee}\mathbf{E}+\bar{\mathbf{\alpha}}_{em} \mathbf{H},\qquad\mathbf{m}=\bar{\mathbf{\alpha}}_{mm}\mathbf{H}+\bar{\mathbf{ \alpha}}_{me}\mathbf{E} \tag{3}\]
where \(\bar{\mathbf{\alpha}}_{ee}\), \(\bar{\mathbf{\alpha}}_{mm}\), \(\bar{\mathbf{\alpha}}_{em}\), \(\bar{\mathbf{\alpha}}_{me}\) are the electric, magnetic, electromagnetic, and magnetoelectric polarizability tensors of
Figure 5: The three left-hand side panels show experimentally measured co- and cross- polarized transmittances (top left), optical activity, \(\theta\) (middle left), and ellipticity, \(\eta\) (bottom left). The corresponding simulation results are illustrated in the right-panels and are in good agreement with the experimental data.
the U-ring and the E, H indicate the local fields (i.e. external fields plus fields generated by the induced currents at the ring; we omit for simplicity any coupling between unit cells, and we consider operation in the quasistatic limit).
Considering the incident field configuration shown in Fig. 7, the induced currents at the lowest frequency resonance are as shown by the red arrows. Calculating the currents and/or the accumulated charges one can evaluate the electric and magnetic dipole moments relevant to each ring and through them the individual polarizability elements. To evaluate the currents we consider each U-ring as an effective RLC circuit described by the basic Kirchhoff equation
\[L\frac{dI_{a,b}}{dt}+RI_{a,b}+\frac{Q_{a,b}}{C}=U_{a,b} \tag{4}\]
where the subscripts a and b refer to the configuration of Figures 7a and 7b, respectively, \(I\) is the current, \(Q\) the corresponding charge (\(I=dQ/dt\) ), \(L\) the inductance, \(R\) the resistance and \(C\) the capacitance of each configuration. The source term, \(U\), is the electromotive force (\(EMF=\int\mathbf{E}\cdot d\mathbf{l}_{\mathrm{c}}+\mu_{0}\mathbf{S}\cdot d \mathbf{H}/dt\); \(l_{c}\): conductor length, \(S\): loop-current-enclosed area) resulting from both the external and the induced fields. \(U\) for the configurations a and b can be written as
\[U_{a}=-E_{y}d+E_{x}2l\sin\phi-\mu_{0}l\cos\phi\frac{dH_{x}}{dt}-\mu_{0}l^{2} \cos\phi\sin\phi\frac{dH_{y}}{dt} \tag{5}\]
Figure 6: The top panels (a)-(c) depict the numerically calculated co- and cross- transmittance spectra and the corresponding optical activities at different lattice-constants (\(a\)) of our metamaterial. In the bottom panels (d)-(f), the numerically calculated co- and cross-polarized transmittance spectra and the corresponding optical activities at three different rotation angles of the U-rings arms: \(\phi=10^{o}\), \(\phi=20^{o}\) and \(\phi=30^{o}\), of arms.
\[U_{b}=E_{x}d+E_{y}2l\sin\phi+\mu_{0}d\cos\phi\frac{dH_{y}}{dt}-\mu_{0}l^{2}\cos \phi\sin\phi\frac{dH_{x}}{dt} \tag{6}\]
where \(\mu_{0}\) is the vacuum permeability and \(l,d,\phi\) are the geometrical parameters defined in Figure 7.
Note that, for \(\phi=0\), for the configuration a the magnetic resonance of the structure can not be excited by the incident fields (\(E_{x}\), \(H_{y}\)), while for the configuration b both incident electric and magnetic fields are able to excite loop currents in the U-ring, i.e. the structure is bianisotropic [25].
Considering harmonic time dependence of the form of \(e^{-i\alpha t}\), one can calculate the current in Eq. 4, the corresponding charge \(Q=I/(-i\alpha)\) and, through them, the electric and magnetic dipole moments, \(\mathbf{p}=Q\mathbf{d_{cs}}\), \(\mathbf{m}=I\mathbf{S}\) (\(\mathbf{d_{cs}}\) is the charge separation and \(\mathbf{S}\) the area enclosed by loop currents), as a function of the fields, and the polarizabilities as defined by Eq. 3. The details of the corresponding calculations are given in the Supporting Information.
For qualitative analysis we can consider the total electric and magnetic dipole moments of our double-ring meta-atom as the sum of the dipole moments of configurations a and b (any coupling between the two perpendicular rings is implicitly taken into account through the modification of induced currents in each ring). Having at hand the total electric and magnetic dipole moments of the unit cell, the electric polarization and the magnetic polarization can be calculated via \(\mathbf{P}=\mathbf{p}/V\) and \(\mathbf{M}=\mathbf{m}/V\), respectively, where \(V\) denotes the volume of the unit cell. (Note that we again ignore here, for simplicity, the coupling between unit cells.)
Then the macroscopic (average) material parameters of our structure, including the chirality, which is crucial for the understanding of its chiral response, can be obtained taking into account the standard constitutive relations \(\mathbf{D}=e_{0}\mathbf{E}+\mathbf{P}=\bar{\varepsilon}\mathbf{E}+i(\bar{ \varepsilon}/c)\mathbf{H}\) and \(\mathbf{B}=\mu_{0}(\mathbf{H}+\mathbf{M})=\bar{\mu}\mathbf{H}-i(\bar{ \varepsilon}^{T}/c)\mathbf{E}\). Applying this procedure (see Supporting Information), the effective permittivity and permeability for the double ring result to be scalar quantities (i.e. diagonal tensors with equal elements; note that for each isolated U-ring both diagonal and off-diagonal permittivity and permeability tensor elements appear). The chirality parameter though, \(\bar{\varepsilon}\), has both diagonal and off-diagonal elements, demonstrating
Figure 7: Topologies of the two basic chiral metallic elements constituting the meta-atom of our structure (the meta-atom is the addition of the two elements). Both elements are the same U-shaped ring of twisted (by an angle \(\phi\)) vertical arms. The components of the incident electromagnetic wave are also marked in the figure, along with the currents they excite (red arrows) in the frequencies around the first structure resonance (magnetic-type resonance).
the structure bianisotropy. The diagonal elements, which are the ones involved in the cross-polarized transmission [18; 63] are given by
\[\kappa_{xx}=\kappa_{yy}=\frac{\omega c\mu_{0}l^{2}d\cos\phi\sin\phi}{VL[\omega^{2 }-\omega_{0}^{2}+i\omega(R/L)]} \tag{7}\]
where \(\omega_{0}^{2}=1/LC\) is the resonance frequency of the structure. The off-diagonal elements are given by
\[\kappa_{xy}=-\kappa_{yx}=\frac{\omega c\mu_{0}l\cos\phi(2l^{2}\sin^{2}\phi+d^{ 2})}{VL[\omega^{2}-\omega_{0}^{2}+i\omega(R/L)]} \tag{8}\]
From Eq. 7 one can see that the chirality strength is larger the larger the meta-atom "filing ratio" within the unit cell, a result consistent with the observed in Fig. 6 dependence of the cross-polarized transmittance and optical activity on the unit cell size. It seems also that the length of the vertical meta-atom arms plays more pronounced role in the chirality than that of the horizontal arms (for that we consider that the inductance \(L\) in Eq. (7) is proportional to the ring area (\(=Id\) for \(\phi=0\)). Finally, as is expected, the chirality is generated by the non-zero values of the twist angle \(\phi\) and it is highly affected by \(\phi\), in consistency also with the results of Fig. 6.
Regarding the off-diagonal terms of the chirality tensor, the twist angle \(\phi\), as is seen by Eq. (8), enhances the strength of these terms, i.e. it enhances the strength of the bianisotropy. It is not clear though whether this enhancement is the origin of the reduced ellipticity, an effect observed also in [23].
## 5 Conclusion
We have proposed a chiral metamaterial design made of 3D metallic elements for efficient polarization control of electromagnetic waves in a broad frequency range. Particularly, we have demonstrated both numerically and experimentally that our 3D metamaterial, composed of vertical U-shaped resonators of twisted arms, exhibits strong (\(>45^{\circ}\)) and ultrabroadband (relative bandwidth 80%) optical activity with very low ellipticity in the low THz region. The fabrication of the structure was performed through Direct Laser Writing and subsequent electroless silver plating, while the experimental electromagnetic response was demonstrated via THz time domain spectroscopy. Finally, the structures was studied also analytically, though an equivalent RLC circuit model, and the parameters determining its response were identified.
The large, broadband and pure optical activity of our structure equips it with unique potential for polarization control applications, particularly important in the THz region where there is still a serious lack for efficient optical components. At the same time, our extensive theoretical and numerical study will be instrumental in providing guidance for further expansion and generalization in more complicated systems, where the combination of chirality with other special symmetries or asymmetries may open a new direction in the field of THz photonics.
## 6 Methods and Experimental Section
### Samples Preparation
#### Photosensitive Material:
The material used for the fabrication of the 3D chiral metamaterial was a zirconium-silicon organic-inorganic hy
brid composite doped with metal-binding moieties [59]. It was produced by the addition of methacryloxypryl trimethoxysilane (MAPTMS) to zirconium propoxide (ZPO, 70% in propanol). 2-(dimethylamino)ethyl methacrylate (DMAEMA) was also added to provide the metal-binding moieties that enabled the selective metallization of the dielectric structures. MAPTMS and DMAEMA were used as the organic photopolymerizable monomers, while ZPO and the alkoxysilane groups of MAPTMS served as the inorganic network forming moieties. 4,4-bis(diethylamino) benzophenone (BIS) was used as a photoinitiator. The photopolymerizable material was synthesized as described in detail in Ref. [64]. The samples were prepared by drop-casting onto a 530 \(\mu\)m thick silanized high-resistivity silicon substrate, and the resultant films were dried on a hot plate at 55\({}^{\circ}\) for 60 min before the photopolymerization.
### Direct Laser Writing by Multiphoton Polymerization
In DLW, the beam of an ultrafast laser is tightly focused into the volume of a transparent photopolymerizable resin. Polymerization is initiated only within the focal volume element, viz. the voxel, where the intensity is high enough to trigger multi-photon absorption. Scanning the laser beam inside the material, 3D structures can be directly printed, in a layer-by-layer fashion. After the fabrication process is completed, the sample is immersed into appropriate solvents and the unexposed resin is dissolved to reveal the freestanding 3D structure. A droplet of the photosensitive material was placed onto a 530 \(\mu\)m thick silanized high-resistivity silicon substrate for the photopolymerization. A Femtosecond Fiber Laser (FemtoFiber pro NIR, Toptica Photonics AG) emitting at 780 nm with a pulse duration of 150 fs, average output power 500 mW, and a repetition rate of 80 MHz was employed as a light source [65]. A 40x microscope objective lens (Zeiss, Plan Apochromat, N.A. = 0.95) was used to focus the laser beam into the volume of the photosensitive material. A Galvanometric scanner-based system (Scanlabs Hurryscanll 10, computer-controlled) was used to scan the focused laser beam through the polymeric sample following the predefined metamaterial structure design path. Z-axis scanning and larger-scale x-y movements were possible with the use of a high-precision three-axis linear translation stage (Physik Instrumente). The structures were fabricated in a layer-by-layer fashion starting from the bottom (vertical leg) towards the arms of the structure with the first layer adhering to the surface of the silicon substrate. The scanning speed used was 4000 \(\mu\)m/s. The power for the fabrication of the structures was measured, before the objective, to be 175 mW. The live-monitoring of the fabrication process was achieved using a CCD camera, with appropriate imaging optics. Finally, an overall 3 x 3\(mm^{2}\) metasurface of 24x24 3D chiral meta-atoms array with a periodicity constant of 121.4 \(\mu m\) on the surface of a high resistivity silicon substrate was produced, as is shown in Figure 3.
### Metallization:
After the fabrication of the chiral metamaterials array was completed, the metallization process of the sample followed in order for the structures to become conductive and gain optical activity. The metallization process of the 3D chiral metamaterials was based on selective electroless silver plating according to a modified protocol based on Ref. [59]. This protocol has been shown to offer conductivity to the microstructures of \(\sigma=(5.71\pm 3.01)\times 10^{6}\) (S/m) [64]. EP is a fairly simple process that does not require any specialized equipment, and the metal deposition can be done without using any electrical potential. In general, it is characterized by the selective reduction of metal ions at the surface of a catalytic substrate immersed into an aqueous solution of metal ions, with continued deposition on the substrate through the catalytic action of the deposit itself. In detail, EP comprises three main steps: seeding, reduction, and silver plating.
Seeding: The samples were immersed in a 0.05 mol/L AgNO3 aqueous solution at room temperature for 38 hours. This was followed by thorough rinsing with double distilled (d.d.) water and then left to dry at room temperature.
Reduction: An aqueous sodium borohydride (NaBH4) solution 6.6 M was prepared 24hours before the immersion of
the samples. The solution was very well mixed and kept uncovered to get rid of trapped air bubbles. The samples were subsequently dipped in the solution for 22 hours to reduce the silver ions and form silver nanoparticles. The samples were washed thoroughly in fresh d.d. water and left to dry.
Silver plating: A 0.2 M AgNO3 aqueous solution was mixed with 5.6% NH3 (28% in water) and 1.9 M glucose (C\({}_{6}\)H\({}_{12}\)O\({}_{6}\) > 98%) as a reducing agent, at a volumetric ratio 5:3:8. The samples were immersed in the solution for a few minutes, before the solution turned to dark. In the meanwhile, a fresh solution was prepared to replace the old one. This process was repeated 5 times.
After the metallization process was completed, the dielectric structure was coated by a thin silver nanoparticles sheet. The thickness of this metal coating on the structures was measured from SEM images, and found to be in the range of 80 to 130 nm. This means that the core of the structure was still dielectric. The diversity of the thickness parameter resulted from the fact that the metallic coat is not a smooth and bulk layer of silver on the structure surface, but a film of sprinkled metal nanoparticles of varying diameters, that are attached to the structures.
## 1 Numerical calculations
Numerical calculations were carried out with the commercial software package CST Microwave Studio as well as with the commercial software package COMSOL Multiphysics, employing a finite element solver in the frequency domain. A fine size of tetrahedral spatial mesh was chosen according to the COMSOL physics-controlled mesh. The transmitted and reflected coefficients were simulated for one square unit cell with the periodic boundary conditions on the x- and y- sides. The incident waves were excited on the top of the simulation domain by excitation of x- or y-polarized components sweeping the input port 1 or 2, respectively. These ports also measured the reflected x- and y- polarization waves. The detectors at the bottom simulation domain, output port 3 and 4, measured the transmitted x- and y- polarization waves. The permittivity of the Silicon substrate was assumed to be constant over the frequency range with \(\epsilon=11.9\) and \(\tan(\delta)=0.02\).
## Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
## Acknowledgements
This research work was partly supported by the Hellenic Foundation for Research and Innovation (H.F.R.I) under the "2nd Call for H.F.R.I. Research Projects to support Faculty members and Researchers" (Project Number: 4542), and by the European Union, projects In2Sight (FETOPEN-01-2018-2019-2020, GA:964481) and FABulous (HORIZON-CL4-2022-TWIN-TRANSITION-01-02, GA:101091644). Ms. A. Manousaki provided expert SEM support. Useful communication with Prof. Thomas Koschny is also acknowledged.
## Author Contributions
I.K, A.T. and M.K. carried out the numerical simulations. I.K. and M.K. developed the theoretical model. M.M. and I.S. fabricated the samples and took the SEM images. A.K., C.D. and K.K. carried out optical characterization experiments. C.S., E.E., S.T., M.F. and M.K. contributed to physical insight, supervised the project, guided manuscript organization and edited the manuscript. All authors contributed to the preparation of the manuscript.
## Conflict of Interest
The authors declare no conflict of interest.
|
2310.07525 | ViT-A*: Legged Robot Path Planning using Vision Transformer A* | Legged robots, particularly quadrupeds, offer promising navigation
capabilities, especially in scenarios requiring traversal over diverse terrains
and obstacle avoidance. This paper addresses the challenge of enabling legged
robots to navigate complex environments effectively through the integration of
data-driven path-planning methods. We propose an approach that utilizes
differentiable planners, allowing the learning of end-to-end global plans via a
neural network for commanding quadruped robots. The approach leverages 2D maps
and obstacle specifications as inputs to generate a global path. To enhance the
functionality of the developed neural network-based path planner, we use Vision
Transformers (ViT) for map pre-processing, to enable the effective handling of
larger maps. Experimental evaluations on two real robotic quadrupeds (Boston
Dynamics Spot and Unitree Go1) demonstrate the effectiveness and versatility of
the proposed approach in generating reliable path plans. | Jianwei Liu, Shirui Lyu, Denis Hadjivelichkov, Valerio Modugno, Dimitrios Kanoulas | 2023-10-11T14:24:20Z | http://arxiv.org/abs/2310.07525v1 | # ViT-A*: Legged Robot Path Planning using Vision Transformer A*
###### Abstract
Legged robots, particularly quadrupeds, offer promising navigation capabilities, especially in scenarios requiring traversal over diverse terrains and obstacle avoidance. This paper addresses the challenge of enabling legged robots to navigate complex environments effectively through the integration of data-driven path-planning methods. We propose an approach that utilizes differentiable planners, allowing the learning of end-to-end global plans via a neural network for commanding quadruped robots. The approach leverages 2D maps and obstacle specifications as inputs to generate a global path. To enhance the functionality of the developed neural network-based path planner, we use Vision Transformers (ViT) for map pre-processing, to enable the effective handling of larger maps. Experimental evaluations on two real robotic quadrupeds (Boston Dynamics Spot and Unitree Go1) demonstrate the effectiveness and versatility of the proposed approach in generating reliable path plans.
## I Introduction
Legged robots, and especially quadrupeds, have seen tremendous progress over the past few years allowing them to carry out a wide range of tasks, ranging from package delivery [1], agricultural production [2], and search-rescue missions [3]. Path planning plays a crucial role in enabling legged robots to navigate autonomously and effectively in various complex environments. Several studies, e.g., [4, 5], were dedicated to the development of efficient path planning algorithms for quadrupedal robots, aiming to ensure their safe navigation while avoiding collisions with obstacles. Many of these works have utilized traditional methods such as Rapidly-exploring Random Trees (RRT) and \(A^{*}\)-based methods. Despite all the great efforts, achieving efficient and reliable path plans for mobile robots continues to present an ongoing challenge when using such traditional approaches. For example, established planning methods often struggle to effectively handle the complexities and uncertainties associated with real-time sensor inputs [6].
In contrast, the emergence of planners that integrate data-driven methods, fostered by the advancements in Deep Learning (DL), offers a promising avenue for addressing some of these challenges, e.g., empowering robots to learn and adapt from real-world data [7, 8]. This ability to gather knowledge from real-world experiences equips robots with the capacity to make more informed decisions in diverse and challenging situations.
In this paper, we present an approach that builds upon recent advancements in differentiable planners [8], enabling the learning of end-to-end mapping. Specifically, we focus on generating global paths for quadrupedal robots, by feeding 2D maps with obstacles into a deep neural network for \(A^{*}\)-based learning. Moreover, we enhance the functionality of our neural network-based planner by using a map pre-processing step with Visual Transformers (ViT) [9]. By introducing such an encoding, our method can leverage the strengths of transformers, particularly in capturing long-range dependencies and learning complex relationships in the input map images, while enabling the handling of larger maps efficiently. In the remainder of this paper, we refer to the proposed method as _ViT-\(A^{*}\)_ Path Planner. The contributions of this work can be summarized as follows. We introduce:
* a ViT-based Neural \(A^{*}\) Path Planner (ViT-\(A^{*}\)) that operates efficiently on maps of any dimension;
* a control stack to ensure the successful application of the proposed method on real quadruped robots in numerous application scenarios.
The subsequent sections are structured as follows. Sec. II provides an overview of the relevant literature, discussing previous works in the field, while Sec. III outlines the proposed method in detail. Sec. IV presents the experimental setup, including both simulation and real robot experiments. Finally, in Sec. V, we present the conclusions and discuss future developments of our research.
Fig. 1: The two robots (left: Boston Dynamics Spot, right: Unitree Go1) used for validating the proposed method.
## II Related Work
### _Classical Path Planning_
There are two main approaches to classical path planning algorithms: search-based and sampling-based methods. Search-based path planning provides mathematical guarantees of converging to a solution if it exists. \(A^{*}\) and its modifications have since found extensive use in robot navigation due to their simplicity in implementation and effectiveness in finding valid paths. For instance, in [10] the authors introduced an extension of \(A^{*}\) to drive a mobile platform to sanitize rooms. In [11, 12]\(A^{*}\) algorithms were used to find collision-free paths for a legged or legged-wheeled [13, 14] robots to achieve autonomous navigation. Extensions to these include footstep perception [15, 16, 17] and planning [18], or even navigation among movable obstacles [19, 20, 21]. Traditional methods heavily rely on a fixed heuristic function, such as the Euclidean distance, which lacks adaptability to varying robot configurations and environments. In our work, we propose a novel approach where we learn a heuristic based on the visual appearance of the application scenarios allowing the robot to make more informed decisions and thus reducing the overall search area and planning time.
Sampling-based planners efficiently create paths in high-dimensional space by sampling points in the state space. They can effectively work with continuous spaces. The literature in this context is vast, especially for applications in legged robotics. Some notable contributions in the field include [4], where an extension of an RRT-based algorithm is used for controlling a quadruped robot during the DARPA Challenge in 2015. More recently [5] introduced a novel sampling-based planner that shortens the computational time to find a new path in quadrupedal robots. While these approaches demonstrate satisfactory performance and probabilistic convergence, their limitations lie in the inability to incorporate image-based information directly into the planning process. As a result, their application is restricted in scenarios where planning based on visual data is not essential.
### _Data-Driven Path Planning_
In contrast to the classical path-planning methods, state-of-the-art research in the field has shifted towards more practical solutions, which involve incorporating machine learning techniques. Data-driven methods have emerged as robust solutions to address these challenges by directly learning the behavior of pathfinding. These methods employ approaches such as expert demonstration [22] or imitation learning [6] to learn how to plan paths. Recent works directly address the issue of lack of semantically labeled maps in classical search-based methods by using data-driven approaches directly on raw image [23, 6, 24]. Specifically, Yonetani et al. [7] introduced Neural \(A^{*}\) - a differentiable variant of the canonical \(A^{*}\), coupled with a neural network trained end-to-end. The method works by encoding natural image inputs into guidance maps and searching for path-planning solutions on them, resulting in significant performance improvements over previous approaches both in terms of efficiency and optimality. Our work expands upon this paper.
### _Vision Transformers_
While methods such as the Neural \(A^{*}\)[7] have shown great promise in terms of performance improvements, they face limitations in processing larger maps due to the use of Convolutional Neural Networks (CNNs), where dealing with maps with increasing size could lead to a reduction in performance. This has posed some constraints in terms of processing larger maps. Transformers have emerged as a promising alternative, exhibiting significant performance improvements in various computer vision tasks [25, 26, 27] and robot vision tasks [28, 29, 30]. Transformers have the ability to capture long-range dependencies in images, thanks to their self-attention layers that enable them to attend to any part of the image regardless of the distance from the current location [9, 31]. This is in contrast to CNNs, which are confined to focus on local image patches. Moreover, due to their stacked layers, transformers can learn more complex relationships between different parts of the images while assuming fewer inductive biases [31]. In this work, we exploit the capability of the transformers to learn long-range dependencies to enhance the Neural A* performances with larger maps.
## III ViT-Based Neural \(A^{*}\) Path Planner
### _Neural \(A^{*}\) Planner_
This work expands upon the Neural \(A^{*}\) Path Planner, introduced in [7]. Our method aims to provide global path plans as depicted in Fig. 2. In our approach, we introduce a ViT network, instead of the original CNN-based encoder-decoder structure, to process 2D maps of the environment. Unlike classification tasks using transformers that output a fixed-sized vector, our design allows a path planner to operate on variable-sized map inputs. To achieve this, we incorporate a decoder architecture that converts the embedded vectors for individual image patches back to the required guidance map. By utilizing the attention mechanism, our planner can effectively focus on key features in the planner task, such as obstacles, as well as start and goal positions, while exploiting the differentiable \(A^{*}\)'s ability to learn the decoding efficiently.
Neural \(A^{*}\) is a path planning algorithm that combines the convergence guarantees of \(A^{*}\) with the flexibility that characterizes a neural network to learn how to exploit visual cues in order to find near-optimal paths. In our setting, a \(i\)-th path planning problem is defined as
\[Q^{i}=(X^{i},v^{i}_{s},v^{i}_{g},\overline{P}^{i}), \tag{1}\]
where \(X^{i}\) represents a 2D map of the current scenario, \(v^{i}_{s}\) and \(v^{i}_{g}\) are respectively the start and goal position, and \(\overline{P}^{i}\) is a ground truth binary map representing the desired path. The Neural \(A^{*}\) path planner is composed of distinct sequential steps. Firstly, the 2D map \(X^{i}\) which has dimensions of \(H\times W\times C\), where H and M are the dimensions of the map
and C indicates the number of color channels (\(C=3\) for RGB maps and \(C=1\) for binary occupancy maps) is fed to a CNN-based encoder. The encoder learns to map the raw image input to a guidance map defined as
\[f:\mathbb{R}^{H\times W\times C}\rightarrow\mathbb{R}^{H\times W}.\]
The guidance map represents the cost of traveling to adjacent nodes in the map, which is equivalent to the sum of the heuristic cost and the grid travel cost in regular \(A^{*}\) algorithms. Finally, a minimal cost path is found, following the guidance map and using the traditional \(A^{*}\) algorithm to explore the search space and find a valid path.
The differentiability of the path-finding process in the neural \(A^{*}\) path planner plays a crucial role in enabling the training of the CNN-based encoder pre-processor. This allows the neural network to learn and capture the essential features and patterns required for efficient path planning. The differentiability is achieved through a full matrix reformulation of the \(A^{*}\) algorithm, enabling the computation of gradients accounting for every search step during the back-propagation stage. In this paper, we focus solely on the node selection step for the original \(A^{*}\) for the sake of simplicity (for Neural \(A^{*}\) the cost terms in the node selections step are replaced with the guidance map). Therefore, given the regular \(A^{*}\) node selection rule:
\[v^{*}=argmin_{v\in\mathcal{O}}\left(g(v)+h(v)\right), \tag{2}\]
where \(\mathcal{O}\) represents the list of candidate nodes (assuming a 2D map represented a graph where each pixel is a node), \(g(v)\) refers to the accumulated total cost on the optimal path up to the node \(v\), and \(h(v)\) is the heuristic function that provides an estimation from the candidate node to the goal, the node selection step of the \(A^{*}\) path planner can be redefined in a matrix form as:
\[\mathcal{V}^{*}=\mathcal{I}_{max}\left(\frac{exp\left(-(G+H)/\tau\right) \odot O}{\left(exp\left(-(G+H)/\tau\right),O\right)}\right), \tag{3}\]
where the functions \(G\) and \(H\) are the matrix formulation of the functions \(g(v)\) and \(h(v)\) respectively. The one-hot-encoding scheme is used in the following steps, which acts as a matrix mask such that only the selected node has the value one and zero otherwise. The one-hot-encoding for the next optimal node is denoted as \(\mathcal{V}^{*}\). The parameter \(\tau\) is determined empirically, and the symbol \(A\odot B\) indicates an element-wise product between matrices \(A\) and \(B\). During the forward pass, \(\mathcal{I}_{max}\) is determined using the \(argmax\) function, while during back-propagation, it is treated as an identity.
The loss function is computed as the average \(L_{1}\) loss between the selected nodes from the \(A^{*}\) denoted as \(P\) (which represents a global path), and the ground-truth path map \(\overline{P}^{i}\), which is given as input:
\[\mathcal{L}=\|P-\overline{P}^{i}\|_{1}/\mathcal{V} \tag{4}\]
This loss function serves to supervise the guidance map driven nodes selection by penalizing two types of errors: the absence of nodes that should have been included in \(P\) to correctly reconstruct \(\overline{P}^{i}\), and the presence of an excessive number of nodes in \(P\) that do not belong to \(\overline{P}^{i}\).
### _Vision Transformer for Guidance Map Encoding_
To achieve a sophisticated and attention-based encoding for raw image inputs, we have introduced a Vision Transformer (ViT) [9] to extend the capability of the proposed method. The purpose of the model is to encode an input image into a guidance map by taking into account the visual cues. Detailed schematics of the ViT module can be found in Fig. 2.
The input map is initially represented as a tensor
\[X^{i}\in\mathbb{R}^{H\times W\times C},\]
where \(H\), \(W\), and \(C\) have already been defined in Sec III-A. Since the ViT module expects a sequential input, we reshape the map matrix into a flattened sequence of 2D patch vectors denoted as \(x_{s}\), with the shape of \(x_{s}\in R^{N\times(S^{2}.C)}\). Here, \(S\) represents a hyper-parameter indicating the patch dimension, and \(N=HW/S^{2}\) is the resulting number of patches from the map input.
Fig. 2: Overall system, tested on the real robots. The 2D map is decomposed into patches and then fed to the ViT module. After the encoding-decoding process, the resulting Guidance Map is given to the \(A^{*}\) and it is used to find a global path. Finally, the global path is executed by the navigation stack, which controls the real robot to ensure small tracking errors.
To ensure that the input map size is compatible with the required patches, we have introduced padding. This is necessary when the dimensions of the input map, \(H\) and \(W\), are not an integer multiple of patch size \(S\), and thus may not be segmented into the required patches.
In the sequence of patches, we incorporate a positional embedding, following the approach used in the ViT models [9]. This positional embedding serves to indicate the positional relationship between the patches, mimicking the spatial information presented in the original raw image.
To address the challenge of variable size inputs, we follow the idea proposed in the work presented in [32] where a positional upper bound representing the maximum number of possible patches is introduced. This ensures that the model can handle inputs of varying sizes without sacrificing training efficiency. Subsequently, each vector in the sequence is subjected to encoding, leading to the generation of an embedded vector projected into the hidden dimension. This encoding process effectively converts the input patches into a latent representation that captures their significant features.
Finally, the embedded vector sequence is decoded into vectors of size \(S^{2}\) using the reconstruction decoder. The purpose of this decoder is to reconstruct the guidance map, which is required to have the same dimensions as the input map. The guidance map is a crucial component as it provides essential information for the planning cost. Each individual entry in the guidance map represents a corresponding guidance cost for planning, facilitating the decision-making process based on the encoded visual cues captured by the model.
## IV Experimental Results
In this section, we test our method. Firstly, in Sec. IV-A, we run a path planning comparison between standard \(A^{*}\), Neural Network-based \(A^{*}\) and ViT-based \(A^{*}\). In Sec. IV-B, we test our method on two real robots, Boston Dynamics Spot and Unitree Go1.
### _Benchmarking Comparison_
Here, we compare our proposed ViT-based \(A^{*}\) path planner (ViT-\(A^{*}\)) as described in Sec. III-B, against two baselines: the Neural Network-based \(A^{*}\) (N-\(A^{*}\)) [7] and a classic \(A^{*}\) planner, as described in Sec. III-A. The evaluation is based on 2D maps coming from the MRPB benchmark dataset [33]. As we enabled our planner to work on variable map dimensions without specific training, the maps selected have different sizes that range from \(280\times 280\) to \(760\times 760\) pixels, and are all depicting realistic scenarios, such as offices or rooms (Fig. 3), which helps to bridge the reality gap when deploying the proposed method on real quadrupedal robots.
To guarantee an effective comparison, we defined a precise generation protocol for testing cases. First, to test the planners' generalization capability (especially for maps with different sizes) we generate random samples of start and goal positions. The randomly generated start and goal must not intersect with obstacles in order to define a valid test case. This constraint, together with the fact that there are no closed
\begin{table}
\begin{tabular}{c c c c} \hline maps & ViT-\(A^{*}\) & N-\(A^{*}\) & \(A^{*}\) \\ \hline \hline (a) & 5.68 & **4.70** & 6.03 \\ (b) & 17.31 & **14.73** & 17.51 \\ (c) & **4.81** & 5.17 & 15.59 \\ (d) & **69.97** & 75.20 & 84.84 \\ (e) & **12.73** & 16.57 & 36.24 \\ \hline \end{tabular}
\end{table} TABLE I: **Planning time**: Average run-time (in sec) required to solve a single planning problem on maps from Fig. 3.
Fig. 4: Visualization of planning results by Regular A*, Neural A*, and ViT A*. In green, the search area.
Fig. 3: Maps used for comparing different planning methods with their sizes.
regions in the map we used, ensures the completeness of the plan so it is ensured that a solution path exists. Furthermore, to avoid trivial planning tasks that are too short in length, we force the generation process to separate the start and goal positions by a threshold value. Hence, every planning task exceeds a certain length in the experiments.
In the experiments, we applied the loss function defined in Eq. 4 to train both models. To train the models, the RMSprop optimizer is selected and for both models, a learning rate of 0.001 is used. The CNN model and the ViT model are trained on the same dataset for 300 epochs or until converge criteria are met.
Each planner is evaluated by considering the _planning time_ metric. The planning time is a key feature in evaluating the quality of a planner since it is a measure of the algorithm's efficiency.
The results shown in Table I are obtained by running each planner on the maps shown in Fig. 3. For each map, we compute the average planning time by repeating the path planning task 25 times with different start and goal positions. from Table I it is clear how our approach outperforms N-\(A^{*}\) and \(A^{*}\) especially for maps with larger size. Moreover, in Fig. 4 we show one instance of global path planning for the small map (Fig. 3e). In this figure, it is apparent that our method is more efficient in finding a path due to the reduction of the search area depicted in green.
### _Experiments on Real Quadrupeds_
To integrate the ViT-based \(A^{*}\) path planner with the legged robot's navigation system, the planner is incorporated into an existing 2D ROS Navigation stack1 as a global path planner module. The overall architecture of the navigation stack is illustrated in Fig. 6.
Footnote 1: [http://wiki.ros.org/navigation](http://wiki.ros.org/navigation)
Within the stack, the ViT-\(A^{*}\) module generates the globally optimal path given the occupancy map. This path is then refined via the local planner to ensure compliance with the robot's kinodynamic constraints. In this case, the Timed-Elastic-Band (TEB)[34, 35] local planner is employed. To mitigate the impact of state estimation on the quality of evaluation of the path planning module, an external tracking system, specifically the Phasespace tracking cameras2, is utilized. These cameras offer real-time localization for the robot at 960 Hz. Examples of the robots with active LED tracking markers can be seen in Fig. 1.
Footnote 2: [https://www.phasespace.com/](https://www.phasespace.com/)
Note that in order to integrate the Neural \(A^{*}\) module as a global planner plugin, several modifications were necessary to interface it with the rest of the navigation stack. Firstly, the navigation stack utilizes the _OccupancyGrid_ message to encode the map, which represents each cell in the map with the obstacle probability \(p_{o}\in[0,100]\). However, the current version of the ViT-\(A^{*}\) module can only take a binary occupancy map as input, where each cell\({}_{i}\in\{0,1\}\). To convert the map, an occupancy threshold \(t\) is applied, in accordance to
\[\text{cell}_{i}=\begin{cases}1,&p_{o}\geq t\\ 0,&\text{otherwise}\end{cases} \tag{5}\]
Furthermore, the ViT-\(A^{*}\) module only produces paths as a sequence of positions, without considering the robot's orientation. Consequently, to generate the 3 Degrees of Freedom path compatible with the ROS stack, a simple
Fig. 5: From left to right we show a sequence of the Unitree Go1 (top) and Boston Dynamics Spot (bottom) robots navigating around obstacles using the ViT-\(A^{*}\) driven navigation stack.
Fig. 6: The schematic structure of the navigation stack for the real robot.
forward-only orientation filter is implemented. This filter defines the orientation as the direction facing forward along the path, given a sequence of positions \(\{v_{i}\}\) extracted from a global path, excluding the start and goal positions \(v_{s}\) and \(v_{g}\) (as their orientations are fixed by inputs - i.e. the current pose of the robot and desired goal pose). Hence, the orientation \(\theta_{i}\) is defined as:
\[\theta_{i}=cos^{-1}\frac{v_{i}\cdot v_{i+1}}{|v_{i}||v_{i+1}|} \tag{6}\]
To validate the complete pipeline, tests were conducted on two real robots: Boston Dynamics Spot and Unitree Go1. In these tests, a predefined map is fed into the ViT-\(A^{*}\) module, which generates a global path. Subsequently, the robot executes the planned path via the ROS stack described in Figure 6. The effectiveness of the pipeline has been demonstrated in navigation scenarios around obstacles in our laboratory, as shown in Figure 5.
## V Conclusion
In this study, we present the ViT-\(A^{*}\) planning strategy that enables quadrupedal robots to autonomously and safely navigate in various and complex scenarios. Our proposed method builds upon recent advancements in differential planning and introduces a pre-processing model based on ViT, enabling our approach to handle maps of any size. The effectiveness of the proposed approach has been validated through successful comparison in simulation and on real quadrupedal robots across different scenarios. In future work, we intend to evaluate the performance of our method in outdoor or more complex settings (e.g., autonomous task planning [36]) and explore the benefits of planning directly on an RGB map with a ground truth path designed by humans.
|
2301.11406 | Beyond Arabic: Software for Perso-Arabic Script Manipulation | This paper presents an open-source software library that provides a set of
finite-state transducer (FST) components and corresponding utilities for
manipulating the writing systems of languages that use the Perso-Arabic script.
The operations include various levels of script normalization, including visual
invariance-preserving operations that subsume and go beyond the standard
Unicode normalization forms, as well as transformations that modify the visual
appearance of characters in accordance with the regional orthographies for
eleven contemporary languages from diverse language families. The library also
provides simple FST-based romanization and transliteration. We additionally
attempt to formalize the typology of Perso-Arabic characters by providing
one-to-many mappings from Unicode code points to the languages that use them.
While our work focuses on the Arabic script diaspora rather than Arabic itself,
this approach could be adopted for any language that uses the Arabic script,
thus providing a unified framework for treating a script family used by close
to a billion people. | Alexander Gutkin, Cibu Johny, Raiomond Doctor, Brian Roark, Richard Sproat | 2023-01-26T20:37:03Z | http://arxiv.org/abs/2301.11406v1 | # Beyond Arabic: Software for Perso-Arabic Script Manipulation
###### Abstract
This paper presents an open-source software library that provides a set of finite-state transducer (FST) components and corresponding utilities for manipulating the writing systems of languages that use the Perso-Arabic script. The operations include various levels of script normalization, including visual invariance-preserving operations that subsume and go beyond the standard Unicode normalization forms, as well as transformations that modify the visual appearance of characters in accordance with the regional orthographies for eleven contemporary languages from diverse language families. The library also provides simple FST-based romanization and transliteration. We additionally attempt to formalize the typology of Perso-Arabic characters by providing one-to-many mappings from Unicode code points to the languages that use them. While our work focuses on the Arabic script diagram rather than Arabic itself, this approach could be adopted for any language that uses the Arabic script, thus providing a unified framework for treating a script family used by close to a billion people.
## 1 Introduction
While originally developed for recording Arabic, the Perso-Arabic script has gradually become one of the most widely used modern scripts. Throughout history the script was adapted to record many languages from diverse language families, with scores of adaptations still active today. This flexibility is partly due to the core features of the script itself which over the time evolved from a purely consonantal script to include a productive system of diacritics for representing long vowels and optional marking of short vowels and phonological processes such as gemination Bauer (1996); Kurzon (2013). Consequently, many languages productively evolved their own adaptation of the Perso-Arabic script to better suit their phonology by not only augmenting the set of diacritics but also introducing new consonant shapes.
This paper presents an open-source software library designed to deal with the ambiguities and inconsistencies that result from representing various regional Perso-Arabic adaptations in digital media. Some of these issues are due to the Unicode standard itself, where a Perso-Arabic character can often be represented in more than one way Unicode Consortium (2021). Others are due to the lack or inadequacies of input methods and the instability of modern orthographies for the languages in question Aazim et al. (2009); Liljegren (2018). Such issues percolate through the data available online, such as Wikipedia and Common Crawl Patel (2020), negatively impacting the quality of NLP models built with such data. The script normalization software described below goes beyond the standard language-agnostic Unicode approach for Perso-Arabic to help alleviate some of these issues.
The library design is inspired by and consistent with prior work by Johny et al. (2021), introduced in SS2, who provided a suite of finite-state grammars for various normalization and (reversible) romanization operations for the Brahmic family of scripts.1 While the Perso-Arabic script and the respective set of regional orthographies we support - Balochi, Kashmiri, Kurdish Sorani, Malay (Jawi), Pashto, Persian, Punjabi Shahmukhii, Sindhi, South Azerbaijan, Urdu and Uyghur - is significantly different from those Brahmic scripts, we pursue a similar finite-state interpretation,2 as described in SS3. Implementation details and simple validation are provided in SS4.
Footnote 1: [https://github.com/google-research/nisaba](https://github.com/google-research/nisaba)
Footnote 2: [https://github.com/google-research/nisaba/tree/main/nisaba/scripts/abjad_alphabet](https://github.com/google-research/nisaba/tree/main/nisaba/scripts/abjad_alphabet)
Related Work
The approach we take in this paper follows in spirit the work of Johnry et al. (2021) and Gutkin et al. (2022), who developed a finite-state script normalization framework for Brahmic scripts. We adopt their taxonomy and terminology of low-level script normalization operations, which consist of three types: Unicode-endorsed schemes, such as NFC; further visually-invariant transformations (_visual_ normalization); and transformations that modify a character's shape but preserve pronunciation and the overall word identity (_reading_ normalization).
The literature on Perso-Arabic script normalization for languages we cover in this paper is scarce. The most relevant work was carried out by Ahmadi (2020) for Kurdish, who provides a detailed analysis of orthographic issues peculiar to Sorani Kurdish along with corresponding open-source script normalization software used in downstream NLP applications, such as neural machine translation (Ahmadi and Masoud, 2020). In the context of machine transliteration and spell checking, Lehal and Saini (2014) included language-agnostic minimal script normalization as a preprocessing step in their open-source \(n\)-gram-based transliterator from Perso-Arabic to Brahmic scripts. Bhatti et al. (2014) introduced a taxonomy of spelling errors for Sindhi, including an analysis of mistakes due to visually confusable characters. Razak et al. (2018) provide a good overview of confusable characters for Malay Jawi orthography. For other languages the regional writing system ambiguities are sometimes mentioned in passing, but do not constitute the main focus of work, as is the case with Punjabi Shahmukhi (Lehal and Saini, 2012) and Urdu (Humayoun et al., 2022). The specific Perso-Arabic script ambiguities that abound in the online data are often not exhaustively documented, particularly in work focused on multilingual modeling (N. C., 2022; Bapna et al., 2022). As one moves towards lesser-resourced languages, such as Kashmiri and Uyghur, the NLP literature provides no treatment of script normalization issues and the only reliable sources of information are the proposal and discussion documents from the Unicode Technical Committee (e.g., Bashir et al., 2006; Aazim et al., 2009; Pournader, 2014). A forthcoming paper by Doctor et al. (2022) covers the writing system differences between these languages in more detail than we can include in this short paper.
One area particularly relevant to this study is the work by the Internet Corporation for Assigned Names and Numbers (ICANN) towards developing a robust set of standards for representing various Internet entities in Perso-Arabic script, such as domain names in URLs. Their particular focus is on _variants_, which are characters that are visually confusable due to identical appearance but different encoding, due to similarity in shape or due to common alternate spellings (ICANN, 2011). In addition, they developed the first proposal to systematize the available Perso-Arabic Unicode code points along the regional lines (ICANN, 2015). These studies are particularly important for cybersecurity (Hussain et al., 2016; Ginsberg and Yu, 2018; Ahmad and Erdodi, 2021), but also inform this work.
This software library is, to the best our knowledge, the first attempt to provide a principled approach to Perso-Arabic script normalization for multiple languages, for downstream NLP applications and beyond.
## 3 Design Methodology
The core components are implemented as individual FSTs that can be efficiently combined together in a single pipeline (Mohri, 2009). These are shown in Table 1 and described below.3
Footnote 3: When referring to names of Unicode characters we low-ercase them and omit the common prefix _Arabic (letter)_.
Unicode NormalizationFor the Perso-Arabic string encodings which yield visually identical text, the Unicode standard provides procedures that normalize text to a conventionalized normal form, such as the well-known Normalization Form C (NFC), so that visually identical words are mapped to a conventionalized representative of their equivalence class (Whistler, 2021). We implemented the NFC standard as an FST, denoted \(\mathcal{N}\) in Table 1, that handles three broad types of transformations: compositions, re-orderings and
\begin{table}
\begin{tabular}{l l l l} \hline \hline Op. Type & FST & Language-dep. & Includes \\ \hline NFC & \(\mathcal{N}\) & no & – \\ Common Visual & \(\mathcal{V}_{c}\) & no & \(\mathcal{N}\) \\ Visual & \(\mathcal{V}\) & yes & \(\mathcal{V}_{c}\) \\ Reading & \(\mathcal{R}\) & yes & \(-\) \\ Romanization & \(\mathcal{M}\) & no & \(\mathcal{V}_{c}\) \\ Transliteration & \(\mathcal{T}\) & no & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of script transformation operations.
combinations thereof.
As an example of a first type, consider the _alef with madda above_ letter \(\langle\vec{i}\rangle\) that can be composed in two ways: as a single character (U+0622) or by adjoining _maddah above_ to _alef_ ({ U+0627, U+0653 }). The FST \(\mathcal{N}\) rewrites the adjoined form into its equivalent composed form. The second type of transformation involves the canonical reordering of the Arabic combining marks, for example, the sequence of _shadda_ (U+0651) followed by _kasra_ (U+0650) is reversed by \(\mathcal{N}\). More complex transformations that combine both compositions and re-orderings are possible. For example, the sequence { _alef_ (U+0627), _superscript alef_ (U+0670), _maddah above_ (U+0653) } normalizes to its equivalent form { _alef with madda above_ (U+0622), _superscript alef_ (U+0670) }.
Crucially, \(\mathcal{N}\) is language-agnostic because the NFC standard it implements does not define any transformations that violate the writing system rules of respective languages.
Visual NormalizationAs mentioned in SS2, Johny et al. (2021) introduced the term _visual_ normalization in the context of Brahmic scripts to denote visually-invariant transformations that fall outside the scope of NFC. We adopt their definition for Perso-Arabic, implementing it as a single language-dependent FST \(\mathcal{V}\), shown in Table 1, which is constructed by FST composition: \(\mathcal{V}=\mathcal{N}\circ\mathcal{V}_{c}\circ\mathcal{V}_{l}\), where \(\circ\) denotes the composition operation (Mohri, 2009).4
Footnote 4: See Johny et al. (2021) for details on FST composition and other operations used in this kind of script normalization.
The first FST after NFC, denoted \(\mathcal{V}_{c}\), is language-agnostic, constructed from a small set of normalizations for visually ambiguous sequences found online that apply to all languages in our library. For example, we map the two-character sequence _wav_ (U+0648) followed by _damma_ (U+064F) or _small damma_ (U+0619) to \(u\) (U+06C7).
The second set of visually-invariant transformations, denoted \(\mathcal{V}_{l}\), is language-specific and additionally depends on the position within the word. Four special cases are distinguished that are represented as FSTs: position-independent rewrites (\(\mathcal{V}_{l}^{*}\)), isolated-letter rewrites (\(\mathcal{V}_{l}^{i}\)), rewrites in the word-final position (\(\mathcal{V}_{l}^{\text{f}}\)), and finally, rewrites in "non-final" word positions, which include visually-identical word-initial and word-medial rewrites (\(\mathcal{V}_{l}^{\text{n}}\)). The FST \(\mathcal{V}_{l}\) is composed as \(\mathcal{V}_{l}^{\text{i}}\circ\mathcal{V}_{l}^{\text{f}}\circ\mathcal{V}_{l}^ {\text{n}}\circ\mathcal{V}_{l}^{*}\). Some examples of these transformations for Urdu orthography are shown in Table 2, where the variants shown in the third column are rewritten to their canonical Urdu form in the fourth column.
Reading NormalizationThis type of normalization was introduced for Brahmic scripts by Gutkin et al. (2022), who noted that regional orthographic conventions or lack thereof, which oftentimes conflict with each other, benefit from normalization to some accepted form. Whenever such normalization preserves visual invariance, it falls under the rubric of visual normalization, but other cases belong to _reading_ normalization, denoted \(\mathcal{R}\) in Table 1. Similar to visual normalization, \(\mathcal{R}\) is compiled from language-specific context-dependent rewrite rules. One example of such a rewrite is a mapping from _yeh_ \(\langle\raisebox{-0.86pt}{\scalebox{.7}{$\bullet$}}\rangle\) (U+064A) to _farsi yeh_ \(\langle\raisebox{-0.86pt}{\scalebox{.7}{$\bullet$}}\rangle\) (U+06CC) in Kashmiri, Persian, Punjabi, Sorani Kurdish and Urdu. For Malay, Sindhi and Uyghur, the inverse transformation is implemented as mandated by the respective orthographies.
For efficiency reasons \(\mathcal{R}\) is stored independently of visual normalization \(\mathcal{V}\). At run-time, the reading normalization is applied to an input string \(s\) as \(s^{\prime}=(s\circ\mathcal{V})\circ\mathcal{R}\), which is more efficient than \(s^{\prime}=s\circ\mathcal{R}^{\prime}\), where \(\mathcal{R}^{\prime}=\mathcal{V}\circ\mathcal{R}\).
our scheme the Uyghur \(\textit{yu}\)\(\langle\)\(\mathfrak{g}\)\(\rangle\) (U+06C8) maps to \(\langle\)\(\mathfrak{u}\)\(\rangle\). The transliteration FST \(\mathcal{T}\) converts the strings from Unicode Latin into Perso-Arabic. It is smaller than \(\mathcal{M}\) and is defined as \(\mathcal{T}=\mathcal{M}_{c}^{-1}\).
Character-Language MappingThe geography and scope of Perso-Arabic script adaptations is vast. To document the typology of the characters we developed an easy-to-parse mapping between the characters and the respective languages and/or macroareas that relate to a group of languages building on prior work by ICANN (2015). For example, using this mapping it is easy to find that the letter _beh with small v below_\(\langle\)\(\mathfrak{v}\)\(\rangle\) (U+08A0) is part of the orthography of Wolof, a language of Senegal (Ngom, 2010), while _gaf with ring_\(\langle\)\(\mathfrak{S}\)\(\rangle\) (U+06B0) belongs to Saraiki language spoken in Pakistan Bashir and Conners (2019). This mapping can be used to auto-generate the orthographic inventories for lesser-resourced languages.
## 4 Software Details and Validation
Our software library is implemented using Pynini, a Python library for constructing finite-state grammars and for performing operations on FSTs Gorman and Sproat (2021); Gorman and Sproat (2021). Each FST is compiled from the collections of individual context-dependent letter rewrite rules Mohri and Sproat (1996) and is available in two versions: over an alphabet of UTF-8 encoded bytes and over the integer Unicode code points. The FSTs are stored uncompressed in binary FST archives (FARs) in OpenFst format Allauzen et al. (2007).
The summaries of language-agnostic and language-dependent FSTs over UTF-8 strings are shown in Table 3 and Table 4, respectively. As can be seen from the tables, the language-agnostic and reading normalization FSTs are relatively uncomplicated and small in terms of number of states, arcs and the overall (uncompressed) size on disk. The visual normalization FSTs are significantly larger, which is explained by the number of composition operations used in their construction (see SS3). The reading normalization FSTs for South Azerbaijan and Malay shown in Table 4 implement the identity mapping. This is because we could not find enough examples requiring reading-style normalization in online data (see the Limitations section for more details).
As an informal sanity check we validate the prevalence of normalization on word-frequency lists for Sorani Kurdish (ckb), Sindhi (sd) and Uyghur (ug) from project Crubadan Scannell (2007). Table 5 shows the percentages of tokens and types changed (\(s^{\prime}\neq s\)) by visual normalization on one hand and the combined visual and reading normalization on the other. Urdu has the fewest number of modifications compared to Sorani Kurdish and Sindhi, most likely due to a more regular orthography and stable input methods manifest in the crawled data. Significantly more extensive analysis and experiments in statistical language modeling and neural machine translation for the languages covered in this paper are presented in a forthcoming study Doctor et al. (2022).
ExampleThe use of the library is demonstrated by the following Python example that implements a simple command-line utility for performing reading normalization on a single string using Pynini APIs. The program requires two FAR files that
\begin{table}
\begin{tabular}{l l r r r r r} \hline \hline \multicolumn{2}{l}{Language Information} & \multicolumn{3}{c}{Visual Normalization (\(\mathcal{V}\))} & \multicolumn{3}{c}{Reading Normalization (\(\mathcal{R}\))} \\ Code & Name & \# states & \# arcs & \# Mb & \# states & \# arcs & \# Mb \\ \hline azb & South Azerbaijan & 315 933 & 635 647 & 16.49 & 21 & 735 & 0.012 \\ bal & Balochi & 620 226 & 1 244 472 & 32.31 & 24 & 738 & 0.013 \\ ckb & Kurdish (Sorani) & 1097 937 & 2 199 732 & 57.15 & 39 & 753 & 0.013 \\ fa & Persian & 940 436 & 1 884 347 & 48.96 & 36 & 750 & 0.013 \\ ks & Kashmiri & 172 494 & 3 547 448 & 92.21 & 44 & 794 & 0.014 \\ ms & Malay & 199 777 & 403 373 & 10.45 & 21 & 735 & 0.012 \\ pa & Punjabi & 2 050 154 & 4 105 465 & 106.69 & 24 & 738 & 0.013 \\ ps & Pashto & 291 564 & 587 552 & 15.23 & 24 & 738 & 0.013 \\ sd & Sindhi & 1703 726 & 3 403 283 & 88.53 & 34 & 748 & 0.013 \\ ug & Uyghur & 125 054 & 2 513 231 & 65.31 & 24 & 738 & 0.013 \\ ur & Urdu & 2 071 139 & 4 138 950 & 107.65 & 31 & 745 & 0.013 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of FSTs over UTF-8 strings for visual and reading normalization.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Lang.} & \multicolumn{2}{c}{\(s^{\prime}=s\circ\mathcal{V}\)} & \multicolumn{2}{c}{\(s^{\prime}=(s\circ\mathcal{V})\circ\mathcal{R}\)} \\ & \% tokens & \% types & \% tokens & \% types \\ \hline ckb & 18.27 & 25.84 & 30.07 & 41.26 \\ sd & 17.32 & 14.83 & 21.74 & 17.31 \\ ur & 0.09 & 1.16 & 0.10 & 1.23 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Percentage of tokens and types changed.
store compiled visual and reading normalization grammars, the upper-case BCP-47 language code for retrieving the FST for a given language, and an input string:5
Footnote 5: The infrastructure for compiling the Pynini grammars is described in Johny et al. (2021).
The visual and reading FSTs for a given language are retrieved from the relevant FAR files using load_fst function. The input string is first converted to a linear FST. The visual and reading normalization FSTs are then sequentially composed with the input FST and a shortest path algorithm is applied on the result, which is then converted from a linear FST back to a Python string in apply function to yield the final normalized output.
Some examples of reading normalization produced using the example.py utility above for some of the supported languages are shown in Table 6. For each language, the input string in the second column of the table is normalized to a string shown in the third column. The final column shows the name of a particular letter in the output string that replaced the original letter from the input string, e.g., for Sorani Kurdish (ckb) the following rewrite occurs: _swash kaf_ (U+06AA) \(\rightarrow\)_keheh_ (U+06A9), while for Punjabi (pa), _yeh_ (U+064A) \(\rightarrow\)_farsi yeh_ (U+06CC).
## 5 Conclusion and Future Work
We have presented a flexible FST-based software package for low-level processing of orthographies based on Perso-Arabic script. We described the main components of the architecture consisting of various script normalization operations, romanization/transliteration, and character-language index. We expect to increase the current language coverage of eleven languages to further relatively well-documented orthographies, but also provide treatment for resource-scarce orthographies, such as the Ajami orthographies of Sub-Saharan Africa (Mumin, 2014).
### Limitations
When developing the visual and reading normalization rules for the eleven languages described in this paper we made use of publicly available online data consisting of the respective Wikipedias, Wikipron (Lee et al., 2020), Crubadan (Scannell, 2007) and parts of Common Crawl (Patel, 2020). The latter corpus is particularly noisy and requires non-trivial filtering (Kreutzer et al., 2022). Furthermore, many Wikipedia and Common Crawl documents contain code-switched text in several languages that are recorded in Perso-Arabic. Robust language identification (LID) is required to distinguish between tokens in such sentences (for example, Kashmiri vs. Pashto or Balochi) in order not to confuse between the respective orthographies. Since we did not have access to robust LID models for the languages under study, for lesser-resourced languages such as Kashmiri, Malay in Jawi orthography, South Azerbaijan and Uyghur, it is likely that some of the words we used as examples requiring normalization may have been misclassified resulting in normalizations that should not be there.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Lang. & Input & Output & Correct Output \\ \hline bal & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ ckb & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ fa & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ ks & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ pa & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ sd & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ ug & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ ur & \(\texttt{call}\) & \(\texttt{call}\) & \(\texttt{call}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Some examples of reading normalization. |
2302.10248 | VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge | This paper summarises the findings from the VoxCeleb Speaker Recognition
Challenge 2022 (VoxSRC-22), which was held in conjunction with INTERSPEECH
2022. The goal of this challenge was to evaluate how well state-of-the-art
speaker recognition systems can diarise and recognise speakers from speech
obtained "in the wild". The challenge consisted of: (i) the provision of
publicly available speaker recognition and diarisation data from YouTube videos
together with ground truth annotation and standardised evaluation software; and
(ii) a public challenge and hybrid workshop held at INTERSPEECH 2022. We
describe the four tracks of our challenge along with the baselines, methods,
and results. We conclude with a discussion on the new domain-transfer focus of
VoxSRC-22, and on the progression of the challenge from the previous three
editions. | Jaesung Huh, Andrew Brown, Jee-weon Jung, Joon Son Chung, Arsha Nagrani, Daniel Garcia-Romero, Andrew Zisserman | 2023-02-20T19:27:14Z | http://arxiv.org/abs/2302.10248v2 | # VoxSRC 2022: The Fourth VoxCeleb Speaker Recognition Challenge
###### Abstract
This paper summarises the findings from the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22), which was held in conjunction with INTERSPEECH 2022. The goal of this challenge was to evaluate how well state-of-the-art speaker recognition systems can diaries and recognise speakers from speech obtained "in the wild". The challenge consisted of: (i) the provision of publicly available speaker recognition and diarisation data from YouTube videos together with ground truth annotation and standardised evaluation software; and (ii) a public challenge and hybrid workshop held at INTERSPEECH 2022. We describe the four tracks of our challenge along with the baselines, methods, and results. We conclude with a discussion on the new domain-transfer focus of VoxSRC-22, and on the progression of the challenge from the previous three editions.
Jaesung Huh\({}^{1}\), Andrew Brown\({}^{1}\), Jee-weon Jung\({}^{2}\), Joon Son Chung\({}^{1,3}\), Arsha Nagrani\({}^{1}\)+,
Daniel Garcia-Romero\({}^{4}\), Andrew Zisserman\({}^{1}\)+
Footnote †: Also at Google Research.
\({}^{1}\)Visual Geometry Group, Department of Engineering Science, University of Oxford, UK
\({}^{2}\)Naver Corporation, South Korea
\({}^{3}\)Korea Advanced Institute of Science and Technology, South Korea
\({}^{4}\)AWS AI Labs, USA
[https://mm.kaist.ac.kr/datasets/voxceleb/voxsrc/competition2022.html](https://mm.kaist.ac.kr/datasets/voxceleb/voxsrc/competition2022.html)
**Index Terms**: speaker verification, diarisation, unconstrained conditions
## 1 Introduction
The fourth edition of the VoxCeleb Speaker Recognition Challenge was held in 2022 (VoxSRC-22). The main objectives of this series are to: (i) investigate and advance new speaker recognition research "in the wild"; (ii) gauge and calibrate the performance of current technology through open evaluation tools; and (iii) provide open-source data that is available to all members of the research community.
Each year, VoxSRC introduces a new special focus. In the second installation (VoxSRC-20 [1]), we introduced two new tracks: (i) the self-supervised verification track (inspired by the successes in self-supervised learning [2, 3]), where no speaker labels can be used during the pretrain phase; and (ii) a speaker diarisation track which exploits the VoxConverse [4] dataset. In the third edition (VoxSRC-21 [5]), we added a multi-lingual focus to the verification tracks, to encourage fairness and diversity and to build a more challenging test set.
This year, we introduced a new track focused on semi-supervised domain adaptation. The goal was to assess how models pretrained on large labelled data in a _source_ domain can adapt to a new _target_ domain, given (i) a large set of unlabelled data from the target domain and (ii) a small set of labelled data from the target domain. This is especially relevant and important to low resource real-world scenarios, where large scale labelled data is not available in a target domain, but a sufficiently large dataset from another domain such as VoxCeleb [6] is available.
For the existing speaker verification tracks, we also applied two novel techniques for making more challenging positive and negative pairs for the speaker verification test set, by using a face age classifier and a speaker diarisation dataset, respectively.
This paper details the four evaluation tasks, provided datasets, the submissions, and winners of VoxSRC-22 challenge. Please refer to our website for more information.
## 2 Task Description
### Tracks
The challenge consisted of the following four tracks:
1. Speaker Verification (Closed)
2. Speaker Verification (Open)
3. Semi-supervised domain adaptation (Closed)
4. Speaker diarisation (Open)
For the verification tracks, the open and closed training conditions refer to the training data that is allowed. The tasks of Tracks 1, 2, and 4 were identical to those of last year's challenge, whereas the track 3 task of semi-supervised domain adaptation was newly introduced this year. Please see the following section for further details.
### Data
#### 2.2.1 Speaker Verification - Track 1 and 2
The VoxCeleb datasets [6, 7, 8] contain speech utterances from YouTube videos, including celebrity interviews and TV shows. Please refer to [7] for more detailed descriptions.
**Training sets:** For track 1 (closed) participants were permitted only to use the VoxCeleb2 dev set [8], which contains more than a million utterances from 5,994 speakers. For track 2 (open), participants were permitted to use any other external datasets in addition to the VoxCeleb2 dev set for training, but not the challenge's test data.
**Validation and Test sets:** This year, we focused on making the validation and the test sets more challenging by introducing two new trial types - hard positives and hard negatives.
We constructed hard positives where the age of the speaker differs considerably between the two utterances. The hard positives were found by selecting utterance pairs from the same speaker that have a large age gap (i.e. two audio files for the same identity where the age is very different) via a two step process on private VoxCeleb video data. In this VoxCeleb data, for each video segment we have the face, identity and speech.
First, the age of the speaker is estimated by predicting the age for a random set of frames, using an open-source age prediction network [9], and averaging the result. Second, we sample positive pairs from utterances for the same speaker with large age gaps.
We constructed hard negatives using utterances from the same video. When sampling a negative pair using utterances from different videos, speaker verification systems may be able to rely on cues from the different microphones or room environments to help discriminate the different identities of the speakers, which can make the task easier. Our goal here was therefore to construct harder negative pairs by sampling utterances from different speakers that are from the same audio file. In this case, the microphone and environment noise are shared across the two utterances, and only the identity of the speaker changes. We sampled the hard negative pairs using speaker diarisation datasets, where each audio file consists of multiple short speech segments from different speakers. To generate these trials, we first cropped short speech segments. We then removed segments that are either too short (\(<\)1.5s) or contains overlapping speech. Finally, we selected trials using two segments within an audio file. Full details are given in [10].
We released 305,196 validation pairs and 317,973 test pairs, including these hard positives and negatives. We also included the VoxSRC-19 test pairs in our test set to track the state-of-the-art performance on the same trials. No overlapping speakers exist between validation set and test set. Statistics of the val / test sets are reported in Table 1.
#### 2.2.2 Semi-supervised domain adaptation - Track 3
This year, we introduced a new track focused on semi-supervised domain adaptation. Here, we focused on the problem of how models, pretrained on a large set of data with labels in a source domain, can adapt to a new target domain given: (i) a large set of unlabelled data from the target domain, and (ii) a small set of labelled data from the target domain. Specifically, the domain adaptation that we focused on is from one language in a source domain (mainly English), to a different language in a target domain (Chinese), for the task of speaker verification. Here we use VoxCeleb [7] for the source domain, and CN-Celeb [11] for the target domain.
**Train set:** Participants were allowed to use three types of datasets in this track:
* VoxCeleb2 dev set _with_ speaker labels (Source domain). This can be used for pretraining.
* A large subset of CN-Celeb _without_ speaker labels (Target domain). This can be used for domain adaptation.
* A small subset of CN-Celeb _with_ speaker labels (Target domain) consisting of 20 utterances each from 50 different speakers.
VoxCeleb2 data consists mainly of interview-style utterances, whereas CN-Celeb consists of several different genres. To focus on the language domain adaptation task, we have therefore removed utterances in the "singing", "play", "movie", "advertisement", and "drama" genres from CN-Celeb.
**Validation and Test sets:** For the validation and test set, we provided a list of trial speech pairs from identities in the target domain. We created and released a validation set consisting of 40,000 validation pairs. The test set consists of 30,000 pairs from disjoint identities not present in either CN-Celeb1 or CN-Celeb2. Each trial contains two single-speaker speech segments, of variable length. See Table 1 for detailed statistics.
#### 2.2.3 Speaker Diarisation - Track 4
VoxConverse [4] is a speaker diarisation dataset from diverse domains such as panel discussions, news segments and talk shows. It consists of multi-speaker audio segments with challenging background conditions and overlapping speech. Please refer to [4] for more details.
**Training set:** Similar to previous years, participants were allowed to train their models on _any_ data, except for the test set of the challenge.
**Validation set:** Participants were allowed to use both dev / test sets of the VoxConverse dataset. The total duration of VoxConverse is approximately 64 hours, and the average number of speakers per audio segment ranges between 4 and 6. The average percentage of speech per each audio file is 91%.
**Test set:** The test set contains 360 audio files, created with the identical semi-automatic pipeline used for creating VoxConverse. The Track 4 VoxSRC-2021 test set is included as a subset of the test set. In addition, we included an additional 96 audio files from YouTube videos in diverse categories, including news, documentary, lecture and commercial. Details for both validation and test set are described in Table 2.
## 3 Challenge Mechanics
### Evaluation metrics
A validation toolkit* was provided for both speaker verification and speaker diarisation. Participants were advised to test their models on the validation set for each track using this open-sourced code. The evaluation metrics are identical to VoxSRC 2021 [5].
**Speaker verification.** We reported two evaluation metrics: (i)
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Track** & **Split** & **\# Pairs** & **\# Utter.** & **Segment length (s)** \\ \hline \multirow{2}{*}{1 \& 2} & val & 305,196 & 110,366 & 2.00 / 8.43 / 314.44 \\ & test & 317,973 & 34,684 & 1.98 / 7.36 / 282.16 \\ \hline \multirow{2}{*}{3} & val & 40,000 & 2,400 & 0.44 / 9.38 / 224.65 \\ & test & 30,000 & 18,377 & 1.23 / 9.06 / 89.83 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the speaker verification validation and test sets (Tracks 1–3). **# Pairs** refers to the number of evaluation trial pairs, whereas **# Utter.** refers to the total number of unique speech segments. Segment lengths are reported as min/mean/max.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Split** & **\# audios** & **\# spks** & **Duration (s)** & **speech \%** \\ \hline val & 448 & 1 / 5.5 / 21 & 22.0 / 512.9 / 1200.0 & 11 / 91 / 100 \\ test & 360 & 1 / 5.5 / 28 & 27.5 / 449.2 / 1777.8 & 9 / 88 / 100 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of the speaker diarisation val and test sets (Track 4). Entries that have 3 values are reported as min/mean/max. **# spks:** Number of speakers per video. **Duration (s):** Length of videos in seconds. **speech %:** Percentage of video time that is speech.
the Equal Error Rate (EER) which is a location on a ROC or DET curve where the false acceptance rate and false rejection rate are equal; and (ii) minDCF (\(C_{DET}\)) used by the previous VoxSRC [5, 19] evaluations. We used \(C_{miss}=C_{fa}=1\) and \(P_{tar}=0.05\) in our cost function. The main metric for Tracks 1 and 2 was minDCF, and the final ranking was based only on this score. EER was used as the main metric for Track 3.
**Speaker diarisation.** We chose two diarisation metrics, Diarisation Error Rate (DER) and Jaccard Error Rate (JER). DER is the sum of speaker error, false alarm speech and missed speech. We used a 0.25-second forgiving collar, and overlapping speech was not disregarded. JER is based on the Jaccard index, which is defined as the ratio between the intersection and union of two segmentations. It is computed as a 1 minus the average of Jaccard index of optimal mappings between reference and system speakers [20].
### Baselines
We used same the baseline models as for last year's challenge, so please refer to [5] for more details.
We used the publicly released speaker verification network trained only with VoxCeleb2 dev set for verification tracks [21]. The model is ResNet-34 [22] with ASP pooling [23] and is trained with a combination of angular prototypical loss [24] and cross-entropy loss. This baseline achieved a minDCF of 0.346 and an EER of 5.63% on track 1 and 2 test pairs, but a minDCF of 0.823 and an EER of 16.9% on track 3 test pairs. This performance gap shows the necessity of domain adaptation on different language utterances.
For the diarisation track, we adopted a system described in [25] using a speaker embedding extractor to our baseline speaker model, publicly available py-webrtcvd [26], and an agglomerative hierarchical clustering (AHC) of speaker representation. The resulting model achieved 19.6% DER and 41.4% JER on the challenge test set.
### Submission
The challenge was hosted based on publicly available CodaLab code 1, but hosting on our own evaluation instance for efficient maintenance. Similar to last year, we introduced two phases: "Challenge workshop" and "Permanent" and the challenge results were based on the former phase. Participants could only submit one submission per day and ten submissions in total. Submission for the "Challenge workshop" phase was available until 14\({}^{th}\) of September, 2022. Participants were required to submit reports of their methods and results by 20\({}^{th}\) of September 2022.
Footnote 1: [https://github.com/codalab/codalab-competitions](https://github.com/codalab/codalab-competitions)
[https://mm.kaist.ac.kr/datasets/voxceleb/voxsrc/interspeech2022.html](https://mm.kaist.ac.kr/datasets/voxceleb/voxsrc/interspeech2022.html)
## 4 Workshop
VoxSRC-22 was a hybrid workshop with both in-person and virtual attendance options. The in-person workshop was held on the 22\({}^{nd}\) of September in Incheon Songdo Convensia, the conference venue of INTERSPEECH 2022. The workshop was free of cost for attendees.
The workshop began with an introductory talk from the organisers, followed by a keynote speech from professor Junichi Yamagishi, titled "The use of speaker embeddings in neural audio generations". The winners then gave short presentations about their methods and results. All slides and presentation videos are available on our workshop website2.
Footnote 2: [https://github.com/codalab/codalab-competitions](https://github.com/codalab/codalab-competitions)
## 5 Methods and Results
There were a total of 554 submissions across all four tracks this year. The performances of the top three ranked teams for each track are reported in Table 3 and Table 4, along with their scores. In this section, we give details on the methods used by
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Rank** & **Team Name** & **Organisation** & **DER** & **JER** \\ \hline
3 & AiTER [17] & Gwangju Institute of Science and Technology & 5.12 & 30.82 \\
2 & KristonAI [13] & KristonAI Lab & 4.87 & 25.49 \\
1 & DKU-DukeECE [18] & Duke Kunshan University, Duke University & 4.75 & 27.85 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Winners for the speaker diarisation track (Track 4). The primary metric is **DER**. For both metrics, a lower score is better.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Track** & **Rank** & **Team Name** & **Organisation** & **minDCF** & **EER** \\ \hline \multirow{4}{*}{1} & 3 & SJTU-AISPEECH [12] & Shanghai Jiao Tong University, AISpeech Ltd & 0.101 & 1.911 \\ & 2 & KristonAI [13] & KristonAI Lab & 0.090 & 1.401 \\ & 1 & ravana - ID R\&D [14] & ID R\&D Lab & 0.088 & 1.486 \\ \hline \multirow{4}{*}{2} & 3 & Strasbourg-Spk & Microsoft & 0.073 & 1.436 \\ & 2 & KristonAI [13] & KristonAI Lab & 0.072 & 1.119 \\ & 1 & ravana - ID R\&D [14] & ID R\&D Lab & 0.062 & 1.212 \\ \hline \hline \multirow{4}{*}{3} & 3 & SJTU-AISPEECH [12] & Shanghai Jiao Tong University, AISpeech Ltd & 0.437 & 8.087 \\ & 2 & DKU-Tencent [15] & Duke Kunshan University, Tencent AI Lab & 0.389 & 7.153 \\ & 1 & zadddz [16] & Chinese Academy of Sciences & 0.388 & 7.030 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Winners for the speaker verification tracks (Tracks 1, 2 and 3). The primary metric for Track 1 & 2 is **minDCF** while the primary metric for Track 3 is **EER**. For both metrics, a lower score is better. Note that Track 1 & 2 have an identical test set.
the top two ranked teams from each track.
### Speaker Verification (Track 1 and 2)
This year, Tracks 1 and 2 had the same winners and runner-up. The winning team [14] adopted a fusion of deep ResNets and ECAPA-TDNN [27] along with extensive data augmentations using the MUSAN noise database [28] and RIR [29] responses. For Track 2, they trained the model with their own _Self-VoxCeleb_ dataset inspired by the data collection pipeline from VoxCeleb but only using speech-based filtering. The inclusion of Self-VoxCeleb improved 20-50% of relative performance compared to the models trained only with VoxCeleb2. ASNorm and QMF functions were employed for post-processing the scores.
The second-place [13] employed ResNet variants with diverse input features, model depths and kernel sizes in Track 1. Data augmentation was carried out using the MUSAN noise database and RIR responses, which are similar to the winner's method. Moreover, they applied 3-fold speed augmentation to enlarge the training dataset, resulting in obtaining 17,982 speakers. For Track 2, they utilised several recently proposed pretrained networks, such as WavLM [30] and variants of Wav2Vec2 [31] and ensembled these networks with the models that they trained for Track 1. All of their models followed two-step training, training the model only with short utterances followed by training the model including longer ones with large margin fine-tuning. Their submission performed better than the winning team in terms of EER, but performed slightly worse in minDCF, which is our primary metric.
**Effect of self-supervised speaker models.** This year, the top two winning teams in Track 2 obtained impressive performance gains by utilising models trained with self-supervision on large scale data, which had not been observed in previous additions.
Following the great success in the fields of vision [32, 33, 34] and NLP [35, 36, 37], self-supervised learning has also shown prominent results in speech processing [30, 31, 38]. Since the supervised training of speech networks with labels and annotations disregards rich information in the input signal, self-supervised methods instead enable the model to learn a universal representation, such as speaker information. The winner [14] utilised WavLM [30] and Hubert [38] pretrained models and achieved 30% relative improvement on minDCF, our primary metric. The second place leveraged pretrained WavLM [30] and Wav2Vec2 [31] and finetuned them with VoxCeleb before fusing with other models. They also achieved 20% relative improvement on our primary metric (0.090 to 0.062).
**Analysis on hard positive and negative pairs.** This year we introduced new trial types to make the test set harder, as described in Section 2.2.1. Here we analyse how these pairs affect the winners' performance. The VoxSRC-22 test set consists of four types of trials, (i) hard positive pairs taken from the same speaker at different ages (**P-H**), (ii) hard negative pairs taken from the same environment (**N-H**), (iii) positive pairs from VoxSRC-19 test set (**P-Vox19**), and (iv) negative pairs from VoxSRC-19 test set (**N-Vox19**). We compare the performance of our baseline model and the top 2 winners of track 1 on these subsets.
Table 5 shows the results. The 1st [14] and 2nd place [13] performed better than our baseline model by a large margin. Comparing the performance of E-1 to the others shows that both the hard positives and the hard negatives made the challenge more difficult. For the most challenging set, E-4 with both hard positive and negative pairs, the 1st place method (which achieves an impressive 0.9 % EER on the VoxSRC-19 test set) could only achieve 2.07% on the E-4 eval set. Interestingly, the 2nd place method performed better in E-1, E-2 and E-3 than the 1st place but achieved worse results in E-4. In fact, there is not much difference in overall performance between the first and second placed methods (See Table 3).
### Semi-Supervised Domain Adaptation (Track 3)
The first placed team [16] used two frameworks, pseudo labelling and self-supervised learning, to achieve the winning performance on the target domain. A novel sub-graph clustering algorithm based on two Gaussian fitting and multi-model voting was used for generating pseudo-labels. The model was trained with two stages, first using the labelled source domain data and pseudo-labelled target domain data, and second finetuning CN-Celeb data by fixing the VoxCeleb weights of the classification layer using circle loss. Then the pseudo label correction method was adopted and the model was retrained with them. They also tried various types of domain adaptation techniques, such as CORAL [39] or CORAL+ [40], but the performance did not improve.
The second placed team [15] followed the FFSVC baseline system method [41]. The clustering-based method was used to generate the pseudo-labels of unlabelled target domain data. They used the track 1 speaker model as their initial checkpoint and finetuned with CN-Celeb data and pseudo labels. Sub-center ArArcFace was used for the loss function which was persistent with noisy labels. QMF-based score calibration and score normalisation were used as post-processing steps.
**Discussion.** Table 6 shows the top two teams' performance on the test set compared to several baselines. We trained three baseline models using same architecture and loss functions described in Section 3.2 but with different training sets. **Baseline**
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **Train dataset** & **minDCF** & **EER** \\ \hline Baseline 1 & **L-S** & 0.823 & 16.88 \\ Baseline 2 & **L-T** & 0.999 & 32.47 \\ Baseline 3 & **L-S + L-T** & 0.687 & 13.93 \\ \hline
1st place [16] & **L-S + U-T + L-T** & 0.388 & 7.03 \\
2nd place [15] & **L-S + U-T + L-T** & 0.389 & 7.15 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of winning methods in Track3 with baselines. **L-S** : Labelled data in Source domain, **U-T**: Unlabelled data in Target domain and **L-T** Labelled data in Target domain.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Eval.** & **Positive** & **Negative** & **Baseline** & **1st** & **2nd** \\
**set** & **Pairs** & **Pairs** & **Baseline** & **place** & **place** \\ \hline E-1 & **P-Vox19** & **N-Vox19** & 1.47 & 0.90 & 0.65 \\ E-2 & **P-Vox19** & **N-H** & 3.25 & 1.35 & 1.15 \\ E-3 & **P-H** & **N-Vox19** & 4.50 & 1.33 & 1.18 \\ E-4 & **P-H** & **N-H** & 9.27 & 2.07 & 2.28 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of baseline model and winning methods in Track 1 on four subsets of the test set. We report % EER. Lower is better. **P-Vax19** : positive pairs from VoxSRC-19 test set, **N-Vox19** : negative pairs from VoxSRC-19 test set, **P-H** : hard positive pairs taken from same speaker at different ages, and **N-H** : hard negative pairs taken from the same environment.
1 is identical to the baseline model for Tracks 1 and 2 which was trained only with the VoxCeleb2 dev set, the labelled data in the _source_ domain (**L-S**). We also provide **Baseline 2**, which was trained only with the labelled data in the _target_ domain (**L-T**) from scratch. **Baseline 3** was trained starting from Baseline 1 and finetuned with labelled data in the target domain using a low learning rate (1e-5). None of these baselines utilised the large amounts of unlabelled target domain data that was available to participants.
A comparison on Baseline 1 and 3 shows that including the labelled data in the target domain results in a performance improvement, relatively 2% in terms of EER, even though the size of the labelled target domain data is negligible. However, Baseline 2 shows that using only the labelled target domain data results in a substantial performance decrease due to over-fitting. Finally, the two winners' performances show that utilising the extensive unlabelled target domain data is essential for performance improvement in the train set, such as in the form of pseudo-labelling during training.
### Speaker diarisation (Track 4)
Track 4 saw 101 submissions from 17 different teams this year. The performances of the top three ranked teams are shown in Table 4.
The winner [18] of this track employed a similar approach to their previous year's system [42] in VoxSRC-21. They adopted several conventional clustering-based diarisation system pipelines, which were fused using DOVER-LAP [43]. The differences from their last year's submission are two-fold. First, they adopted a better speaker embedding extractor to bridge the domain gap between VoxCeleb and VoxConverse. Second, they used four different voice activity detection models, ResNet-based, Conformer-based, VAD from _pyamote.audio 2.0_[44] and ASR-based VAD, and performed majority voting from the results of these models. The winner achieved 4.57% DER on the challenge test set.
The second place [13] team also adopted a conventional clustering-based system pipeline. They re-trained the VAD models explained in [42] but with different acoustic features, including 30-dim MFCC and 80-dim filterbank, and fused the results with _pyamote.audio 2.0_. A speaker embedding extractor, also used in their track 1 submission that achieves an EER 0.44% in VoxCeleb1-O has been employed. They applied two steps of clustering, initially with AHC, followed by a re-clustering step using a Bayesian hidden Markov model. Unlike the winning team, they employed an additional module for handling overlapped speech where its training process was similar to that of their VAD models. They assigned the two most likely speakers to the overlapping speech.
## 6 Discussion
This year, the number of workshop participants was high because we offered two options for participation, in-person or virtual attendance. 50 participants attended in person, and 100 participants attended virtually on average. For the winners' talks, the virtual attendees sent the organisers pre-recorded videos which explained their methods and results, while the in-person attendees gave their talks in the workshop venue. Questions were collected from both Zoom and people who attended in-person. All the slides and recorded talks are now available on our website.
For Track 3 which was newly introduced this year, we received a large number of submissions: 89 submissions from 42 participants. This indicates a great interest from the speaker verification community in building methods for bridging the gap between two different domains. The winning methods here leveraged pseudo-labelling using a model pretrained on the abundant labelled source domain data. Interestingly, the winning methods did not see performance boosts from using classic domain adaptation techniques such as CORAL [39]. Explanations for this could be the recent availability of powerful self-supervised speaker models, which have been shown to achieve outstanding speaker verification performance in recent years. We hope more methods specific to domain adaptation could be explored next year.
We have included the entire VoxSRC-19 test set in the verification test set every year, allowing us to compare the techniques from previous rounds of the challenge. Table 7 shows the improvement in the challenge-winning methods over the last four years. By comparing the Track 1 winning methods over the years, we see that this year the winners' performance were slightly worse than in both the 2020 and 2021 editions. However, when comparing Track 2 submissions, the performance improved significantly, possibly due to the inclusion of the self-supervised trained speaker models, such as WavLM [30] or Wav2Vec [31]. Note that the 2nd place performs better on EER than the first place this year but performs worse on the Detection Cost Function (minDCF), which is our primary metric.
provement of state-of-the-art methods in a year. (DER 5.07% vs 4.16%). We also compare the performance between top-2 winners' submissions on the 2021 and 2022 test sets. It demonstrates that this year's test set is more challenging than last year's. Somewhat surprisingly, the second place [13] achieved better performance on the 2021 test set compared to the winner.
**Acknowledgements.** We thank the authors of CN-Celeb for their help and support. We also thank Rajan from Elancer and his team, [http://elancerits.com/](http://elancerits.com/), for their huge assistance with diarisation annotation, Kihyn Nam, Doyeop Kwak and Youngjoon Jang for double-checking the diarisation test set labels and David Pinto for supporting the evaluation server. We are grateful to Mitchell McLaren and Doug Reynolds for their continued support to VoxSRC. This work is funded by the EPSRC programme grant EP/T028572/1 VisualAI. Jaesung Huh is funded by a Global Korea Scholarship. We are grateful to Naver for sponsoring the workshop.
|
2301.09249 | Exploring Active 3D Object Detection from a Generalization Perspective | To alleviate the high annotation cost in LiDAR-based 3D object detection,
active learning is a promising solution that learns to select only a small
portion of unlabeled data to annotate, without compromising model performance.
Our empirical study, however, suggests that mainstream uncertainty-based and
diversity-based active learning policies are not effective when applied in the
3D detection task, as they fail to balance the trade-off between point cloud
informativeness and box-level annotation costs. To overcome this limitation, we
jointly investigate three novel criteria in our framework Crb for point cloud
acquisition - label conciseness}, feature representativeness and geometric
balance, which hierarchically filters out the point clouds of redundant 3D
bounding box labels, latent features and geometric characteristics (e.g., point
cloud density) from the unlabeled sample pool and greedily selects informative
ones with fewer objects to annotate. Our theoretical analysis demonstrates that
the proposed criteria align the marginal distributions of the selected subset
and the prior distributions of the unseen test set, and minimizes the upper
bound of the generalization error. To validate the effectiveness and
applicability of Crb, we conduct extensive experiments on the two benchmark 3D
object detection datasets of KITTI and Waymo and examine both one-stage (i.e.,
Second) and two-stage 3D detectors (i.e., Pv-rcnn). Experiments evidence that
the proposed approach outperforms existing active learning strategies and
achieves fully supervised performance requiring $1\%$ and $8\%$ annotations of
bounding boxes and point clouds, respectively. Source code:
https://github.com/Luoyadan/CRB-active-3Ddet. | Yadan Luo, Zhuoxiao Chen, Zijian Wang, Xin Yu, Zi Huang, Mahsa Baktashmotlagh | 2023-01-23T02:43:03Z | http://arxiv.org/abs/2301.09249v2 | # Exploring Active 3D Object Detection from a Generalization Perspective
###### Abstract
To alleviate the high annotation cost in LiDAR-based 3D object detection, active learning is a promising solution that learns to select only a small portion of unlabeled data to annotate, without compromising model performance. Our empirical study, however, suggests that mainstream uncertainty-based and diversity-based active learning policies are not effective when applied in the 3D detection task, as they fail to balance the trade-off between point cloud informativeness and box-level annotation costs. To overcome this limitation, we jointly investigate three novel criteria in our framework **Crb** for point cloud acquisition - _label conciseness_, _feature representativeness_ and _geometric balance_, which hierarchically filters out the point clouds of redundant 3D bounding box labels, latent features and geometric characteristics (_e.g._, point cloud density) from the unlabeled sample pool and greedily selects informative ones with fewer objects to annotate. Our theoretical analysis demonstrates that the proposed criteria aligns the marginal distributions of the selected subset and the prior distributions of the unseen test set, and minimizes the upper bound of the generalization error. To validate the effectiveness and applicability of **Crb**, we conduct extensive experiments on the two benchmark 3D object detection datasets of KITTI and Waymo and examine both one-stage (_i.e._, **Second**) and two-stage 3D detectors (_i.e._, **Pv**r**cnn). Experiments evidence that the proposed approach outperforms existing active learning strategies and achieves fully supervised performance requiring \(1\%\) and \(8\%\) annotations of bounding boxes and point clouds, respectively. Source code: [https://github.com/Luoyadan/CRB-active-3Ddet](https://github.com/Luoyadan/CRB-active-3Ddet).
## 1 Introduction
LiDAR-based 3D object detection plays an indispensable role in 3D scene understanding with a wide range of applications such as autonomous driving (Deng et al., 2021; Wang et al., 2020) and robotics (Ahmed et al., 2018; Montes et al., 2020; Wang et al., 2019). The emerging stream of 3D detection models enables accurate recognition at the cost of large-scale labeled point clouds, where 7-degree of freedom (DOF) 3D bounding boxes - consisting of a position, size, and orientation information- for each object are annotated. In the benchmark datasets like Waymo (Sun et al., 2020), there are over 12 million LiDAR boxes, for which, labeling a precise 3D box takes more than 100 seconds for an annotator (Song et al., 2015). This prerequisite for the performance boost greatly hinders the feasibility of applying models to the wild, especially when the annotation budget is limited.
To alleviate this limitation, active learning (AL) aims to reduce labeling costs by querying labels for only a small portion of unlabeled data. The criterion-based query selection process iteratively selects the most beneficial samples for the subsequent model training until the labeling budget is run out. The criterion is expected to quantify the sample informativeness using the heuristics derived from _sample uncertainty_(Gal et al., 2017; Du et al., 2021; Caramalau et al., 2021; Yuan et al., 2021; Choi et al., 2021; Zhang et al., 2020; Shi and Li, 2019) and _sample diversity_(Ma et al., 2021; Gudovskiy et al., 2020; Gao et al., 2020; Sinha et al., 2019; Pinsler et al., 2019). In particular, uncertainty-driven approaches focus on the samples that the model is the least confident of their labels, thus searching for the candidates with: maximum entropy (MacKay, 1992; Shannon, 1948; Kim et al., 2021; Siddiqui et al., 2020; Shi and Yu, 2019), disagreement among different experts (Freund et al., 1992; Tran et al., 2019), minimum posterior probability of a predicted class (Wang et al., 2017), or the samples
with reducible yet maximum estimated error (Roy and McCallum, 2001; Yoo and Kweon, 2019; Kim et al., 2021). On the other hand, diversity-based methods try to find the most representative samples to avoid sample redundancy. To this end, they form subsets that are sufficiently diverse to describe the entire data pool by making use of the greedy coreset algorithms (Sener and Savarese, 2018), or the clustering algorithms (Nguyen and Smeulders, 2004). Recent works (Liu et al., 2021; Citovsky et al., 2021; Kirsch et al., 2019; Houlsby et al., 2011) combine the aforementioned heuristics: they measure uncertainty as the gradient magnitude of samples (Ash et al., 2020) or its second-order metrics (Liu et al., 2021) at the final layer of neural networks, and then select samples with gradients spanning a diverse set of directions. While effective, the hybrid approaches commonly cause heavy computational overhead, since gradient computation is required for each sample in the unlabeled pool. Another stream of works apply active learning to 2D/3D object detection tasks (Feng et al., 2019; Schmidt et al., 2020; Wang et al., 2022; Wu et al., 2022; Tang et al., 2021), by leveraging ensemble (Beluch et al., 2018) or Monte Carlo (MC) dropout (Gal and Ghahramani, 2016) algorithms to estimate the classification and localization uncertainty of bounding boxes for images/point clouds acquisition (more details in Appendix I). Nevertheless, those AL methods generally favor the point clouds with more objects, which have a higher chance of containing uncertain and diverse objects. With a fixed annotation budget, it is far from optimal to select such point clouds, since more clicks are required to form 3D box annotations.
To overcome the above limitations, we propose to learn AL criteria for cost-efficient sample acquisition at the 3D box level by empirically studying its relationship with optimizing the generalization upper bound. Specifically, we propose three selection criteria for cost-effective point cloud acquisition, termed as Crb, _i.e., label conciseness_, _feature representativeness_ and _geometric balance_. Specifically, we divide the sample selection process into three stages: (1) To alleviate the issues of label redundancy and class imbalance, and to ensure _label conciseness_, we firstly calculate the entropy of bounding box label predictions and only pick top \(\mathcal{K}_{1}\) point clouds for Stage 2; (2) We then examine the _feature representativeness_ of candidates by formulating the task as the \(\mathcal{K}_{2}\)-medoids problem on the gradient space. To jointly consider the impact of classification and regression objectives on gradients, we enable the Monte Carlo dropout (Mc-dropout) and construct the hypothetical labels by averaging predictions from multiple stochastic forward passes. (3) Finally, to maintain the _geometric balance_ property, we minimize the KL divergence between the marginal distributions of point cloud density of each predicted bounding box. This makes the trained detector predict more accurate localization and size of objects, and recognize both close (_i.e.,_ dense) and distant (_i.e._, sparse) objects at the test time, using minimum number of annotations. We base our criterion design on our theoretical analysis of optimizing the upper bound of the generalization risk, which can be reformulated as distribution alignment of the selected subset and the test set. Note that since the empirical distribution of the test set is not observable during training, WLOG, we make an appropriate assumption of its prior distribution.
**Contributions**. Our work is a pioneering study in active learning for 3D object detection, aiming to boost the detection performance at the **lowest cost of bounding box-level annotations**. To this end, we propose a hierarchical active learning scheme for 3D object detection, which progressively filters candidates according to the derived selection criteria without triggering heavy computation. Extensive experiments conducted demonstrate that the proposed Crb strategy can consistently outperform all the state-of-the-art AL baselines on two large-scale 3D detection datasets irrespective of the detector architecture. To enhance the reproducibility of our work and accelerate future work in this new research direction, we develop a active-3D-det toolbox, which accommodates various AL approaches and 3D detectors. The source code is available in the supplementary material, and will be publicly shared upon acceptance of the paper.
## 2 Methodology
### Problem Formulation
In this section, we mathematically formulate the problem of active learning for 3D object detection and set up the notations. Given an orderless LiDAR point cloud \(\mathcal{P}=\{x,y,z,e\}\) with 3D location \((x,y,z)\) and reflectance \(e\), the goal of 3D object detection is to localize the objects of interest as a set of 3D bounding boxes \(\mathcal{B}=\{b_{k}\}_{k\in[N_{B}]}\) with \(N_{B}\) indicating the number of detected bounding boxes, and predict the associated box labels \(Y=\{y_{k}\}_{k\in[N_{B}]}\in\mathcal{Y}=\{1,\dots,C\}\), with \(C\) being the number of classes to predict. Each bounding box \(b\) represents the relative center position \((p_{x},p_{y},p_{z})\) to the object ground planes, the box size \((l,w,h)\), and the heading angle \(\theta\). Mainstream 3D object detectors
use point clouds \(\mathcal{P}\) to extract point-level features \(\mathbf{x}\in\mathbb{R}^{W\cdot L\cdot F}\)(Shi et al., 2019; Yang et al., 2019; 2020) or by voxelization (Shi et al., 2020), with \(W\), \(L\), \(F\) representing width, length, and channels of the feature map. The feature map \(\mathbf{x}\) is passed to a classifier \(f(\cdot;\mathbf{w}_{f})\) parameterized by \(\mathbf{w}_{f}\) and regression heads \(g(\cdot;\mathbf{w}_{g})\) (_e.g.,_ box refinement and ROI regression) parameterized by \(\mathbf{w}_{g}\). The output of the model is the detected bounding boxes \(\widehat{\mathcal{B}}=\{\hat{b}_{k}\}\) with the associated box labels \(\widehat{Y}=\{\hat{y}_{k}\}\) from anchored areas. The loss functions \(\ell^{cls}\) and \(\ell^{reg}\) for classification (_e.g.,_ regularized cross entropy loss Oberman and Calder (2018)) and regression (_e.g.,_ mean absolute error/\(L_{1}\) regularization Qi et al. (2020)) are assumed to be Lipschitz continuous. As shown in the left half of Figure 1, in an active learning pipeline, a small set of labeled point clouds \(\mathcal{D}_{L}=\{(\mathcal{P},\mathcal{B},Y)_{i}\}_{i\in[m]}\) and a large pool of raw point clouds \(\mathcal{D}_{U}=\{(\mathcal{P})_{j}\}_{j\in[n]}\) are provided at training time, with \(n\) and \(m\) being a total number of point clouds and \(m\ll n\). For each active learning round \(r\in[R]\), and based on the criterion defined by an active learning policy, we select a subset of raw data \(\{\mathcal{P}_{j}\}_{j\in[N_{r}]}\) from \(\mathcal{D}_{U}\) and query the labels of 3D bounding boxes from an oracle \(\mathbf{\Omega}:\mathcal{P}\rightarrow\mathcal{B}\times\mathcal{Y}\) to construct \(\mathcal{D}_{S}=\{(\mathcal{P},\mathcal{B},Y)_{j}\}_{j\in[N_{r}]}\). The 3D detection model is pre-trained with \(\mathcal{D}_{L}\) for active selection, and then retrained with \(\mathcal{D}_{S}\cup\mathcal{D}_{L}\) until the selected samples reach the final budget \(B\), _i.e.,_\(\sum_{r=1}^{R}N_{r}=B\).
### Theoretical Motivation
The core question of active 3D detection is how to design a proper criterion, based on which a fixed number of unlabeled point clouds can be selected to achieve minimum empirical risk \(\mathfrak{R}_{T}[\ell(f,g;\mathbf{w})]\) on the test set \(\mathcal{D}_{T}\) and minimum annotation time. Below, inspired by (Mansour et al., 2009; Ben-David et al., 2010), we derive the following **generalization bound** for active 3D detection so that the desired acquisition criteria can be obtained by optimizing the generalization risk.
**Theorem 2.1**.: _Let \(\mathcal{H}\) be a hypothesis space of Vapnik-Chervonenkis (VC) dimension \(d\), with \(f\) and \(g\) being the classification and regression branches, respectively. The \(\widehat{\mathcal{D}}_{S}\) and \(\widehat{\mathcal{D}}_{T}\) represent the empirical distribution induced by samples drawn from the acquired subset \(\mathcal{D}_{S}\) and the test set \(\mathcal{D}_{T}\), and \(\ell\) the loss function bounded by \(\mathcal{J}\). It is proven that \(\forall\ \delta\in(0,1)\), and \(\forall f,g\in\mathcal{H}\), with probability at least \(1-\delta\) the following inequality holds,_
\[\mathfrak{R}_{T}[\ell(f,g;\mathbf{w})]\leq\mathfrak{R}_{S}[\ell(f,g;\mathbf{w})]+ \frac{1}{2}disc(\widehat{\mathcal{D}}_{S},\widehat{\mathcal{D}}_{T})+\lambda^ {*}+\text{const},\]
_where \(\text{const}=3\mathcal{J}(\sqrt{\frac{\log\frac{4}{3}}{2N_{r}}}+\sqrt{\frac{ \log 3}{2N_{t}}})+\sqrt{\frac{2d\log(N_{r}/d)}{N_{r}}}+\sqrt{\frac{2d\log(N_{t}/d)}{ N_{t}}}\)._
_Notably, \(\lambda^{*}=\mathfrak{R}_{T}[\ell(f^{*},g^{*};\mathbf{w}^{*})]+\mathfrak{R}_{S}[ \ell(f^{*},g^{*};\mathbf{w}^{*})]\) denotes the joint risk of the optimal hypothesis \(f^{*}\) and \(g^{*}\), with \(\mathbf{w}^{*}\) being the model weights. \(N_{r}\) and \(N_{t}\) indicate the number of samples in the \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\). The proof can be found in the supplementary material._
**Remark**.: _The first term indicates the training error on the selected subsets, which is assumed to be trivial based on the zero training assumption (Sener and Savarese, 2018). To obtain a tight upper bound of the generalization risk, the **optimal subset**\(\mathcal{D}_{S}^{*}\) can be determined via minimizing the discrepancy distance of empirical distribution of two sets, i.e.,_
\[\mathcal{D}_{S}^{*}=\operatorname*{arg\,min}_{\mathcal{D}_{S}\subset\mathcal{D} _{U}}disc(\widehat{\mathcal{D}}_{S},\widehat{\mathcal{D}}_{T}).\]
_Below, we define the discrepancy distance for the 3D object detection task._
Figure 1: An illustrative flowchart of the proposed Crb framework for active selection of point clouds. Motivated by optimizing the generalization risk, the derived strategy hierarchically selects point clouds that have non-redundant bounding box labels, latent gradients and geometric characteristics to mitigate the gap with the test set and minimize annotation costs.
**Definition 1**.: _For any \(f,g,f^{\prime},g^{\prime}\in\mathcal{H}\), the discrepancy between the distribution of the selected sets \(\mathcal{D}_{S}\) and unlabeled pool \(\mathcal{D}_{T}\) can be formulated as,_
\[disc(\widehat{\mathcal{D}}_{S},\widehat{\mathcal{D}}_{T})=\sup_{f,f^{\prime}\in \mathcal{H}}|\mathbb{E}_{\widehat{\mathcal{D}}_{S}}\ell(f,f^{\prime})-\mathbb{ E}_{\widehat{\mathcal{D}}_{T}}\ell(f,f^{\prime})|+\sup_{g,g^{\prime}\in \mathcal{H}}|\mathbb{E}_{\widehat{\mathcal{D}}_{S}}\ell(g,g^{\prime})-\mathbb{ E}_{\widehat{\mathcal{D}}_{T}}\ell(g,g^{\prime})|,\]
_where the bounded expected loss \(\ell\) for any classification and regression functions are symmetric and satisfy the triangle inequality._
**Remark**.: _As 3D object detection is naturally an integration of classification and regression tasks, mitigating the set discrepancy is basically aligning the inputs and outputs of each branch. Therefore, with the detector freezed during the active selection, finding an optimal \(\mathcal{D}_{S}^{*}\) can be interpreted as enhancing the acquired set's (1) **Label Conciseness**: aligning marginal label distribution of bounding boxes, (2) **Feature Representativeness**: aligning marginal distribution of the latent representations of point clouds, and (3) **Geometric Balance**: aligning marginal distribution of geometric characteristics of point clouds and predicted bounding boxes, and can be written as:_
\[\mathcal{D}_{S}^{*}\approx\operatorname*{arg\,min}_{\mathcal{D}_{S}\subset \mathcal{D}_{U}}\underbrace{d_{\mathcal{A}}(P_{\widehat{Y}_{S}},P_{Y_{T}})}_{ \text{Consciences}}+\underbrace{d_{\mathcal{A}}(P_{X_{S}},P_{X_{T}})}_{ \text{Representativeness}}+\underbrace{d_{\mathcal{A}}(P_{\phi(\mathcal{P}_{S}, \widehat{B}_{S})},P_{\phi(\mathcal{P}_{T},\mathcal{B}_{T})})}_{\text{Balance}}. \tag{1}\]
_Here, \(\mathcal{P}_{S}\) and \(\mathcal{P}_{T}\) represent the point clouds in the selected set and the ones in the test set. \(\phi(\cdot)\) indicates the geometric descriptor of point clouds and \(d_{\mathcal{A}}\) distance (Kifer et al., 2004) which can be estimated by a finite set of samples. For latent features \(X_{S}\) and \(X_{T}\), we only focus on the features that differ from the training sets, since \(\mathbb{E}_{\widehat{D}_{L}}\ell^{cls}=0\) and \(\mathbb{E}_{\widehat{D}_{L}}\ell^{reg}=0\) based on the zero training error assumption. Considering that test samples and their associated labels are not observable during training, we make an assumption on the prior distributions of test data. WLOG, we assume that the prior distribution of bounding box labels and geometric features are uniform. Note that we can adopt the KL-divergence for the implementation of \(d_{\mathcal{A}}\) assuming that latent representations follow the univariate Gaussian distribution._
**Connections with existing AL approaches.** The proposed criteria jointly optimize the discrepancy distance for both tasks with three objectives, which shows the connections with existing AL strategies. The uncertainty-based methods focus strongly on the first term, based on the assumption that learning more difficult samples will help to improve the suprema of the loss. This rigorous assumption can result in a bias towards hard samples, which will be accumulated and amplified across iterations. Diversity-based methods put more effort into minimizing the second term, aiming to align the distributions in the latent subspace. However, the diversity-based approaches are unable to discover the latent features specified for regression, which can be critical when dealing with a detection problem. We introduce the third term for the 3D detection task, motivated by the fact that aligning the geometric characteristics of point clouds helps to preserve the fine-grained details of objects, leading to more accurate regression. Our empirical study provided in Sec. 3.3 suggests jointly optimizing three terms can lead to the best performance.
### Our Approach
To optimize the three criteria outlined in Eq. 1, we derive an AL scheme consisting of three components. In particular, to reduce the computational overhead, we hierarchically filter the samples that meet the selection criteria (illustrated in Fig. 1): we first pick \(\mathcal{K}_{1}\) candidates by concise label sampling (**Stage 1**), from which we select \(\mathcal{K}_{2}\) representative prototypes (**Stage 2**), with \(\mathcal{K}_{1},\mathcal{K}_{2}<<n\). Finally, we leverage greedy search (**Stage 3**) to find the \(N_{r}\) prototypes that match with the prior marginal distribution of test data. The hierarchical sampling scheme can save \(\mathcal{O}((n-\mathcal{K}_{1})T_{2}+(n-\mathcal{K}_{2})T_{3})\) cost, with \(T_{2}\) and \(T_{3}\) indicating the runtime of criterion evaluation. The algorithm is summarized in the supplemental material. In the following, we describe the details of the three stages.
**Stage 1: Concise Label Sampling (CLs).** By using _label conciseness_ as a sampling criterion, we aim to alleviate label redundancy and align the source label distribution with the target prior label distribution. Particularly, we find a subset \(\mathcal{D}_{S_{1}}^{*}\) of size \(\mathcal{K}_{1}\) that minimizes Kullback-Leibler (KL) divergence between the probability distribution \(P_{Y_{S}}\) and the uniform distribution \(P_{Y_{T}}\). To this end, we formulate the KL-divergence with Shannon entropy \(H(\cdot)\) and define an optimization problem of
maximizing the entropy of the label distributions:
\[D_{KL}(P_{\widehat{Y}_{S_{1}}}\parallel P_{Y_{T}})=-H(\widehat{Y}_{S _{1}})+\log|\widehat{Y}_{S_{1}}|, \tag{2}\] \[\mathcal{D}^{*}_{S_{1}}=\operatorname*{arg\,min}_{\mathcal{D}_{S _{1}}\subset\mathcal{D}_{U}}D_{KL}(P_{\widehat{Y}_{S_{1}}}\parallel P_{Y_{T}} )=\operatorname*{arg\,max}_{\mathcal{D}_{S_{1}}\subset\mathcal{D}_{U}}H( \widehat{Y}_{S_{1}}), \tag{3}\]
where \(\log|\widehat{Y}_{S_{1}}|=log\mathcal{K}_{1}\) indicates the number of values \(Y_{S_{1}}\) can take on, which is a constant. Note that \(P_{Y_{T}}\) is a uniform distribution, and we removed the constant values from the formulations. We pass all point clouds \(\{(\mathcal{P})_{j}\}_{i\in[n]}\) from the unlabeled pool to the detector and extract the predictive labels \(\{\hat{y}_{i}\}_{i=1}^{N_{B}}\) for \(N_{B}\) bounding boxes, with \(\hat{y}_{i}=\operatorname*{arg\,max}_{y\in[C]}f(x_{i};\mathbf{w}_{f})\). The label entropy of the \(j\)-th point cloud \(H(\widehat{Y}_{j,S})\) can be calculated as,
\[H(\widehat{Y}_{j,S})=-\sum_{c=1}^{C}\mathbf{p}_{i,c}\log\mathbf{p}_{i,c},\quad\mathbf{p}_ {i,c}=\frac{e^{|\hat{y}_{i}=c|/N_{B}}}{\sum_{c=1}^{C}e^{|\hat{y}_{i}=c|/N_{B} }}. \tag{4}\]
Based on the calculated entropy scores, we filter out the top-\(\mathcal{K}_{1}\) candidates and validate them through the **Stage 2** representative prototype selection.
**Stage 2: Representative Prototype Selection (RPs).** In this stage, we aim to to identify whether the subsets cover the _unique_ knowledge encoded only in \(\mathcal{D}_{U}\) and not in \(\mathcal{D}_{L}\) by measuring the _feature representativeness_ with gradient vectors of point clouds. Motivated by this, we find the representative prototypes on the gradient space \(\mathcal{G}\) to form the subset \(\mathcal{D}_{S_{2}}\), where magnitude and orientation represent the uncertainty and diversity of the new knowledge. For a classification problem, gradients can be retrieved by feeding the hypothetical label \(\hat{y}=\operatorname*{arg\,max}_{y\in[C]}\mathbf{p}(y|x)\) to the networks. However, the gradient extraction for regression problem is not explored yet in the literature, due to the fact that the hypothetical labels for regression heads cannot be directly obtained. To mitigate this, we propose to enable Monte Carlo dropout (Mc-dropout) at the **Stage 1**, and get the averaging predictions \(\bar{B}\) of \(M\) stochastic forward passes through the model as the hypothetical labels for regression loss:
\[\bar{B}\approx\frac{1}{M}\sum_{i=1}^{M}g(\mathbf{x};\mathbf{w}_{d},\mathbf{w}_{g}),\mathbf{w} _{d}\sim\texttt{Bernoulli}(1-p), \tag{5}\]
\[G_{S_{2}}=\{\nabla_{\Theta}\ell^{reg}(g(\mathbf{x}),\bar{B};\mathbf{w}_{g}),\mathbf{x} \sim\mathcal{D}_{S_{2}}\}, \tag{6}\]
with \(p\) indicating the dropout rate, \(\mathbf{w}_{d}\) the random variable of the dropout layer, and \(\Theta\) the parameters of the convolutional layer of the shared block. The gradient maps \(G_{S_{2}}\in\mathcal{G}\) can be extracted from shared layers and calculated by the chain rule. Since the gradients for test samples are not observable, we make an assumption that its prior distribution follows a Gaussian distribution, which allows us to rewrite the optimization function as,
\[\mathcal{D}^{*}_{S_{2}} =\operatorname*{arg\,min}_{\mathcal{D}_{S_{2}}\subset\mathcal{D}_ {S_{1}}}D_{KL}(P_{X_{S_{2}}}\parallel P_{X_{T}})\approx\operatorname*{arg\,min} _{\mathcal{D}_{S_{2}}\subset\mathcal{D}_{S_{1}}}D_{KL}(P_{G_{S_{2}}}\parallel P _{G_{T}}) \tag{7}\] \[=\operatorname*{arg\,min}_{\mathcal{D}_{S_{2}}\subset\mathcal{D} _{S_{1}}}\log\frac{\sigma_{T}}{\sigma_{S_{2}}}+\frac{\sigma_{S_{2}}^{2}+(\mu_ {S_{2}}-\mu_{T})}{2\delta_{T}^{2}}-\frac{1}{2}\approx\mathcal{K}_{\texttt{2 -medoids}}(G_{S_{1}}),\]
with \(\mu_{S_{2}}\), \(\sigma_{S_{2}}\) (\(\mu_{T}\), and \(\sigma_{T}\)) being the mean and a standard deviation of the univariate Gaussian distribution of the selected set (test set), respectively. Based on Eq. 7, the task of finding a representative set can be viewed as picking \(\mathcal{K}_{2}\) prototypes (_i.e.,_\(\mathcal{K}_{2}\)-medoids) from the clustered data, so that the centroids (mean value) of the selected subset and the test set can be naturally matched. The variance \(\sigma_{S_{2}}\) and \(\sigma_{T}\), basically, the distance of each point to its prototypes will be minimized simultaneously. We test different approaches for selecting prototypes in Sec. 3.3.
**Stage 3: Greedy Point Density Balancing (GPDb).** The third criterion adopted is _geometric balance_, which targets at aligning the distribution of selected prototypes with the marginal distribution of testing point clouds. As point clouds typically consist of thousands (if not millions) of points, it is computationally expensive to directly align the meta features (_e.g.,_ coordinates) of points. Furthermore, in representation learning for point clouds, the common practice of using voxel-based architecture typically relies on quantized representations of point clouds and loses the object details due to the limited perception range of voxels. Therefore, we utilize the point density \(\phi(\cdot,\cdot)\) within each bounding box to preserve the geometric characteristics of an object in 3D point clouds. By
aligning the geometric characteristic of the selected set and unlabeled pool, the fine-tuned detector is expected to predict more accurate localization and size of bounding boxes and recognize both close (_i.e.,_ dense) and distant (_i.e.,_ sparse) objects at the test time. The probability density function (PDF) of the point density is not given and has to be estimated from the bounding box predictions. To this end, we adopt Kernel Density Estimation (KDE) using a finite set of samples from each class which can be computed as:
\[\textbf{p}(\phi(\mathcal{P},\widehat{\mathcal{B}}))=\frac{1}{N_{B}h}\sum_{j=1} ^{N_{B}}\mathcal{K}er(\frac{\phi(\mathcal{P},\widehat{\mathcal{B}})-\phi( \mathcal{P},\widehat{\mathcal{B}}_{j})}{h}), \tag{8}\]
with \(h>0\) being the pre-defined bandwidth that can determine the smoothing of the resulting density function. We use Gaussian kernel for the kernel function \(\mathcal{K}er(\cdot)\). With the PDF defined, the optimization problem of selecting the final candidate sets \(\mathcal{D}_{S}\) of size \(N_{r}\) for the label query is:
\[\mathcal{D}_{S}^{*}=\operatorname*{arg\,min}_{\mathcal{D}_{S}\subset\mathcal{ D}_{S_{2}}}D_{KL}(\phi(\mathcal{P}_{S},\widehat{\mathcal{B}}_{S})\parallel\phi( \mathcal{P}_{T},\mathcal{B}_{T})), \tag{9}\]
where \(\phi(\cdot,\cdot)\) measures the point density for each bounding box. We use greedy search to find the optimal combinations from the subset \(\mathcal{D}_{S_{2}}\) that can minimize the KL distance to the uniform distribution \(\textbf{p}(\phi(\mathcal{P}_{T},\mathcal{B}_{T}))\sim\texttt{uniform}( \alpha_{lo},\alpha_{hi})\). The upper bound \(\alpha_{hi}\) and lower bound \(\alpha_{lo}\) of the uniform distribution are set to the 95% density interval, _i.e.,_\(\textbf{p}(\alpha_{lo}<\phi(\mathcal{P},\widehat{\mathcal{B}}_{j})<\alpha_{hi})=95\%\) for every predicted bounding box \(j\). Notably, the density of each bounding box is recorded during the **Stage 1**, which will not cause any computation overhead. The analysis of time complexity against other active learning methods is presented in Sec. 3.4.
## 3 Experiments
### Experimental Setup
**Datasets.** KITTI (Geiger et al., 2012) is one of the most representative datasets for point cloud based object detection. The dataset consists of 3,712 training samples (_i.e.,_ point clouds) and 3,769 _val_ samples. The dataset includes a total of 80,256 labeled objects with three commonly used classes for autonomous driving: cars, pedestrians, and cyclists. The Waymo Open dataset (Sun et al., 2020) is a challenging testbed for autonomous driving, containing 158,361 training samples and 40,077 testing samples. The sampling intervals for KITTI and Waymo are set to 1 and 10, respectively.
**Generic AL Baselines**. We implemented the following five generic AL baselines of which the implementation details can be found in the supplementary material. (1) **Rand**: is a basic sampling method that selects \(N_{r}\) samples at random for each selection round; (2) **Entropy**(Wang and Shang, 2014): is an _uncertainty_-based active learning approach that targets the _classification_ head of the detector, and selects the top \(N_{r}\) ranked samples based on the entropy of the sample's predicted label; (3) **LLal**(Yoo and Kweon, 2019): is an _uncertainty_-based method that adopts an auxiliary network to predict an indicative loss and enables to select samples for which the model is likely to produce wrong predictions; (4) **Coreset**(Sener and Savarese, 2018): is a _diversity_-based method performing the core-set selection that uses the greedy furthest-first search on both labeled and unlabeled embeddings at each round; and (5) **Badge**(Ash et al., 2020): is a _hybrid_ approach that samples instances that are disparate and of high magnitude when presented in a hallucinated gradient space.
**Applied AL Baselines for 2D and 3D Detection**. For a fair comparison, we also compared three variants of the deep active learning method for 3D detection and adapted one 2D active detection method to our 3D detector. (6) **MC-mi**(Feng et al., 2019) utilized Monte Carlo dropout associated with mutual information to determine the uncertainty of point clouds. (7) **MC-reg**: Additionally, to verify the importance of the uncertainty in regression, we design an _uncertainty_-based baseline that determines the _regression_ uncertainty via conducting \(M\)-round MC-dropout stochastic passes at the test time. The variances of predictive results are then calculated, and the samples with the top-\(N_{r}\) greatest variance will be selected for label acquisition. We further adapted two applied AL methods for 2D detection to a 3D detection setting, where (8) **LT/C**(Kao et al., 2018) measures the class-specific localization tightness, _i.e._, the changes from the intermediate proposal to the final bounding box and (9) **Consensus**(Schmidt et al., 2020) calculates the variation ratio of minimum IoU value for each RoI-match of 3D boxes.
### Comparisons against Active Learning Methods
**Quantitative Analysis**. We conducted comprehensive experiments on the KITTI and Waymo datasets to demonstrate the effectiveness of the proposed approach. The \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) are empirically set to \(300\) and \(200\) for KITTI and \(2,000\) and \(1,200\) for Waymo. Under a fixed budget of point clouds, the performance of 3D and BEV detection achieved by different AL policies are reported in Figure 2, with standard deviation of three trials shown in shaded regions. We can clearly observe that Crb consistently outperforms all state-of-the-art AL methods by a noticeable margin, irrespective of the number of annotated bounding boxes and difficulty settings. It is worth noting that, on the KITTI dataset, the annotation time for the proposed Crb is 3 times faster than Rand, while achieving a comparable performance. Moreover, AL baselines for regression and classification tasks (_e.g._, Llal) or for regression only tasks (_e.g._, Mc-reg) generally obtain higher scores yet leading to higher labeling costs than the classification-oriented methods (_e.g._, Entropy).
Table 1 reports the major experimental results of the state-of-the-art generic AL methods and applied AL approaches for 2D and 3D detection on the KITTI dataset. It is observed that Llal and LT/c achieve competitive results, as the acquisition criteria adopted jointly consider the classification and regression task. Our proposed Crb improves the 3D mAP scores by 6.7% which validates the effectiveness of minimizing the generalization risk.
**Qualitative Analysis**. To intuitively understand the merits of our proposed active 3D detection strategy, Figure 10 demonstrates that the 3D detection results yielded by **Rand** (bottom left) and Crb selection (bottom right) from the corresponding image (upper row). Both 3D detectors are trained under the budget of 1K annotated bounding boxes. False positives and corrected predictions are indicated with red and green boxes. It is observed that, under the same condition, Crb produces more accurate and more confident predictions than Rand. Besides, looking at the cyclist highlighted in the orange box in Figure 10, the detector trained with Rand produces a significantly lower
\begin{table}
\begin{tabular}{c l l c c c c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{Car} & \multicolumn{2}{c}{Pedestrian} & \multicolumn{2}{c}{Cyclist} & \multicolumn{2}{c}{Average} \\ \cline{3-14} & Method & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline \multirow{4}{*}{Crb} & Coreset & 87.77 & 77.73 & 72.95 & 47.27 & 41.97 & 38.19 & 81.73 & 59.72 & 55.64 & 72.26 & 59.81 & 55.59 \\ & Badge & 89.96 & 57.88 & 70.54 & 51.94 & 46.24 & 40.98 & 84.11 & 62.29 & 58.12 & 75.34 & 61.44 & 56.55 \\ & Llal & 89.95 & 78.65 & **75.32** & 56.34 & 49.87 & 45.97 & 75.55 & 60.35 & 55.36 & 73.94 & 62.95 & 58.88 \\ \cline{2-14} & MC-reg & 88.85 & 76.21 & 73.47 & 35.82 & 31.81 & 29.79 & 73.98 & 55.23 & 51.85 & 66.21 & 54.41 & 51.70 \\ & MC-all & 86.28 & 75.58 & 71.56 & 41.05 & 37.50 & 33.83 & 86.26 & 60.22 & 56.04 & 71.19 & 57.77 & 53.81 \\ & COSensus & 90.14 & 78.01 & 74.28 & 56.43 & 49.50 & 44.80 & 78.46 & 55.77 & 53.73 & 75.01 & 61.09 & 57.60 \\ & LT/c & 88.73 & 78.12 & 73.87 & 55.17 & 48.37 & 43.63 & 83.72 & 63.21 & 59.16 & 75.88 & 63.23 & 58.89 \\ \hline \hline \multicolumn{1}{c}{} & Crb & **90.98** & **79.02** & 74.04 & **64.17** & **54.80** & **50.82** & **86.96** & **67.45** & **63.56** & **80.70** & **67.81** & **62.81** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparisons (3D AP scores) with generic AL and applied AL for detection on KITTI _val_ set with 1% queried bounding boxes.
Figure 2: 3D and BEV mAP (%) of Crb and AL baselines on the KITTI and Waymo _val_ split.
confidence score compared to our approach. This confirms that the samples selected by Crb are aligned better with the test cases. More visualizations can be found in the supplemental material.
### Ablation Study
**Citeria.** Table 2 reports the performance comparisons of six variants of the proposed Crb method and the basic random selection baseline (1-st row) on the KITTI dataset. We report the 3D and BEV mAP metrics at all difficulty levels with 1,000 bounding boxes annotated. We observe that only applying Gpdb (4-th row) produces 12.5% lower scores and greater variance than the full model (the last row). However, with Cls (6-th row), the performance increases by approximately 10% with the minimum variance. This phenomenon evidences the importance of optimizing the discrepancy for both classification and regression tasks. It's further shown that removing any selection criteria from the proposed Crb triggers a drop on mAP scores, confirming the importance of each in a sample-efficient AL strategy.
**Sensitivity to Prototype Selection.** We examine the sensitivity of performance to different prototype selection methods used in the Rps module on the KITTI dataset (moderate difficulty level). Particularly, In Figure 4 (right), we show the performance of our approach using different prototype selection methods of the Gaussian mixture model (Gmm), K-means, and K-means++. To fairly reflect the trend in the performance curves, we run two trials for each prototype selection approach and plot the mean and the variance bars. K-means is slightly more stable than the other two, with higher time complexity and better representation learning. It is observed that there is very little variation (\(\sim 1.5\%\)) in the performance of our approach when using different prototype selection methods. This confirms that the Crb's superiority over existing baselines is not coming from the prototype selection method.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{3D Detection mAP} & \multicolumn{3}{c}{BEV Detection mAP} \\ \cline{3-8} Cls & Rps & Gpdb & Easy & Moderate & Hard & Easy & Moderate & Hard \\ \hline - & - & - & \(70.70_{\pm 1.60}\) & \(58.27_{\pm 1.04}\) & \(54.69_{\pm 1.30}\) & \(75.37_{\pm 1.65}\) & \(64.54_{\pm 1.69}\) & \(61.36_{\pm 1.61}\) \\ \(\surd\) & - & - & \(77.76_{\pm 1.70}\) & \(64.56_{\pm 1.39}\) & \(59.54_{\pm 1.13}\) & \(81.07_{\pm 1.67}\) & \(69.76_{\pm 1.45}\) & \(65.01_{\pm 1.31}\) \\ - & \(\surd\) & - & \(74.93_{\pm 1.31}\) & \(61.65_{\pm 1.95}\) & \(57.70_{\pm 1.52}\) & \(78.85_{\pm 2.31}\) & \(67.07_{\pm 1.36}\) & \(63.47_{\pm 1.21}\) \\ - & - & \(\surd\) & \(69.11_{\pm 13.22}\) & \(56.12_{\pm 17.44}\) & \(52.85_{\pm 11.49}\) & \(73.57_{\pm 10.45}\) & \(62.04_{\pm 10.62}\) & \(59.45_{\pm 9.78}\) \\ \(\surd\) & \(\surd\) & - & \(76.19_{\pm 2.13}\) & \(62.81_{\pm 1.31}\) & \(58.03_{\pm 1.18}\) & \(80.73_{\pm 0.92}\) & \(68.67_{\pm 0.21}\) & \(64.42_{\pm 0.22}\) \\ \(\surd\) & - & \(\surd\) & \(76.72_{\pm 0.78}\) & \(64.70_{\pm 1.07}\) & \(59.68_{\pm 0.93}\) & \(80.71_{\pm 0.26}\) & \(70.01_{\pm 0.40}\) & \(65.47_{\pm 0.56}\) \\ \(\surd\) & \(\surd\) & \(\surd\) & \(\mathbf{79.03_{\pm 1.39}}\) & \(\mathbf{65.86_{\pm 1.21}}\) & \(\mathbf{61.06_{\pm 1.43}}\) & \(\mathbf{82.60_{\pm 1.34}}\) & \(\mathbf{70.74_{\pm 0.57}}\) & \(\mathbf{66.41_{\pm 1.22}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablative study of different active learning criteria on the KITTI _val_ split. 3D and BEV AP scores (%) are reported when 1,000 bounding boxes are annotated.
Figure 4: Results on KITTI _val_ set with varying KDE bandwidth \(h\) (left) and prototype selection approaches (right) with increasing queried bounding boxes.
Figure 3: A case study of active 3D detection performance of **Rand** (bottom left) and **Crb** (bottom right) under the budge of 1,000 annotated bounding boxes. False positive (corrected predictions) are highlighted in red (green) boxes. The orange box denotes the detection with low confidence.
**Sensitivity to Bandwidth \(h\).** Figure 4 depicts the results of Crb with the bandwidth \(h\) varying in \(\{3,5,7,9\}\). Choosing the optimal bandwidth value \(h^{*}\) can avoid under-smoothing (\(h<h^{*}\)) and over-smoothing (\(h>h^{*}\)) in KDE. Except \(h=3\) which yields a large variation, Crb with the bandwidth of all other values reach similar detection results within the 2% absolute difference on 3D mAP. This evidences that the Crb is robust to different values of bandwidth.
**Sensitivity to Detector Architecture**. We validate the sensitivity of performance to choices of one-stage and two-stage detectors. Table 4 reports the results with the Second detection backbone on the KITTI dataset. With only 3% queried 3D bounding boxes, it is observed that the proposed Crb approach consistently outperforms the SOTA generic active learning approaches across a range of detection difficulties, improving 4.7% and 2.8% on 3D mAP and BEV mAP scores.
**Sensitivity Analysis of Thresholds \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\).** We examine the sensitivity of our approach to different values of threshold parameters \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\). We report the mean average precision (mAP) on the KITTI dataset, including both 3D and BEV views at all difficulty levels. We check four possible combinations of \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) and show the results in Table 3. We can observe that at Moderate and Hard levels, there is only 3.28% and 2.81% fluctuation on average mAP. In the last row, we further report the accuracy achieved by the backbone detector trained with all labeled training data and a larger batch size. With only 8% point clouds and 1% annotated bounding boxes, Crb achieves a comparable performance to the full model.
### Complexity Analysis
Table 5 shows the time complexity of training and active selection for different active learning approaches. \(n\) indicates the total number of unlabeled point clouds, \(N_{r}\) is the quantity selected, and \(E\) is the training epochs, with \(N_{r}\ll n\). We can clearly observe that, at training stage, the complexity of all AL strategies is \(\mathcal{O}(En)\), except Llal that needs extra epochs \(E_{l}\) to train the loss prediction module. At the active selection stage, Rand randomly generates \(N_{r}\) indices to retrieve samples from the pool. Coreset computes pairwise distances between the embedding of selected samples and unlabeled samples that yields the time complexity of \(\mathcal{O}(N_{r}n)\). Badge iterates through the gradients of all unlabeled samples passing gradients into K-means++ algorithm, with the complexity of \(\mathcal{O}(N_{r}n)\) bounded by K-means++. Given \(\mathcal{K}_{1},\mathcal{K}_{2}\approx N_{r}\), the time complexity of our method is \(\mathcal{O}(n\log n+2N_{r}^{2})\), with \(\mathcal{O}(n\log(n))\) being the complexity of sorting the entropy scores in Cls, and \(\mathcal{O}(N_{r}^{2})\) coming from \(\mathcal{K}_{2}\)-medoids and greedy search in Rps and Gpdb. Note that, in our case, \(\mathcal{O}(n\log n+2N_{r}^{2})<\mathcal{O}(N_{r}n)\). The complexity of simple ranking-based baselines is \(\mathcal{O}(n\log(n))\) due to sorting the sample acquisition scores. Comparing our method with recent state-of-the-arts, Llal has the highest training complexity, and Badge and Coreset have the highest selection complexity. Unlike the existing baseline, training and selection complexities of the proposed Crb are upper bounded by the reasonable asymptotic growth rates.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{3D Detection mAP} & \multicolumn{3}{c}{BEV Detection mAP} & \multicolumn{3}{c}{BEV Detection mAP} \\ \cline{2-13} & & Easy & Moderate & Hard & Easy & Moderate & Hard & Easy & Moderate & Hard \\ \hline Rand & 75.23 & 60.83 & 56.55 & 80.20 & 67.56 & 63.30 & & & & & & & \\ Llal & 72.02 & 58.96 & 54.21 & 79.50 & 66.82 & 62.48 & & & & & & & \\ Coreset & 74.74 & 58.86 & 54.61 & 79.71 & 65.53 & & & & & & & & \\ Badge & 75.38 & 61.65 & 56.72 & 80.81 & 68.33 & & & & & & & & \\ Crb & **78.96** & **64.27** & **59.60** & **83.28** & **70.49** & **66.09** & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: AL Results with one-stage 3D detector Second.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{Car} & \multicolumn{3}{c}{Pedestrian} & \multicolumn{3}{c}{Cyclist} & \multicolumn{3}{c}{Average} \\ \cline{2-13} \(\mathcal{K}_{1}\) & \(\mathcal{K}_{2}\) & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline \(500\) & 400 & 90.04 & 79.08 & **74.66** & \(57.11\) & \(51.10\) & **51.12** & 81.97 & 63.40 & 59.62 & 76.50 & 64.53 & 60.10 \\ \(500\) & 300 & 90.58 & 79.02 & 74.04 & 64.17 & 54.80 & 50.82 & **86.96** & **67.45** & **63.56** & **80.70** & **67.81** & **62.81** \\ \(400\) & 300 & **91.30** & **79.21** & 74.00 & 62.93 & 55.67 & 49.27 & 79.02 & 60.50 & 56.74 & 77.75 & 65.13 & 60.00 \\ \(300\) & 200 & 90.45 & 78.81 & 73.44 & **65.00** & **55.91** & **51.12** & 84.82 & 65.77 & 61.53 & 80.09 & 67.32 & 62.05 \\ \hline PV-rcnn\({}^{\dagger}\) & 92.56 & 84.36 & 82.48 & 64.26 & 56.67 & 51.91 & 88.88 & 71.95 & 66.78 & 81.75 & 70.99 & 67.06 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparisons on KITTI _val_ set _w.r.t._ varying thresholds \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) after two rounds of active selection (8% point clouds, 1% bounding boxes). Results are reported with 3D AP with 40 recall positions. \({}^{\dagger}\) indicates the reported performance of the backbone trained with the full labeled set (100%).
\begin{table}
\begin{tabular}{l c c} \hline \hline AL Strategy & Training & Selection \\ \hline Rand & \(\mathcal{O}(En)\) & \(\mathcal{O}(N_{r})\) \\ \(\mathcal{O}(En)\) & \(\mathcal{O}(n\log n)\) \\ \(\mathcal{O}(En)\) & \(\mathcal{O}(n\log n)\) \\ \(\mathcal{O}(En)\) & \(\mathcal{O}(n\log n)\) \\ \(\mathcal{O}((E+E)_{l})\) & \(\mathcal{O}(n\log n)\) \\ \(\mathcal{O}(En)\) & \(\mathcal{O}(n\log n)\) \\ \(\mathcal{O}(En)\) & \(\mathcal{O}(N_{r}n)\) \\ \(\mathcal{O}(En)\) & \(\mathcal{O}(n\log n+2N^{2})\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Complexity Analysis.
## 4 Discussion
This paper studies three novel criteria for sample-efficient active 3D object detection, that can effectively achieve high performance with minimum costs of 3D box annotations and runtime complexity. We theoretically analyze the relationship between finding the optimal acquired subset and mitigating the sets discrepancy. The framework is versatile and can accommodate existing AL strategies to provide in-depth insights into heuristic design. The limitation of this work lies in a set of assumptions made on the prior distribution of the test data, which could be violated in practice. For more discussions, please refer to Sec. A.1 in Appendix. In contrast, it opens an opportunity of adopting our framework for active domain adaptation, where the target distribution is accessible for alignment. Addressing these two avenues is left for future work.
## Reproducibility Statement
The source code of the developed active 3D detection toolbox is available in the supplementary material, which accommodates various AL approaches and one-stage and two-stage 3D detectors. We specify the settings of hyper-parameters, the training scheme and the implementation details of our model and AL baselines in Sec. B of the supplementary material. We show the proofs of Theorem C.1 in Sec. C followed by the overview of the algorithm in Sec. D in the supplementary material. We repeat the experiments on the KITTI dataset 3 times with different initial labeled sets and show the standard deviation in plots and tables.
## Ethics Statement
Our work may have a positive impact on communities to reduce the costs of annotation, computation, and carbon footprint. The high-performing AL strategy greatly enhances the feasibility and practicability of 3D detection in critical yet data-scarce fields such as medical imaging. We did not use crowdsourcing and did not conduct research with human subjects in our experiments. We cited the creators when using existing assets (_e.g._, code, data, models).
In this appendix, we discuss the prior distribution selection, motivation of the Stage 2, and evaluation division of difficulty in Sec A.1 and Sec A.2, respectively. In the rest of the supplementary material, we provide the implementation details of all baselines and the proposed approach in Sec B followed by the proof of Theorem C.1. In Sec D, the overall algorithm is summarized. Additional experimental results on KITTI (Sec E) and Waymo (Sec F) datasets are reported and analyzed. We further conducted supplemental experiments on parameter sensitivity (Sec H) and visualizations (Sec G). In the end, we leave the related work and the associated discussion in Sec I.
## Appendix A Appendix
### More Discussions on Prior Distribution
In mainstream 3D detection datasets, the curated test set is commonly long-tailed distributed, with a few head classes (_e.g._, car) possessing a large number of samples and all the rest of the tail classes possessing only a few samples. As such, the trained detector can be easily biased towards head classes with massive training data, resulting in high accuracy on head classes and low accuracy on tail classes. This suggests that for 3D detection tasks, **mean average precision (mAP)** can be a **fairer** metric of evaluation, by taking an average of all AP values per class. When the test label is uniformly distributed, mAP scores will be equal to the AP scores for all samples. This motivates us to choose the uniform distribution as the prior distribution, rather than estimating the test label distribution from the initial labeled set \(\mathcal{D}_{L}\). In this case, the trained model tends to be more robust and resilient to the imbalanced training data, achieving higher mAP scores.
To justify the effectiveness of choosing the uniform distribution, we provide more comparisons with the SOTA active learning methods in Table 7 and Table 6, which do not take the uniform distribution as an assumption. We clearly observe that such AL methods perform poorly on **tail classes** (_e.g._, pedestrian and cyclist), confirming that the yielded models are biased towards learning car samples.
### More Discussions on Evaluation Division of Difficulty
On the KITTI dataset, the evaluation difficulty is set based on the visual look1 of the images, which is supposed to be unavailable for our LiDAR-based detection task. On the other hand, the Waymo dataset leverages a more reasonable and general setting of difficulty evaluation, with LEVEL 1 and LEVEL 2 difficulties indicating "more than five points" and "at least one point" inside labeled bounding boxes, respectively. This aligns with the design of the balance criterion (Stage 3), as the sparse point clouds or dense point clouds can be equally learned. In Table 8, we report the performance of the proposed approach with a small portion of point clouds and the fully supervised baseline reported in (Zhang et al., 2022), on the Waymo dataset. From Table 8, we can observe that the performance gap between the detectors trained with active learning (approx. 50K bounding box annotations) and fully supervised learning (approx. 8 million bounding box annotations) is smaller in LEVEL 2 (\(7.18\%\) in LEVEL 2 vs \(8.08\%\) in LEVEL 1), which aligns with the balance criteria in the proposed CRB framework.
Footnote 1: [http://www.cvlibs.net/datasets/kitti/eval_object.php](http://www.cvlibs.net/datasets/kitti/eval_object.php)
### More Discussions on the Motivation of the Stage 2
Our main objective of Stage 2, _i.e._, Representative Prototype Selection is to determine a subset \(\mathcal{D}^{*}_{S_{2}}\) from the pre-selected set \(S_{1}\) in the last stage, by minimizing the set discrepancy in the latent feature space. However, the test features are not observable during the training phase, and it is hard to guarantee that the feature distribution can be comprehensively captured. As stated in Remark section, we focus on the features that are not learned well from the training set due to the zero training error assumption and reconsider the feature matching problem from a gradient perspective. In particular, we split the test set into two group In the gradient space: (1) seen test samples that can be easily recognized will cluster near the origin, (2) while the novel test samples will diversely distribute in the subspace. As the first group of samples have been sufficiently covered by the initiated, in this stage, we focus on finding matching with the latter group. By assuming the prior distribution of gradients follows a Gaussian distribution, finding the K-metroids is naturally a choice to mitigate the gap between mean and variance. K-metroids algorithm breaks the dataset up into groups and attempts to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster (_i.e._, prototype). By selecting the prototypes in the second stage, we implicitly bridge the gap between the selected set and the test set at a latent feature level.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Car (\(\downarrow\))} & \multicolumn{4}{c}{Pedestrian (\(\downarrow\))} & \multicolumn{4}{c}{Cyclist (\(\downarrow\))} & \multicolumn{4}{c}{Average (\(\downarrow\))} \\ \cline{2-13} Method & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline Llal & \(2.61\) & \(5.71\) & \(\mathbf{7.16}\) & \(7.92\) & \(6.80\) & \(5.94\) & \(13.33\) & \(11.60\) & \(11.42\) & \(7.81\) & \(8.04\) & \(8.18\) \\ Coreset & \(4.79\) & \(6.63\) & \(9.53\) & \(16.99\) & \(14.70\) & \(13.72\) & \(7.15\) & \(12.23\) & \(11.14\) & \(9.49\) & \(11.18\) & \(11.47\) \\ Badge & \(2.60\) & \(8.58\) & \(11.94\) & \(12.32\) & \(10.43\) & \(10.93\) & \(4.77\) & \(9.66\) & \(8.66\) & \(6.41\) & \(9.55\) & \(10.51\) \\ CRB & **1.58** & **5.34** & \(8.44\) & **0.09** & **1.87** & **1.09** & **1.92** & **4.50** & **3.22** & **1.05** & **3.18** & **4.25** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance gap (%) between different AL methods and fully supervised backbone when acquiring approximately 1% queried bounding boxes on KITTI. Gaps are calculated by subtracting the performance of a fully supervised backbone from the performance of AL methods.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Car} & \multicolumn{4}{c}{Pedestrian} & \multicolumn{4}{c}{Cyclist} & \multicolumn{4}{c}{Average (\(\downarrow\))} \\ \cline{2-13} Method & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard & Easy & Mod. & Hard \\ \hline Llal & \(89.95\) & \(78.65\) & **75.32** & \(56.34\) & \(49.87\) & \(45.97\) & \(75.55\) & \(60.35\) & \(55.36\) & \(73.94\) & \(62.95\) & \(58.88\) \\ Coreset & \(87.77\) & \(77.73\) & \(72.95\) & \(47.27\) & \(41.97\) & \(38.19\) & \(81.73\) & \(59.72\) & \(55.64\) & \(72.26\) & \(59.81\) & \(55.59\) \\ Badge & \(89.96\) & \(75.78\) & \(70.54\) & \(51.94\) & \(46.24\) & \(40.98\) & \(84.11\) & \(62.29\) & \(58.12\) & \(75.34\) & \(61.44\) & \(56.55\) \\ CrB & **90.98** & **79.02** & \(74.04\) & **64.17** & **54.80** & **50.82** & **86.96** & **67.45** & **63.56** & **80.70** & **67.81** & **62.81** \\ \hline PV-rcnn\({}^{\dagger}\) & \(92.56\) & \(84.36\) & \(82.48\) & \(64.26\) & \(56.67\) & \(51.91\) & \(88.88\) & \(71.95\) & \(66.78\) & \(81.75\) & \(70.99\) & \(67.06\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance comparisons on KITTI _val_ set with different SOTA AL methods when acquiring approximately 1% queried bounding boxes. Results are reported with 3D AP with 40 recall positions. \({}^{\dagger}\) indicates the reported performance of the backbone trained with the full labeled set (100%).
## Appendix B Implementation Details
### Evaluation Metrics.
To fairly evaluate baselines and the proposed method on KITTI dataset (Geiger et al., 2012), we follow the work of (Shi et al., 2020): we utilize Average Precision (AP) for 3D and bird eye view (BEV) detection, and the task difficulty is categorized to Easy, Moderate, and Hard, with a rotated IoU threshold of \(0.7\) for cars and \(0.5\) for pedestrian and cyclists. The results evaluated on the validation split are calculated with \(40\) recall positions. To evaluate on Waymo dataset (Sun et al., 2020), we adopt the officially published evaluation tool for performance comparisons, which utilizes AP and the average precision weighted by heading (APH). The respective IoU thresholds for vehicles, pedestrians, and cyclists are set to 0.7, 0.5, and 0.5. Regarding detection difficulty, the Waymo test set is further divided into two levels. Level 1 (and Level 2) indicates there are more than five inside points (at least one point) in the ground-truth objects.
### Implementation Details of Training
To ensure the reproducibility of the baselines and the proposed approach, we develop a PyTorch-based active 3D detection toolbox (attached in the supplemental material) that implements mainstream AL approaches and can accommodate most of the public benchmark datasets. For fair comparison, all active learning methods are constructed from the Pv-rcnn(Shi et al., 2020) backbone. All experiments are conducted on a GPU cluster with three V100 GPUs. The runtime for an experiment on KITTI and Waymo is around 11 hours and 100 hours, respectively. Note that, training Pv-rcnnon the full set typically requires 40 GPU hours for KITTI and 800 GPU hours for Waymo.
**Parameter Settings**. The batch sizes for training and evaluation are fixed to 6 and 16 on both datasets. The Adam optimizer is adopted with a learning rate initiated as 0.01, and scheduled by one cycle scheduler. The number of Mc-dropout stochastic passes is set to 5 for all methods.
**Active Learning Protocols**. As our work is the first comprehensive study on active 3D detection task, the active training protocol for all AL baselines and the proposed method is empirically defined. For all experiments, we first randomly select \(m\) fully labeled point clouds from the training set as the initial \(\mathcal{D}_{L}\). With the annotated data, the 3D detector is trained with \(E\) epochs, which is then freezed to select \(N_{r}\) candidates from \(\mathcal{D}_{U}\) for label acquisition. We set the \(m\) and \(N_{r}\) to 2.5 3% point clouds (_i.e._, \(N_{r}=m\)=100 for KITTI, \(N_{r}=m=\)400 for Waymo) to trade-off between reliable model training and high computational costs.The aforementioned training and selection steps will alternate for \(R\) rounds. Empirically, we set \(E=30\), \(R=6\) for KITTI, and fix \(E=40\), \(R=5\) for Waymo.
### Implementation Details of Baselines and Crb
In this section, we introduce more implementation details of both baselines and the proposed Crb.
**Crb**. In comparison with baselines as reported in Figure 2, the \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) are empirically set to \(300\), \(200\) for KITTI and \(2,000\) and \(1,200\) for Waymo. The gradient maps used for Rps are extracted from the second convolutional layer in the shared block of Pv-rcnn. Three dropout layers in Pv-rcnn are enabled during the Mc-dropout and the dropout rate is fixed to 0.3 for both datasets. The number of Mc-dropout stochastic passes are set to 5 for all methods. In the Gpcb stage, we measure the KL-divergence between the KDE PDF of the selected set and the uniform prior distribution of the point cloud density for each class. The goal of conducting a greedy search is to find the optimal subset that can achieve the minimum sum of KL divergence for all classes. Considering the high variance of KL divergence across different classes, we unify the scale of KL-divergence to \(\bar{d}_{c}\) by applying the following function,
\[\bar{d}_{c}=\frac{2}{\pi}\arctan\frac{\pi}{2}d_{c},\]
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & mAP Level 1 & mAP Level 2 \\ \hline Crb & \(58.60\) & \(52.65\) \\ FSL & \(66.68\) & \(59.83\) \\ \hline Gap (\(\downarrow\)) & \(-8.08\) & \(-7.18\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparing the performance of detectors with active learning (AL) by Crb and fully supervised learning (FSL) on Waymo _val_ set. Results (mAP \(\%\)) are calculated by Waymo official evaluation metric.
where \(d_{c}\) denotes the KL-divergence for the \(c\)-th class. To this end, the ultimate objective for greedy search is \(\arg\min_{\mathcal{D}_{S}\subset\mathcal{D}_{S_{2}}}\sum_{c\in[C]}\bar{d}_{c}\). The normalized measurement can avoid dominance by any single class.
**Coreset** (Sener and Savarese, 2018). The embeddings extracted for both labeled and unlabeled data are the output from the shared block, with the dimension of \(128\) by \(256\). The Coreset adopts the furthest-first traversal for k-Center clustering strategy, which computes the Euclidean distance between each embedding pair.
**Llal** (Yoo and Kweon, 2019). For implementing the loss prediction module in LLAL, we construct a two-block module that connects to two layers of the Pv-rcnn, which takes multi-level knowledge into consideration for loss prediction. Particularly, each block consists of a convolutional layer with a channel size of \(265\) and a kernel size of \(1\), a batchnorm layer, and a relu activation layer. The outputs are then concatenated and fed to a fully connected layer and map to a loss score. All real loss for each training data point is saved and serves as the ground-truth to train the loss prediction module.
**Badge**. According to (Ash et al., 2020), hypothetical labels for the classifier are determined by the classes with the highest predicted probabilities. The gradient matrix with the dimension \(256\) by \(256\) for each unlabeled point cloud is extracted from the last convolutional layer of the Pv-rcnn's classification head and then fed into the Badge algorithm.
## Appendix C Proof of Theorem 2.1
**Theorem C.1**.: _Let \(\mathcal{H}\) be a hypothesis space of Vapnik-Chervonenkis (VC) dimension \(d\), with \(f\) and \(g\) being the classification and regression branches, respectively. The \(\widehat{\mathcal{D}}_{S}\) and \(\widehat{\mathcal{D}}_{T}\) represent the empirical distribution induced by samples drawn from the acquired subset \(\mathcal{D}_{S}\) and the test set \(\mathcal{D}_{T}\), and \(\ell\) the loss function bounded by \(\mathcal{J}\). It is proven that \(\forall\;\delta\in(0,1)\), and \(\forall f,g\in\mathcal{H}\), with probability at least \(1-\delta\) the following inequality holds,_
\[\mathfrak{R}_{T}[\ell(f,g;\mathbf{w})]\leq\mathfrak{R}_{S}[\ell(f,g;\mathbf{w})]+ \frac{1}{2}disc(\widehat{\mathcal{D}}_{S},\widehat{\mathcal{D}}_{T})+\lambda^ {*}+\text{const},\]
_where \(\text{const}=3\mathcal{J}(\sqrt{\frac{\log\frac{d}{2}}{2N_{T}}}+\sqrt{\frac{ \log\frac{d}{2}}{2N_{t}}})+\sqrt{\frac{2d\log(eN_{f}/d)}{N_{r}}}+\sqrt{\frac{ 2d\log(eN_{t}/d)}{N_{t}}}\)._
_Notably, \(\lambda^{*}=\mathfrak{R}_{T}[\ell(f^{*},g^{*};\mathbf{w}^{*})]+\mathfrak{R}_{S}[ \ell(f^{*},g^{*};\mathbf{w}^{*})]\) denotes the joint risk of the optimal hypothesis \(f^{*}\) and \(g^{*}\), with \(\mathbf{w}^{*}\) being the model weights. \(N_{r}\) and \(N_{t}\) indicate the number of samples in the \(\mathcal{D}_{S}\) and \(\mathcal{D}_{T}\). The proof can be found in the supplementary material._
Proof.: For brevity, we omit the model weights \(\mathbf{w}\) in the following proof. Based on the triangle inequality of \(\ell\) and the definition of the discrepancy distance \(disc(\cdot,\cdot)\), the following inequality holds,
\[\mathfrak{R}_{T}[\ell(f,g)] \leq\mathfrak{R}_{T}[\ell(f^{*},g^{*})]+\frac{1}{2}\mathfrak{R}_ {T}[\ell(f,f^{*})]+\frac{1}{2}\mathfrak{R}_{T}[\ell(g,g^{*})]\] \[\leq\mathfrak{R}_{T}[\ell(f^{*},g^{*})]+\mathfrak{R}_{S}[\ell(f^ {*},g^{*})]+\frac{1}{2}|\mathfrak{R}_{T}[\ell(f,f^{*})]-\mathfrak{R}_{S}[\ell (f,f^{*})]|\] \[+\frac{1}{2}|\mathfrak{R}_{T}[\ell(g,g^{*})]-\mathfrak{R}_{S}[ \ell(g,g^{*})]|\] \[\leq\mathfrak{R}_{T}[\ell(f^{*},g^{*})]+\mathfrak{R}_{S}[\ell(f ^{*},g^{*})]+\frac{1}{2}disc(\mathcal{D}_{S},\mathcal{D}_{T})\] \[\leq\mathfrak{R}_{T}[\ell(f^{*},g^{*})]+\mathfrak{R}_{S}[\ell(f,g)]+\mathfrak{R}_{S}[\ell(f^{*},g^{*})]+\frac{1}{2}disc(\mathcal{D}_{S}, \mathcal{D}_{T}).\]
By defining the joint risk of the optimal hypothesis \(\lambda^{*}=\mathfrak{R}_{T}[\ell(f^{*},g^{*})]+\mathfrak{R}_{S}[\ell(f^{*},g^{*})]\) and the Corollary 6 in (Mansour et al., 2009), we have,
\[\mathfrak{R}_{T}[\ell(f,g)] \leq\mathfrak{R}_{S}[\ell(f,g)]+\frac{1}{2}disc(\mathcal{D}_{S}, \mathcal{D}_{T})+\lambda^{*}\] \[\leq\mathfrak{R}_{S}[\ell(f,g)]+\frac{1}{2}disc(\widehat{ \mathcal{D}}_{S},\widehat{\mathcal{D}}_{T})+\lambda^{*}+4q(\text{Rad}_{S}( \mathcal{H})+\text{Rad}_{T}(\mathcal{H}))\] \[+3\mathcal{J}(\sqrt{\frac{\log\frac{4}{3}}{2N_{r}}}+\sqrt{\frac{ \log\frac{4}{3}}{2N_{t}}}),\]
where \(N_{r}\) and \(N_{t}\) indicate the sample size of the selected set and the test set, respectively. \(q\) stands for the function is \(q\)-Lipschitz. As our regression loss, \(\ell^{reg}\) is the smooth-L1 loss function and bounded by \(\mathcal{J}\), \(q\) equals \(1\) in our case. \(\text{Rad}_{S}(\mathcal{H})\) and \(\text{Rad}_{T}(\mathcal{H})\) indicates the empirical Rademacher complexity of a hypothesis set \(\mathcal{H}\) whose VC dimension is \(d\) over the selected set and the test set.
Considering the Rademacher complexity is bounded by:
\[\text{Rad}_{S}(\mathcal{H})\leq\sqrt{\frac{2d\log(eN_{r}/d)}{N_{r}}},\quad \text{Rad}_{T}(\mathcal{H})\leq\sqrt{\frac{2d\log(eN_{t}/d)}{N_{t}}},\]
then we can rewrite the inequality as,
\[\mathfrak{R}_{T}[\ell(f,g)]\leq\mathfrak{R}_{S}[\ell(f,g)]+\frac{1}{2}disc( \widehat{\mathcal{D}}_{S},\widehat{\mathcal{D}}_{T})+\lambda^{*}+const,\]
where \(const=3\mathcal{J}(\sqrt{\frac{\log\frac{1}{3}}{2N_{r}}}+\sqrt{\frac{\log \frac{4}{3}}{2N_{t}}})+\sqrt{\frac{2d\log(eN_{r}/d)}{N_{r}}}+\sqrt{\frac{2d \log(eN_{t}/d)}{N_{t}}}\).
## Appendix D Algorithm Description
To thoroughly describe the procedure of active 3D object detection by the proposed CRB, we present the Algorithm 1 in detail. Firstly, the 3D detector consisting of an encoder \(\{e(\cdot)\), a classifier \(f(\cdot)\), and regression heads \(g(\cdot)\}\) is pre-trained with a small set \(\mathcal{D}_{L}\) of labeled point clouds. During the stage 1: Cls, the pre-trained 3D detector infers all samples from the unlabeled pool \(\mathcal{D}_{U}\) and obtains the predicted bounding boxes \(\widehat{\mathcal{B}}\), predicted box labels \(\hat{Y}\), and calculated box point densities \(\phi\) for each point cloud. Also, the hypothetical labels \(\bar{B}\) through stochastic monte-carlo sampling are computed by Equation (5) during inference. Based on the criterion of maximizing the label entropy, the set \(\mathcal{D}^{*}_{S_{1}}\) containing \(\mathcal{K}_{1}\) candidates are formed via Equations (2), (3), (4). In stage 2, we set the model to the training mode and allow the gradient back-propagation to retrieve gradients for each point cloud. Yet, the model weights will be fixed and not updated. Rps selects the set \(\mathcal{D}^{*}_{S_{2}}\) of size \(\mathcal{K}_{2}\) from the previous candidate set \(\mathcal{D}^{*}_{S_{1}}\) based on the Equation (6) and (7). In stage 3, Gbps selects the set \(\mathcal{D}^{*}_{S}\) of size \(N_{r}\) from set \(\mathcal{D}^{*}_{S_{2}}\), predicted boxes \(\widehat{\mathcal{B}}\) and box point densities \(\phi\) via Equation (8), (9). The final set \(\mathcal{D}^{*}_{S}\) at this round is then annotated by an oracle \(\widehat{\Omega}\) and merged with the selected set in the previous round as the training data. Notably, the selected set at the 0-th round is \(\mathcal{D}_{L}\). When the training data is determined, we re-train the 3D detector with the merged selected set until the model is converged. We iterate the above process starting with model inference for \(R\) rounds and add \(N_{r}\) queried samples to the selected set \(\mathcal{D}_{S}\) for each round.
## Appendix E More Experimental Results on KITTI
### AL performance comparisons on Easy difficulty level
In addition to the Moderate and Hard difficulties reported in the body text, we provide the additional quantitative analysis _w.r.t._ the Easy mode. Figure 5 depicts the mAP(%) variation of the baselines against the proposed Crb with an increasing number of selected bounding boxes. The solid lines indicate the mean value from three running trials and the standard deviation are shown in the shaded area. The results indicate that with increasing annotation cost, Crb consistently achieves the highest mAP and outperforms the state-of-the-art active learning approaches on both 3D and BEV views. Note that Crb with only 1k boxes selected for annotation reaches the comparable performance of Rand that selects around 3k boxes. Other AL baselines share the same trend as the ones under the difficulties of Moderate and Hard (reported in Figure 2 of the main paper).
### AL performance comparisons for each class
To investigate the effectiveness of AL strategies on detecting specific classes, we plot the results of Cyclist and Pedestrian at all difficulty levels in Figure 6 (3D AP) and Figure 7 (BEV AP). We mainly compare three aspects: performance, annotation cost and error variance. 1)Performance: the plots in Figure 6 and Figure 7 show that the proposed Crb outperforms all state-of-the-art AL methods by a noticeable margin, for all settings of difficulty, classes and views, except at easy cyclist. This evidences that our proposed AL approach explores samples with more conceptual semantics covering test sets so that the detector tends to perform better on more challenging samples. 2) Annotation cost: all the plots consistently demonstrate that the proposed Crb reaches comparable performance
Figure 5: 3D and BEV mAP (%) of Crb and AL baselines on the KITTI _val_ split at the Easy level.
while requiring very few (\(\sim\)1/3) annotation costs as baselines, except Entropy. Entropy takes the minimal annotation cost, yet its result is inferior, especially for difficult classes like Cyclist. 3) Variance: we observe that AP variance of Crb is lower than all baselines, which shows that our method is less sensitive to randomness and more stable to produce expected results.
### AL performance comparisons for each active selection round
Figure 8 compares the performance variation of the AL baselines against the proposed Crb with the increasing percentage of queried point clouds (from 2.7% to 16.2%). The reported performance is mAP scores (%) \(\pm\) the standard deviation of three trials for both 3D view (top row) and BEV view (bottom row) and all difficulty levels. We clearly observe that our method Crb consistently outperforms the state-of-the-art results, irrespective of percentage of annotated point clouds and difficulty settings. Surprisingly, when the annotation costs reaches 16.2%, Rand strategy outperforms all the baselines at the Moderate and Hard level. This implicitly evidences that existing uncertainty and diversity-based AL strategies fail to select samples that are aligned with test cases.
Figure 6: Detection results of different classes on the KITTI _val_ set (3D view) with an increasing number of queried bounding boxes.
Figure 7: Detection results of different classes on the KITTI _val_ set (BEV view) with increasing number of queried bounding boxes.
## Appendix F More Experimental Results on Waymo
To explore the performance for different classes on the Waymo dataset, we plot the AP(%) variation of Cyclist and Pedestrian yielded by the baselines and Crb with increasing annotated bounding boxes in Figure 9. We present the results at two levels of difficulty officially defined by Waymo. Level 1 (and Level 2) indicates there are more than five inside points (at least one point) of the ground-truth objects. As can be observed by the AP curves in the plots, Crb achieves the superior recognition accuracy when the annotation cost comes to \(\sim\) 45k bounding boxes. Specifically, the AP values of Crb are boosted by the largest margin (3.1% on Level 2 Cyclist and 1.6% on Level 2 Pedestrian) over the best performing baseline (Rand) that takes extra cost of 5k bounding boxes than ours. Surprisingly, note the results on the class of Pedestrian, the AP curves of most baselines except Entropy and Llal are bounded by Rand. The AP curves of Entropy and Llal are bounded by Crb with the increasing cost to 15k \(\sim\) 20k bounding boxes. This confirms the Crb's superiority over compared AL baselines. Besides, the boosted margin achieved by Crb set for Level 2 Pedestrian is larger than set for Level 1 Pedestrian. This indicates that the samples selected by Crb matches well with the data at the time, covering more diverse samples that span different difficulties.
## Appendix G Additional Qualitative Analysis
To intuitively demonstrate the benefits of our proposed active 3D detection strategy, Figure 10 visualizes that the 3D detection results produced by **Rand** (bottom left) and **Crb** selection (bottom right) from the corresponding image (upper row). Both 3D detectors are trained under the budget of 1K annotated bounding boxes. False positives and corrected predictions are indicated with red and green boxes. It is observed that, under the same condition, Crb produces more accurate and more confident predictions than Rand. Specifically, our Crb yields accurate predictions for multiple
Figure 8: Results on KITTI datasets with an increasing percentage of queried point clouds.
Figure 9: Results of Crb and baselines on the Waymo _val_ split for different classes at Level 2.
pedestrians on the right sidewalk, while Rand fails. Besides, note the car parked on the left that is highlighted in the orange box in Figure 10, the detector trained with Rand produces a significantly lower confidence score (\(0.62\)) compared to our approach (\(0.95\)). This validates that the point clouds selected by Crb are aligned more tightly with the test samples.
## Appendix H Additional Results for Parameter Sensitivity Analysis
**Sensitivity to Prototype Selection.** To further analyze the sensitivity of performance to different prototype selection approaches, _i.e._, Gmm, K-means, and K-means++, we show more results on BEV views in Figure 11 (right). We again run two trials for each prototype selection method and plot the mean and the variance bars. Note that there is very little difference (\(1.65\%\) in the last round) in the mAP(%) of our approach when using different prototype selection methods. This evidences that the more performance gains achieved by Crb than existing baselines do not depend on choosing the prototype selection method.
**Sensitivity to Bandwidth \(h\).** Figure 11 shows additional results w.r.t the BEV views of Crb with the bandwidth \(h\) varying in \(\{3,5,7,9\}\). Observing the trends of four curves, Crb with the bandwidth of all values yields consistent results within the 1.7% variation. This demonstrates that the Crb is insensitive to different values set for bandwidth and can produce similar mAP(%) on BEV views.
Figure 11: Performance comparison on KITTI _val_ set with varying KDE bandwidth \(h\) (left) and prototype selection approaches (right) with increasing queried bounding boxes.
Figure 10: Another case study of active 3D detection performance of **Rand** (bottom left) and **CRb** (bottom right) under the budge of 1,000 annotated bounding boxes. False positive (corrected predictions) are highlighted in red (green) boxes. The orange box denotes the detection with low confidence.
## Appendix I Related Work
**Generic Active Learning**. For a comprehensive review of classic active learning methods and their applications, we refer readers to (Ren et al., 2021). Most active learning approaches were tailored for image classification task, where the _uncertainty_(Wang & Shang, 2014;?. Lewis & Catlett, 1994; Joshi et al., 2009; Roth & Small, 2006; Parvaneh et al., 2022; Du et al., 2021; Kim et al., 2021; Bhatnagar et al., 2021) and _diversity_(Sener & Savarese, 2018; Elhamifar et al., 2013; Guo, 2010; Yang et al., 2015; Nguyen & Neudercus, 2004; Hasan & Roy-Chowdhury, 2015; Aodha et al., 2014) of samples are measured as the acquisition criteria. The hybrid works (Kim et al., 2021; Citrosky et al., 2021; Ash et al., 2020; MacKay, 1992; Liu et al., 2021; Kirsch et al., 2019; Houlsby et al., 2011) combine both paradigms such as by measuring uncertainty as to the gradient magnitude (Ash et al., 2020) at the final layer of neural networks and selecting gradients that span a diverse set of directions. In addition to the above two mainstream methods, (Settles et al., 2007; Roy & McCallum, 2001b; Freytag et al., 2014; Yoo & Kweon, 2019) estimate the expected model changes or predicted losses as the sample importance.
**Active Learning for 2D Detection**. Lately, the attention of AL has shifted from image classification to the task of object detection (Siddigui et al., 2020; Li & Yin, 2020). Early work (Roy et al., 2018) exploits the detection inconsistency of outputs among different convolution layers and leverages the query by committee approach to select informative samples. Concurrent work (Kao et al., 2018) introduces the notion of localization tightness as the regression uncertainty, which is calculated by the overlapping area between region proposals and the final predictions of bounding boxes. Other uncertainty-based methods attempt to aggregate pixel-level scores for each image (Aghdam et al., 2019), reformulate detectors by adding Bayesian inference to estimate the uncertainty (Harakeh et al., 2020) or replace conventional detection head with the Gaussian mixture model to compute aleatoric and epistemic uncertainty (Choi et al., 2021). A hybrid method (Wu et al., 2022) considers image-level uncertainty calculated by entropy and instance-level diversity measured by the similarity to the prototypes. Lately, AL technique is leveraged for transfer learning by selecting a few uncertain labeled source bounding boxes with high transferability to the target domain, where the transferability is defined by domain discriminators (Tang et al., 2021b; Al-Saffar et al., 2021). Inspired by neural architecture searching, Tang et al. (2021a) adopted the'swap-expand' strategy to seek a suitable neural architecture including depth, resolution, and receptive fields at each active selection round. Recently, some works augment the weakly-supervised object detection (WSOD) with an active learning scheme. In WSOD, only image-level category labels are available during training. Some conventional AL methods such as predicted probability, probability margin are explored in (Wang et al., 2022), while in (Vo et al., 2022), "box-in-box" is introduced to select images where two predicted boxes belong to the same category and the small one is "contained" in the larger one. Nevertheless, it is not trivial to adapt all existing AL approaches for 2D detection as the ensemble learning and network modification leads to more model parameters to learn, which could be hardly affordable for 3D tasks.
**Active Learning for 3D Detection**. Active learning for 3D object detection has been relatively under-explored than other tasks, potentially due to its large-scale nature. Most existing works (Feng et al., 2019; Schmidt et al., 2020) simply apply the off-the-shelf generic AL strategies and use hand-crafted heuristics including Shannon entropy (Wang & Shang, 2014), ensemble (Beluch et al., 2018), localization tightness (Kao et al., 2018) and Mc-dropout (Gal & Ghahramani, 2016) for 3D detection learning. However, the abovementioned solutions base on the cost of labelling point clouds rather than the number of 3D bounding boxes, which inherently being biased to the point clouds containing more objects. However, in our work, the proposed Crb greedily search for the unique point clouds while maintaining the same marginal distribution for generalization, which implicitly quires objects to annotate without repetition and save labeling costs.
**Active Learning for 3D Semantic Segmentation**. The adoption of active learning techniques has successfully reduced the significant burden of point-by-point human labeling in large-scale point cloud datasets. Super-point (Shi et al., 2021) is introduced to represent a spectral clustering containing points which are most likely belonging to the same category, then only super-points with high score are labeled at each round. An improved work Shao et al. (2022) further encoded the super-points with a graph neural network, where the edges denote distance between super-points, and then projects the super-point features into the diversity space to select the most representative super-points. Another streaming of work (Wu et al., 2021) is to obtain point labels for uncertain
and diverse regions to prevent the high cost of labeling the entire point cloud. Although semantic segmentation and object detection are different vision tasks, both can benefit from active learning to substantially alleviate the manual labelling cost.
**Connections to Semi-supervised Active Learning**. Aiming at unifying unlabeled sample selection and model training, the concept of semi-supervised active learning (Drugman et al., 2016; Rhee et al., 2017; Sinha et al., 2019; Gao et al., 2020; Liu et al., 2021; Kim et al., 2021; Zhang and Plank, 2021; Guo et al., 2021; Caramalau et al., 2021; Citovsky et al., 2021; Elezi et al., 2022; Gudovskiy et al., 2020) has been raised. (Drugman et al., 2016) combines the semi-supervised learning (SSL) and active learning (AL) for speech understanding that leverages the confidence score obtained from the posterior probabilities of decoded texts. (Sener and Savarese, 2018) incorporated a Ladder network for SSL during AL cycles, while the performance gains are marginal compared to the supervised counterpart. (Sinha et al., 2019) trained a variational adversarial active learning (Vaal) model with both labeled and unlabeled data points, where the discriminator is able to estimate how representative each sample is from the pool. (Elezi et al., 2022) proposed a combined strategy for training 2D object detection, which queries samples of high uncertainty and low robustness for supervised learning and takes full advantage of easy samples via auto-labeling. As our work is under the umbrella of the pool-based active learning, accessible unlabeled data are not used for model training in our setting, hereby the semi-supervised active learning algorithms were not considered in experimental comparisons.
|
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.